Today we are going to go over SciPy as the first set of things. We've already talked a little bit about SciPy, and as we discussed earlier, there are multiple layers to this. We have Python as our language, and NumPy adds the data structures and the array calculation capabilities to Python; we went over that pretty heavily yesterday. SciPy sits on top of that and starts adding domain-specific calculation tools, that is, scientific algorithms. So as you can see here, there's quite a large list of packages available. It also has a very vibrant community, and it's available under a BSD license, so you can use it in pretty much any way you choose. The community has now grown to 34 people with SVN commit privileges (SVN is the source control tool that we use). So there are a lot of people with check-in privileges. That's really nice, because it helps make sure the community is going to be around for a long time; there are a lot of people tending the garden. There's also a very active mailing list. Go to scipy.org if you have questions; the community is really nice, a pleasant place to be. Between the NumPy list and the SciPy list there are probably 30 or so messages a day, so it's very active and a lot of people will answer your questions. As far as packages go, there's the special function package. That provides wrappers for a couple of libraries off of Netlib called Cephes and, I believe, AMOS. Those are functions like Bessel functions and Hankel functions and Airy functions, things like that. There are probably 100 or so special functions covered there. It's quite a large library.
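To make that concrete, here is a minimal sketch of calling a couple of those special functions through scipy.special. The function names used here (j0, jv, airy) are the modern ones; the exact names on the slide may differ.

```python
import numpy as np
from scipy import special

# Zeroth-order Bessel function of the first kind, J0, at a few points.
x = np.array([0.0, 1.0, 2.404825557695773])  # last value is near the first zero of J0
print(special.j0(x))

# The same values through the general-order entry point.
print(special.jv(0, x))

# Airy functions come back as a 4-tuple: Ai, Ai', Bi, Bi'.
ai, aip, bi, bip = special.airy(0.0)
```

All of these accept arrays directly, which matters for the vectorization discussion later in the session.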
Signal processing: those are tools for building Butterworth filters and Chebyshev filters, and also for applying them, doing convolutions, including multidimensional convolutions with images, as well as designing filters with the Remez algorithm and things like that. I think there are also tools for doing interpolation and that sort of thing in the signal processing library. For Fourier transforms, we sit on top of at least three different libraries, depending on which ones you want to compile into SciPy. I think DJBFFT is one of them, which is a very fast implementation, especially for radix-2 FFTs. There's also FFTPACK, a more standard library off of Netlib, written entirely in Fortran, that provides the basic FFT support. And then there's FFTW, the Fastest Fourier Transform in the West, which can be compiled in as well. However, that one's GPL'd, so it's never compiled in by default; if you want to use that library, you need to turn on the support. I think it's faster than DJBFFT in some cases, but not all, so unless you have a specific problem where it's much faster on your problem set, you really don't need to compile it in; the default settings work just fine. Optimization: there are a number of libraries wrapped here. There are tools for doing unconstrained optimization, both linear and nonlinear, and not only one-dimensional but multidimensional. We also have a set of tools, contributed within the last two or three years, that do constrained optimization: there's the COBYLA tool set, and there's a truncated-Newton tool set, TNC. So I think there are probably three options in that area for you to try. And as you probably already know, in optimization you may try one set of algorithms on one problem and it works really well, and then on the next problem that one doesn't turn out to be the best.
You have to try another one. So having multiple sets of optimization algorithms is worthwhile. General plotting used to be listed; that needs to be removed from the slide now. We've decided as a community that SciPy shouldn't be in the business of plotting anymore. Plotting is now handled by Matplotlib, and we use Chaco quite heavily in the plotting area as well; SciPy should really focus on the algorithm side of things. So that's what's happened: SciPy is much more a set of tools for just doing the calculations. Numerical integration: we had a problem yesterday where we did some integration during the tutorial session. You'll be glad to know there's actually an algorithm there, so you don't have to write your own every time. There are some nice tools for doing numerical quadrature and things like that, using some more advanced techniques for integration. Linear algebra is a workhorse for a lot of different algorithms you may have. The linear algebra library sits on top of LAPACK. It has a set of tools for doing different kinds of decompositions: singular value decompositions, eigenvalue decompositions, factorizations. I think there's a QR factorization; there's certainly an LU factorization tool. So it's fairly complete, and it's also very fast. It sits on top of any LAPACK or BLAS, so you can compile in whichever one is on your platform if you have vendor-specific ones. The ones we typically provide sit on top of a library called ATLAS, which is a very fast linear algebra library built by Clint Whaley, now at the University of Texas at San Antonio. The nice thing is that somebody else is always improving that library, always tuning it to new architectures, which makes sure SciPy stays as fast as it can even as new algorithms or new platforms come out. We talked a little bit about scipy.io.
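As a quick sketch of that linear algebra layer before moving on: a minimal example, assuming the modern scipy.linalg names solve, lu_factor, and lu_solve, with a small made-up system.

```python
import numpy as np
from scipy import linalg

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# One-shot solve of Ax = b; an LU factorization happens under the covers.
x = linalg.solve(A, b)

# Factor once, then reuse the factorization for many right-hand sides.
lu, piv = linalg.lu_factor(A)
x2 = linalg.lu_solve((lu, piv), b)  # same answer, cheap to repeat
```

The factor-once, solve-many pattern is the reason the two-step interface exists at all, as we'll see again later.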
We'll cover that a little today. It provides tools for reading and writing different types of file formats. There are libraries there to help you read in ASCII files that may be columnar in format, when you just want to read in that data. There's also access to specific formats, like reading and writing Fortran data, and reading and writing things like Matrix Market matrices. Matrix Market is a place with a library of different kinds of matrices; you can go on the web and get them, and they'll provide you matrices of different shapes and formats so you can test your algorithms against their set. Statistics: there's a wide range of distributions available, both continuous and discrete; we'll go over those. Distributed computing: there is scipy.cow, but we've actually moved it out to pasture, so to speak; it's now in the sandbox instead of the main module. Instead I'll recommend another tool: IPython has a new branch that's probably very close to completion by now. I think it's called the chainsaw branch, because they just started cutting the code up in major ways. What it provides is the ability to do remote execution, so you can have multiple IPython interpreters running on remote machines and a central server that sends them commands. That would be the one I'd recommend, but it's not quite finished yet, so stay tuned on that one. Fast execution: Weave is a tool that allows you to integrate C++ code directly into your Python code, passing variables across from Python into C or C++ and back into Python. It's a fairly nice tool, especially for prototyping, and we'll go over it. Clustering: there are clustering algorithms, but I'm not sure there's a whole lot more than that there. That package is really a home for adding more information-theory-type algorithms along with the clustering algorithms.
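For what is there, a minimal sketch of the k-means tools in scipy.cluster.vq. The function names whiten, kmeans, and vq are the current ones, and the data is made up for illustration.

```python
import numpy as np
from scipy.cluster.vq import whiten, kmeans, vq

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs of points.
pts = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                 rng.normal(3.0, 0.3, (50, 2))])

w = whiten(pts)                        # rescale each feature to unit variance
centroids, distortion = kmeans(w, 2)   # find two cluster centers
labels, dists = vq(w, centroids)       # assign every point to a center
```

whiten is worth calling before kmeans whenever your features are on different scales, since k-means is distance-based.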
And then there's a library for sparse matrices. The input-output module: I guess there are a couple of things to note here. Before you start using scipy.io, there may be a couple of other libraries to look at. Konrad Hinsen has a library called ScientificPython, and it has a netCDF tool in it, which is quite heavily used in a number of communities for storing out large data sets. There's another tool called PyTables, and PyTables is really cool. It sits on top of HDF, a file format that is heavily used for large data sets; it's a hierarchical data format, which is what HDF stands for. PyTables provides a beautiful Python interface on top of that library so you can read and write data in those formats fairly seamlessly. It interfaces numeric arrays into the library very well, so it's very easy to read and write numeric arrays, and it's extremely fast for querying your data back out. It performs much better on many problem sets than storing your data in a SQL-type database, so you ought to look at that as an option. On the other hand, if you just want to do some quick and dirty work and read some things in, scipy provides the io library, which gives you tools for reading and writing text arrays. You'll notice down there it also has the ability to store out MATLAB files and that sort of thing. I guess the demo directory is not on the tutorial machines; we need to get it on there. But there's a speed-of-light file there that demos this, so you ought to take a look at that. We'll also just show a quick example. I always have to look at this one or two times before I understand exactly what the columns and the lines arguments are doing here. But what this gives you the ability to do: imagine you have the test data set up here, where we have students' test one, test two, and then, after a blank space, Jane, John, and Jim.
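scipy.io.read_array has long since been removed from SciPy, but numpy.loadtxt covers the same ground. Here's a hedged sketch of reading a file shaped like the one on the slide (the student names and scores are made up), with skiprows playing the role of the lines offset and usecols the role of columns:

```python
import io
import numpy as np

# A stand-in for the slide's data file: a header row, then one row per student.
text = """Student  test1  test2  test3  test4
Jane  98.3  94.2  88.4  90.1
John  82.1  71.2  81.0  75.0
Jim   89.2  88.9  90.5  85.0
"""

# Skip the header line and the name column, keep the numeric columns.
scores = np.loadtxt(io.StringIO(text), skiprows=1, usecols=(1, 2, 3, 4))
print(scores.shape)   # (3, 4)

# Every other column, every other row, in the spirit of the stride example.
some = np.loadtxt(io.StringIO(text), skiprows=1, usecols=(1, 3))[::2]
```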
And so what we're doing here is calling the function read_array, handing it the text file, and then telling it which columns we want to read. Let's see, this says read from column one to the end, every second column... oh, that's what this one says right here. So here we're going to read in all the columns starting from one and going to the very end, and then we want to start on line three and go to the very end of the file. So that's going to read out this section here. Now, if we want to do something more interesting, where we read out just some of the values, we can specify a different set of offsets to say: I want to read starting at column one, and the negative two here specifies to read every other value along the columns; for the lines, start on line three and read every other line. So you can choose to pick out the values. What I'd encourage you to do is actually look at the documentation when we get into the tutorial session. There are a lot of different options for how you can set the columns and lines flags, and there are two or three other flags you can set to read in your data. But it's a fairly nice feature in that, instead of you having to write a specific tool to read in your specific format, you can read your data in with a more generic tool. Next, there's a function object called poly1d that's available. It's really a representation of a polynomial. You can create one of these objects by just handing in a list of coefficients; here we've handed in one, negative two, and four. Once you've done that and you print p, you'll notice that it prints out in a fairly nice format, right? It does a pretty-printing job so that it's human readable. So here x squared has a coefficient of one, and the x term has a coefficient of negative two.
And then we're adding four on the end. Those are just the coefficients of the polynomial, and here we have a quadratic. You can also multiply polynomials, or do other math on these polynomial objects. Here we've cubed it and then multiplied it by three minus two times itself again, and now we have a much larger expression. This gives you tools to manipulate those polynomials in a very easy form. There are also tools for calculating the derivatives and the integrals. If you're integrating a polynomial, you need to specify the constant of integration; that's what the k value is in this example down here. And finally there's a root finder, so you can ask for the roots of the equation as well as just getting back the coefficients. It's a very easy little tool for working with polynomials, and I guess there's one example, the data fitting example, where we'll use this library. Here's an example of using it. Imagine we have our poly1d, and we've specified some coefficients here. We can print it out. Now we're going to specify a range for x, going from negative four to one, stepping by values of 0.05, and now we're going to plot that. So we have the blue curve we plotted here on the plot; we'll set hold to true, then ask for the roots, and then evaluate our polynomial again at those roots. So here are the roots that we have. One of the things you have to be careful about occasionally when plotting these values: when you ask for the roots, you'll end up with complex values. Even though these are real roots, they may end up with a complex part that's going to be 1e-14 or something, because the root finding algorithm ended up finding that value for the complex part. That's just the nature of doing floating point computations.
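The poly1d session above looks roughly like this (a sketch; the coefficients are the ones from the slide, and the quadratic they define happens to have genuinely complex roots):

```python
import numpy as np

p = np.poly1d([1, -2, 4])   # x**2 - 2x + 4
print(p)                    # pretty-printed, human-readable form

q = p**3 * (3 - 2 * p)      # polynomial arithmetic just works
dp = p.deriv()              # derivative: 2x - 2
ip = p.integ(k=0)           # antiderivative, constant of integration k=0

r = p.roots                 # for this quadratic, a genuine complex pair
# For a polynomial with real roots, r can still carry a ~1e-14 imaginary
# part from the numerics, so taking r.real before plotting is a safe habit.
```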
As such, when you do your plot it's nice to go ahead and ask for the real part, even if you think you may not have to, especially if you're making a library, because a lot of times people will want only the real part of what they're doing. Of course, if you think you have imaginary roots, you can ask for the imaginary part as well. Here we've just thrown away that part, because we know we have real roots, and plotted the values. And it looks pretty good: these guys are indeed on the zeros, so it found the right answer. FFTs. There's a large set of functions that deal with manipulating data for FFTs. So here we show a little bit of this. We create a signal x based on, let's see, calling the fftfreq function; I'm actually not familiar with that piece, but down here, once we've gotten x, we take the FFT of x, then calculate what the exact value should be, and then show the values here. Now, fftshift: when you take an FFT, a lot of times you end up going from negative pi to pi, so you have a curve where your peaks are on the ends. But a lot of times when you plot things you actually want to see the peaks in the middle, and fftshift will rearrange your data so you have the peak in the middle and can look at it there. And now we've plotted the real value of our transform versus the exact value, and indeed we see that they match up fairly well. Linear algebra. There's a large set of tools here for doing different kinds of solvers. All of this, as I mentioned, wraps the BLAS libraries. There are actually two BLAS interfaces for some of the functions: some have what's called cblas and some have fblas, and those stand for the C routines and the Fortran routines.
And the difference between the two is that the Fortran routines expect things in column-major order and the C routines in row-major order. There's also linalg.flapack and linalg.clapack; the clapack interface is not as complete as the flapack one. The number of times I've called these underlying libraries directly is very small; only when we were wrapping and creating them did we use them much. Really, there's a second layer of functions sitting over the top of these, listed down here: inv, solve, det, norm, lstsq. These all sit on top of and call the underlying LAPACK routines, so you don't have to do the grunt work of understanding their calling conventions. Take solve: it most likely uses an LU decomposition, doing the LU factorization and then the back substitution, if you're familiar with the linear algebra process for solving a linear system; that's a very common way to do it. But solve wraps it up and makes it much easier to call. There are a lot of decomposition methods here; I've listed those out. Eigenvalue decomposition, LU, SVD, these are all very common tools, and there's also a set of tools for doing matrix operations on matrix objects, which we talked about a little earlier. Yesterday we talked about creating matrix objects, and when you took the exponential of a matrix object it actually did the matrix exponential, so it's probably calling into this library right here when it does that. Actually, it's worth mentioning that NumPy comes with a baby version of the LAPACK libraries that is much smaller and much less complete than this. It's all in Fortran or C, I can't remember which, but it's compiled in to give you ease of use, right?
I mean, when you get NumPy you also get that set, and that's probably what provides the tools for the matrix exponentials and the things it uses. However, that library is not nearly as fast as this one. I think you can actually link it with ATLAS, but it doesn't come that way when you download it from the site, whereas the versions of SciPy that we distribute are linked against ATLAS. Let's look at using the linear algebra module a little. Here's an example of LU factorization. LU factorization is just the process of factoring a matrix into a lower triangular part and an upper triangular part. Here we've created a matrix, and when we call lu_factor, that's actually calling getrf, a LAPACK routine, under the covers. It returns to you LU and also a set of pivots. A lot of times in linear algebra routines you have to keep a pivot table of how you've moved your rows around during the process of solving the matrix; instead of actually taking the time to rearrange all those rows, which could mean a lot of moving memory around, we just return the same thing the Fortran library does: the factored matrix plus this pivot table. These two can then be used together; here we pass them in as a single argument, a tuple, into lu_solve, along with a right-hand-side array. We're solving an Ax = b type problem, and here's our x down here: A times x is equal to this b array. So we've solved the system, and it provides a pretty easy interface for doing that sort of thing. The time when you're going to want to use this two-step factorization process is when you have a lot of right-hand sides: you want to factor your matrix once and then have a lot of different b's that you're going to try out. The slow part of this
operation is the LU factorization; the lu_solve, the back substitution that occurs, is very quick. So you come up here and factor your matrix once, and then you can call this a thousand times and it's very, very fast. That's a typical use. There's also doing eigenvalues, asking for the eigenvalues and eigenvectors of a matrix. We have the same matrix here, and we can call the eig function, and that's going to return all the eigenvalues and all the eigenvectors for that matrix. Here we've listed out the eigenvalues, then asked for the first eigenvector. There's a linalg.norm method that calculates the norm of an array or a vector, and the norm of an eigenvector ought to always be one; luckily it's right here, we have that one down there. We've already talked about the matrix object a little, and it uses a lot of these capabilities that we see listed here. This one does the LU decomposition and back substitution in one step for you, and these are providing the same answers as each other, and they're also providing the same answer here. So there are a lot of different ways to do the same thing; you just have to figure out which one, and that's a good thing, because for different problems, if you're solving something just once, you want one function to get it done; on the other hand, if you need to do something a thousand times, you'd like to do the one slow part once and then the fast part a thousand times, and this lets you break it up into the two steps. The whole library is organized that way. The library also has special functions; there's a list there of the different functions available. The ones that seem to get used most heavily are things like the Hankel and the Bessel functions, and the Airy functions are there too. Here we're looking at a Bessel function of the zeroth order and plotting that out. So we can call this: we created our x array, stepping from
0 to 100 by 0.1, we just call the Bessel function with our array, this returns j0 to us, and then we can plot it. An Airy function example is the same sort of thing; here we're just looking at the Airy functions. Yesterday we worked with vectors and learned how to think about things in a vectorized way. It's really hard to get out of the mentality of doing for loops and move over to, instead of looping over a vector named x, actually just passing that x into a function, or slicing everything out and using it that way. So now that you have that in your head and that's how you're going to do everything from now on, you need to know how to write functions that will accept that. There's a neat tool in NumPy called vectorize. It's a function, and it helps you take functions that are built to take a single floating point value and produce a floating point value, or take multiple floating point values and produce multiple floating point values, and make them accept arrays as well. We'll look at that and actually do a demo of it. All of these special functions, if you look at the underlying version, take floats; I mean, we didn't write a Bessel function in Python, we're calling out to a library, and there's this vectorizing process on top that creates that loop for you. So let's look at an example of how you might do this, and I'll talk about this vectorize example in the process. We'll define this sinc function; some of you actually did the example yesterday where there was a sinc function. The sinc here is going to take a floating point value; it checks whether x is 0, and if it is, it just returns 1; otherwise it computes an omega and then sine of omega over omega and returns that value. Now, what would happen if we just pass a single floating point value into this function? Then if it's 0
it returns 1; otherwise it returns some other floating point value. But what happens if we pass an array in here, an x with multiple values? The first thing we hit is the if statement that says x == 0. Well, if we have an array, that's going to return an array of values: if we say a == 0, we get an array back. It's done what's called a rich comparison in Python; instead of just returning a true/false value, it's returned a more complex value, an array that compares 0 against each of the individual items. That's great for a lot of applications, but it turns out to be problematic if you want to use it in a statement like this: if we say "if a == 0: print hello" or something like that, you're going to get an exception, and what it says is that if statements only allow a true/false value; for an array it gives you an error. This was actually a source of a lot of bugs in code with Numeric, where people had put these arrays in if statements and they just returned true because the array was not empty. I guess we didn't go over this earlier, but let's do a different one: say B is a list, and you say "if B: print hello". An if statement tests whether the value is true or false, but what it considers false is an empty list, an empty sequence, a None value, or a zero; anything else it considers true. So when an array was returned, that was considered true, and it would always take that branch. In this case, with the array passed in, it would always have taken this branch right here, or actually, I guess, this branch right here, because it would always have returned a non-empty array. NumPy is nicer; it doesn't do that. But we still have the problem that a user has written this function and we
can't call it, because we want to pass in our arrays. So how do we do that? Well, there's this nice little function called vectorize. If we just define our sinc, with the if and the else here, the version we had on the slide, returning sine of w over w, then sinc of 1 is a very small value, and we can try sinc of 0, sinc of 0.5. So now we make our a, an arange of 10, say, and we try to take sinc of a: we're out of luck, right, because it causes that problem. So we can use this handy-dandy vectorize, and that's actually found in NumPy now; it used to only be in SciPy. We take a vectorized version of our sinc function, vsinc, and now if we hand our array to this vectorized vsinc, it gives us back all the values. Did I... is that right? Let's see here... oh yeah, maybe because they're all... okay, we'll do a little more. So that gives us values, and now we can plot a versus vsinc of a, and we get the sinc function. This is a helpful function that will come up quite a bit; pay attention to it. It's really useful for taking libraries that you've already written and making them usable this way. Here's a set of the continuous distribution functions; Travis has put a whole lot of them in and just given you a sample here on the graphs. There's also a large number of discrete distributions, and they have the same set of methods available on them. So here's an example of using these functions. stats.norm is already an object; we've created one of these normal distribution objects and assigned it to stats.norm, so you do "from scipy import stats". Then we're asking for a random sample of size 100, so we get our samples back, and we've plotted those over here, and they look pretty random; that's a random sample. And we can ask... well, what I'd really like to do is calculate the
PDF over this range from negative 5 to 5. So we have our PDF here, and we can ask for the cumulative distribution function and also the percent point function, and those are all plotted there. So that's a handy way of grouping these things; you can use either set of libraries, really, the random libraries in NumPy or the stats library here. There's also a set of standard functions available there. A lot of these have actually migrated and become methods on the NumPy objects simply because they're used so much: the mean, the standard deviation, and the variance calculations. There's a function for calculating moments, so you can ask for a set of moments from a set of data, and also skew and kurtosis, which just call the moments calculation under the covers. Interpolation: there's a package called scipy.interpolate, and it has a 1-D linear interpolation class; it also has some spline fitting classes that are built over FITPACK. There's also a set of spline routines in the signal library that are fairly new and, I think, not very well exposed or documented, so it may be more fruitful to use the routines out of the interpolate package; I haven't used the signal ones very much, and this is the set that I've used. Here's an example of using the UnivariateSpline class that's in interpolate. You'll start noticing here that SciPy is kind of a mixture: it uses standard procedural programming, where we have an input value and it transforms it to some output value, but it also uses classes quite heavily in some places, and here's an example. We create an x array, we then calculate a y array by calling sin on this x, and then we create an object that's going to be a fit to this x, y value pair, and this is going to use a fifth-order spline; you can use up to fifth order with the FITPACK routines and the UnivariateSpline class. So we'll create a linspace here
called xx, and then these classes are just callable; that's one of the nice things about Python. It is a class, but classes have special methods; you remember the __init__ methods, the __getattr__ methods, things like that that we saw, or __add__, I think we saw yesterday or the day before. Well, there's also a method called __call__, so if you just call one of these objects with an argument, like we are here, we've created this object and we just call it with xx, there's a method that gets called and it returns a value. Once we've gotten this value, we plot the actual value versus the fit value for our spline, and they match fairly well. The integrate module has tools for integrating ordinary differential equations, so you can use the ODE support for that purpose. It also has functions for things like trapz integration, Simpson's method, and Romberg; these are more and more advanced approximations to an integral. There are also some much more advanced methods; integrate.quad is one of those, and it's a call into one of these lower-level Fortran libraries, like FITPACK and FFTPACK, all these *PACK names; this one comes out of QUADPACK. It performs adaptive Gaussian quadrature, and it's a very advanced method. It's funny, if you look at its documentation: "from scipy import integrate", then integrate.quad, and we'll just look at the help on that. It gives quite a bit of help here, but it has this funny thing: "run scipy.integrate.quad_explain() for more information about the esoteric inputs and outputs of this function." So if you want, you can run that function and you get even more stuff; it's so long, I guess it needs two docstrings to be able to deal with it. But there's a lot of information
here about how to use this method. There are also tools to help when you don't just need to do 1-D integration; you may need to integrate over a surface or a volume, and there are some methods that are really just wrapper methods, you could do this on your own just by building functions, but dblquad and tplquad wrap the quadrature function to help you do this sort of thing. As an example, the goal here is really to compare sine to the integral of cosine, so here we've created a function called func, and notice that we pass in a value. If you look at integrate.quad: we'll just ask it to integrate sin, and I'll go from 0 to 1, so this is doing an integration of sin on the range from 0 to 1, and you'll notice it returned two values. The first of those values is the answer, and the second is a very small number, which is the estimate of the error of the integration. So if we actually want just the answer, we need to ask for the first value returned from that sequence; that's one way you can do it: it's returning a sequence, and we can index into it directly. Now, one of the issues we'll see here: if I make a equal to an array and then call the same thing, because what I'd really like to do is calculate the integral from 0 to multiple different values, right, we want to plot across a range, we get the problem we were seeing before: quad doesn't actually handle arrays. So what we've done here, and there are multiple ways to do this, this is one of the slides Travis provided: he just wrapped it in a function and then called integrate.quad inside, so this is really a simpler version, right, so we don't have to pass in the 0 value every time; you just ask for the integral of the function at x, and it will
But look what he's done: he's called vectorize on this function, so now we can call it with an array. If you call vec_func down here, it actually calls that wrapped function multiple times, looping over each of the values as we were discussing, and we get our y2. When we plot these against each other, they're very, very close; it's a very accurate method of integrating, especially on something like a sine or a cosine.

All right, signal processing. What's available there? There are general 2-D convolutions and multidimensional convolutions. There are also the B-splines I was talking about, so there are some tools for fitting there as well. Then there are a number of filters available, along with tools for designing IIR and FIR filters, and there's a handy little class, kind of like our polynomial class, for dealing with linear time-invariant systems. We'll show a few quick examples of a couple of these tools.

This is a similar problem: what we've done here is apply a median filter. We've grabbed the Lena image, cast her to a floating-point float32 array, and displayed the image; then we call the median filter, specifying the size of the kernel, the window that's used for the filtering. Because we have such a large window here, we get a very smoothed-out, fuzzed-out image. So that's one way of using this.

Here's another example. Here we're going to use a Wiener filter, which is an optimal filter for normally distributed noise. We add the noise using the norm class out of the stats library: we tell it we want values with a mean of zero and a standard deviation of 32, and ask for random variable samples with the shape of the image.
We add those to Lena, show the image, and it's a bit noisy; then we go through and clean up the image, and it looks a little better over here. So there are a lot of tools for doing that sort of filtering.

All right, linear time-invariant systems. What you do is specify the coefficients of the numerator and denominator of a linear system, which is just the transfer function shown on the right-hand side. A lot of the time, what you want to ask about a linear time-invariant system is: if I have this system, what is its impulse response, and what is its step response? So we have the creation of the actual class. The class is called lti, and we create an lti object (called lti_sys here), handing in the coefficients of the transfer function. Then we call the impulse method, and it returns both the response and the timing of the response, so both the x and y values that we're going to plot; and then we can ask for the step response the same way. We're using pylab to plot these values and then put a legend on the plot.

All right, optimization. There are a large number of functions here, and most of the methods in the optimization library focus on gradient methods, so they're local search algorithms. There are a few you'll see here: fmin, fmin_powell, fmin_bfgs, and fmin_ncg. Those are just different ways of searching a space, but all of them are local; they have different ways of getting out of local minima and different ways of calculating the next search point in the space. But there are a couple of methods listed down here as well. One is a global optimization method, simulated annealing, which simulates a kind of cooling process: you start with a very high Boltzmann constant, as it's called in the annealing process, which will kind of randomly throw values all over your search space, and as a tuning parameter you cool your problem by turning down the Boltzmann constant.
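Going back to the lti class for a moment, a minimal sketch (with made-up coefficients, not the slide's transfer function):

```python
from scipy import signal

# H(s) = 1 / (s^2 + 2s + 1): numerator and denominator coefficients
lti_sys = signal.lti([1.0], [1.0, 2.0, 1.0])

t_imp, y_imp = lti_sys.impulse()   # impulse response: times and values
t_step, y_step = lti_sys.step()    # step response: times and values
# each pair can be plotted directly, e.g. plot(t_step, y_step)
```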
What you're doing is throwing values all over the place and finding the more optimal location, and on the next iteration through you're throwing values in a smaller area, because the lower Boltzmann constant disturbs things less. It's basically a cooling process where your value homes in on one area of the search space. It turns out that the cooling schedules you come up with are sometimes quite complicated to get exactly right, to make sure you're getting a very good result for your problem set, so that in itself is almost an optimization problem. But that annealing tool is there.

There's also a brute-force minimizer, and it's not recommended for actually solving real-world problems, because it just goes through and searches your entire space and says, yep, here's the minimum spot. But it is a nice tool to have in the box if you're trying to gauge how well you're doing; say you want to know how well your simulated annealing run did, and your search space is small enough, you can use it to compare against other results.

Constrained optimization: the methods above are all unconstrained, in that you don't specify a set of bounds; you just turn the method loose on the function, give it an initial guess, and it can run off wherever it wants in the search space. fmin_tnc and fmin_l_bfgs_b are, I think, similar in style to the fmin_cobyla routine. There are also some root-finding methods, fsolve among them, and then a lot of, I guess, bracketing root finders: brentq, brenth, and ridder are all available. The idea here is not to say "this is the best optimizer" but to provide you with a whole lot of different optimizers that you can plug in.
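As a sketch of that brute-force baseline (a made-up one-parameter objective, not anything from the slides):

```python
import numpy as np
from scipy import optimize

def f(p):
    # simple 1-parameter objective with its minimum at x = 2
    x, = p
    return (x - 2.0) ** 2 + 1.0

# brute grid-searches the given range, then polishes with fmin by default
xmin = optimize.brute(f, ranges=((-5.0, 5.0),))
x_best = float(np.atleast_1d(xmin)[0])
```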
And a lot of these 1-D optimizers here, brent, golden, and bracket, are actually kernels that can be used if you want to build your own optimization tool; you can use those 1-D line-search algorithms as the kernel inside it. So those are all available.

Here's a quick example of minimizing a Bessel function on a range. We've created x values from 2 to 7.1, stepping by 0.1, and evaluated the first-order Bessel function by calling j1. We plotted it, and then here's the minimization: we hand in the function as the very first argument, we hand in the limits 4 and 7, and it hands us back a result. Now we want to plot it, so we hand the x back in to find out what the actual value was, and then we plot xmin versus j1min, and indeed it did find the minimum there. Of course, if you had multiple cycles in this range, it's going to find one of them, and it totally depends on the starting point which one you end up with.

All right, solving a nonlinear set of equations. Here's an example using fsolve. Imagine you have some hairy set of equations like this that define the problem you have to have done by tomorrow for homework, or whatever it is, and it takes multiple values. First of all, we have a set of coefficients a, b, and c that maybe are going to change from problem set to problem set, but they're not optimization parameters; they're just constants that get passed in. Then we have multiple values x0, x1, and x2, so this is really our vector of inputs, and they're scattered all throughout these equations, which are all very nonlinear in their behavior. So how do we do this? Well, the first thing we can do is create a function called nonlin, or whatever you want to call it, which takes a vector x holding the x0, x1, and x2 values, and which also lets you pass in the coefficients a, b, and c. We've written these equations here.
If you look at the first one, 3*x0 - cos(x1*x2) + a, well, that corresponds to this first equation over here. Same thing for the next entry in the list we're returning, x0*x0 - 81, which is really just this second equation coming across; and the final value down here is exp(-x0*x1) + 20*x2. So we've just unpacked x0, x1, and x2 so we could write our equations exactly how they appear over here, we made these calculations, and we return the result as a list in this case (you can return it as an array as well). So we've done that, we've defined our a, b, and c values, and now we pass in this nonlinear function, because we want to solve for the roots, along with an initial starting point.

And then look what we've done here. Think about what a solver is doing under the covers: it has some algorithm that's really just calling out to your function, and when it gets values back it has to come up with the next guess and call your function again. So it has to know the interface for calling your function. The way these optimizers do it, and this is fairly typical, is that the first argument they pass is the guess vector, and they expect to get returned to them the result for that guess. But our equation isn't quite that simple; we don't just take a vector, we have these other arguments a, b, and c that have to be passed in every time. So they provide the ability to do that through this args list, (a, b, c): now, whenever the optimizer calls out to your function, it gives it the x, but it's also going to pass in all of those extra args.
So that's what we're doing here: we call this, we get back our root, print it out, and then call our nonlinear function again, handing in the root, and we hope those values are all quite close to zero. In this case they are. Obviously it's never going to be exactly zero, and you may not even care to get this close to zero. In this case the function is fairly cheap to call, but what if, on every call you make, you're calling out to Fluent, or out to an electromagnetic solver, whatever it might be? You have these very large computations that might take hours or days to run, so you don't want the iteration to run forever; it matters whether it runs twenty times or a hundred times, and you'd rather it only run twenty. So there are going to be all kinds of parameters here. If we do from scipy import optimize and look at optimize.fsolve (well, I just picked one), this function actually allows you to pass in the relative or absolute error you're willing to accept, the maximum number of iterations you're willing to let it search before you give up, and basically the maximum number of function calls you're willing to pay for, because some of these routines will actually call your function twice per step to calculate a derivative and find the next point.
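Pulling those pieces together, a sketch of the fsolve pattern (the two equations here are simpler stand-ins for the slide's system; the args, xtol, and maxfev usage is the point, not the math):

```python
import numpy as np
from scipy.optimize import fsolve

def nonlin(x, a, b):
    # unpack the guess vector; a and b arrive via the args tuple
    x0, x1 = x
    return [x0 + x1 - a,      # want x0 + x1 = a
            x0 * x1 - b]      # want x0 * x1 = b

a, b = 3.0, 2.0
# xtol bounds the accepted error, maxfev caps the function calls
root = fsolve(nonlin, [0.5, 2.5], args=(a, b), xtol=1e-10, maxfev=200)
residual = nonlin(root, a, b)   # should be very close to zero
```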
Here's another example, and it points out that when you're doing an optimization with a gradient method, one of the critical things is the derivative in the region of the search space you're in, because you need to know what the curvature is; you usually want to be heading downhill, or uphill as the case may be, on the surface you're working on. So here we're calling fmin, and I'm using a method to see how long things run; we print the stats here, with a start time and a stop time. We call optimize.fmin, handing in the Rosenbrock function, give it the initial guess we specified here, and tell it to run with a tolerance of 1e-7 and then stop. So here we go: we run it, and here's what gets printed out. It looks like it ran for 316 iterations, did 533 function evaluations, and solved it in 0.08 seconds, so pretty quick, and its solution looks pretty good.

Now, notice that the Rosenbrock function is shipped with the optimize library. Just as we ship Lena with SciPy because it's so commonly used, some of these torture-test functions are shipped with the library so that you can use them when you're testing your stuff. And there's also a derivative of the Rosenbrock function, a closed-form solution (or at least a better approximation), a function that can be called instead of doing the numerical derivative. All right, so now if we call fmin_bfgs, which allows us to hand in that derivative function, it's done a lot fewer iterations, 116, and about half the function calls; but it's also done a lot of gradient evaluations, so if that's an expensive operation, you really have to add these together to compare against the numbers over here. Still, it's quite a bit less, and it found the answer much more quickly. So your mileage is going to vary on this: how much benefit you get all depends on the cost of calling your function. The higher the cost of calling your function, the more payoff you're going to get from providing a derivative function, rather than just letting the optimizer do the numerical derivative.
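A sketch of that comparison, using the rosen and rosen_der functions that ship with scipy.optimize (the initial guess is illustrative, not necessarily the slide's):

```python
import numpy as np
from scipy import optimize
from scipy.optimize import rosen, rosen_der

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])   # illustrative starting point

# Nelder-Mead simplex: no derivatives, just function evaluations
xmin_simplex = optimize.fmin(rosen, x0, xtol=1e-7, disp=False)

# BFGS with the analytic gradient handed in via fprime:
# fewer function calls, at the cost of gradient evaluations
xmin_bfgs = optimize.fmin_bfgs(rosen, x0, fprime=rosen_der, disp=False)

# both should land on the known minimum at (1, 1, ..., 1)
```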