Hello, we're back. So, what's next? We've got a little bit of time. You could almost call it an intermission from the other kinds of stuff we do, because the next 40 minutes will be quite different from the other things we've done. Some people might even call it boring, but there is a point here: the point is to see beyond the things we teach you to the bigger picture. We'll point out things you can do, without going much further than that. Okay, so let's switch to Yarno's screen. So, SciPy. If you look at the objectives here, it basically says what we want to do. We aren't going to teach you everything about SciPy, but we'll have a brief demo of something you'll be doing often: you need some function, you find a library that has it, you read the documentation, and then you implement it. But what is SciPy itself? It's basically a big set of libraries that uses the NumPy array interface to hold the data and pass it back and forth. Can you scroll down to "What is SciPy"? Yeah. And it wraps a bunch of numerical routines that are written in other languages, in collections like Netlib. For decades, scientists have been writing code in Fortran and C, and SciPy basically collected the best of those implementations and made a Python wrapper around them. So a typical way you would use it: you make a NumPy array, you call a SciPy function and pass it that array, it uses the NumPy memory directly, does whatever computation, and then returns a value or another array. This is a pattern we'll see very often. So yeah, I don't know, do you think there's much else to say? Is that a good description? Yeah, there's a huge amount of well-optimized and sometimes very useful functions in there.
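A minimal sketch of the pattern described above: build a NumPy array, hand it to a SciPy routine, get results back as NumPy objects. The particular function here, `scipy.linalg.eigh`, is just an illustration of the pattern, not an exercise from the lesson.

```python
import numpy as np
from scipy import linalg

# Build a NumPy array in Python...
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# ...then pass it to a SciPy function. Under the hood this calls
# compiled LAPACK code, which reads the array's memory directly.
eigenvalues, eigenvectors = linalg.eigh(a)

# The results come back as NumPy arrays again.
print(eigenvalues)  # eigenvalues of the symmetric matrix: [1. 3.]
```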
But we really can't go through everything, so it's probably best to go to one example. Yeah. I've used it a few times in my life; my kind of work didn't use it that much, but when you need it, it's there. So we've got a few exercises here, which are completely independent. I propose we give you 15 minutes and you can go and try either one or two, or maybe both. The main point isn't the output, but seeing the way you do it. And if you don't want to follow along and try these functions, you could find something else in SciPy that is relevant to you and do that. Read the documentation, take a break, whatever. I guess we'll give you 15 minutes or 10 minutes; maybe 15 minutes is better. OK. Yeah. And we can be discussing in HackMD during this. So, yeah, that's really basically all there is to say. There are a few fairly typical examples here. OK, well, good luck. We'll see you back at thirty-three, maybe thirty-two. Let's say thirty-two. OK, bye. Hello, we're back. OK, so what did we think about the exercise? There was a little poll here if you want to answer: was the exercise worthwhile, or were you doing something else? Yeah, so I'm glad some people think it was worthwhile. But the main point. Oh, go ahead. But at least one person thinks that the time is useful anyway. Yeah. So I think there are some interesting little lessons here. There was a good question about dense versus sparse matrices, which, if you haven't used sparse matrices, wouldn't make much sense. But the point is that SciPy uses the same NumPy interface as much as possible to do more than NumPy does. If you need sparse matrices, you basically know that you need them. Essentially, sparse matrices are usually big matrices where most of the entries are zero. Yeah.
And if you have that special case, you can do things a lot faster than if you just fill an entire dense array, mostly with zeros. Yeah. Should we do an example or should we go on? I'm thinking maybe we should go on; there are solutions here. Yeah. We can talk in HackMD. Oh, yeah. OK, I've shared your screen now. There is one point in HackMD: in SciPy, you often have to import a submodule, so from scipy import something. That is just a choice made by the developers of SciPy; you can write a Python library either way. Can you say, do you know if there is a bigger philosophical reason for this? I'm not aware of anything about that. I'm sure someone has a bigger philosophical reason, but I don't really know, I guess. So use whatever is most clear in your case: if the bare name sparse makes sense, then I would do that. I'd go with the shorter one. So I guess what they are pointing out in HackMD is that they noticed that this does not work. Oh, sorry. This doesn't work: if you just import scipy and then try to run scipy.sparse, that doesn't work. Yeah. And the reason for that is, oh, but it was already imported earlier. You need to restart the kernel. OK. And the reason for this is that if SciPy imported all of its submodules when you did only import scipy, it would take too long to import and not be worth it. So this still does work; you're not wrong about this. I actually noticed this when I tried to import the other one, integrate. Yeah. So I guess some things are automatically imported and some are not. Yeah. OK, this one doesn't work. Yeah. OK. Sorry about this. Let's move on. Yeah. So should we move on to the library ecosystem?
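A minimal sketch of the two points just discussed: the explicit submodule import, and what a sparse matrix buys you. The matrix shape and values here are made up for illustration; the sparse format chosen (`csr_matrix`) is just one of several SciPy offers.

```python
import numpy as np
# SciPy submodules generally need an explicit import; a bare
# `import scipy` does not necessarily make scipy.sparse available.
from scipy import sparse

# A big, mostly-zero matrix stored densely holds a million floats...
dense = np.zeros((1000, 1000))
dense[0, 0] = 1.0
dense[500, 500] = 2.0

# ...while a sparse format stores only the nonzero entries,
# which is what makes operations on it so much faster.
s = sparse.csr_matrix(dense)
print(s.nnz)  # number of stored (nonzero) values: 2

# And it round-trips back to the familiar NumPy interface.
print(s.toarray()[500, 500])  # 2.0
```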