up to what I'm going to think of as the nth entry, indexed by n — but that's really the (n+1)st entry of the Python array. Then my loop goes from p equals 2 up to floor of square root of n. I use range here, and again I have to put the plus 1, because the range function gives a half-open interval: it goes from the start, but not all the way to the end — it stops at the last integer before the end. That's sort of natural if you think about it; the motivation for half-open intervals, I think, is that if I then wanted to do another iteration beginning where the last one left off, I could make its start exactly the end of the one before it, whereas with closed intervals I would have to add one. But I'll just note that I could also have written this using the dot-dot-dot notation, and if I use that notation with an actual bracket, I don't need the plus 1. I wanted to show you range so that you knew it was there, and also because I think it may be more familiar to people.

Now, in Python I could have used a continue statement here, "if not s[p]", but I decided to go the other way: if s[p] is true, then for m going from p plus p up to n — which of course here means n plus 1 — and here I really do want to use range, because it will iterate by step size p — I set each of those elements of the array to false. Then I return using a list comprehension, just as we saw in Magma: for x running from 2 up to n, which means range to n plus 1, keep x if the xth element of s is true.

So let's go ahead and try this as well, and let's time this one. And — it's quite a bit better than Magma or GP, but still 20 or 30 times slower than Julia. OK. Any questions on this? Yeah? In every language, you write p plus p.
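For reference, the loop structure just described might be sketched like this in plain Python. This is my own reconstruction, not the exact code on screen; I've used `math.isqrt` for the square-root bound, which is equivalent to the floor-of-square-root used in the lecture:

```python
from math import isqrt

def primes_up_to(n):
    # s[k] starts out True; after sieving, s[k] is True exactly when k is prime.
    s = [True] * (n + 1)
    # range is half-open, hence the +1 to include floor(sqrt(n)) itself.
    for p in range(2, isqrt(n) + 1):
        if s[p]:
            # Step p+p, p+p+p, ... up to n, marking composites; the
            # step-size-p range means only addition is ever needed.
            for m in range(p + p, n + 1, p):
                s[m] = False
    # Collect the survivors with a list comprehension, as in the Magma version.
    return [x for x in range(2, n + 1) if s[x]]
```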
Is there a difference if you write 2 times p instead? Absolutely no difference. Yeah, I could have written 2 times p. I wrote p plus p just to be pedantic, to indicate that we're only ever using addition — there's no multiplication going on. In fact, multiplication by 2 is very fast on any computer; all of these systems will know enough to turn that multiplication into a bit shift, and doing one multiplication wouldn't change the complexity of anything anyway. This is just my own personal weirdness, I guess. I felt it was more consistent with the idea that we're always stepping by p, so we should step by p the first time, too. It is worth noting — this is a good point — that you will sometimes see people write the sieve of Eratosthenes not by stepping by p, but by setting s of m times p for m up to n over p. That is a bad sieve: it has worse asymptotic complexity than the one we've just implemented. But you will find that sieve implemented on Stack Overflow.

OK, question? [The question was, roughly: Sage will sometimes represent the square root of n as a fairly complicated symbolic object carrying a lot of information, rather than as a number — do these other languages do that too?] Yeah, excellent question. So first of all, for those of you unfamiliar with Sage: you can find it kind of weird when, say, you want to evaluate your very favorite polynomial at something like pi or the square root of 3, and it gives you a pedantically correct answer — but not the one you wanted — namely, your polynomial with the symbol pi or square root of 3 inserted everywhere x was, when you actually wanted to know the numeric value. In some sense, it's doing you a favor. It's saying, I don't want to.
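Returning briefly to the "bad sieve" mentioned a moment ago: this is my own reconstruction of the kind of code in question, not anything shown in the lecture. One common shape of it marks s[m * p] for every p up to n, multiplication and all, without first checking that p is still unmarked:

```python
def bad_sieve(n):
    # Still produces the correct primes, but marks multiples of *every*
    # p up to n (not just surviving primes up to sqrt(n)), with a
    # multiplication per step -- roughly n log n work rather than the
    # n log log n of the real sieve.
    s = [True] * (n + 1)
    for p in range(2, n + 1):          # note: no `if s[p]` check
        for m in range(2, n // p + 1):
            s[m * p] = False
    return [x for x in range(2, n + 1) if s[x]]
```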
Why should I replace your mathematical square root of whatever with a floating-point approximation? I've done some mathematical damage if I do that. So in Sage, if you actually want to treat the square root of whatever as an approximation to a real number, you have to make that explicit. I don't need to do that here because of the floor function: the floor function returns an actual integer by definition. It's happy to take a symbolic square root as input, and it will say, oh no, you're not a symbolic square root anymore — I'm going to cast you to a real number and then take your floor.

Another question? [About taking the square root of a very large number.] Yeah, that is an excellent question, and something to be cautious about in general. In the case of square root it's actually OK, but an arguably better approach would have been to round, or to add a half to my square root before flooring. That wouldn't have broken the sieve, and it would have been guaranteed rock-solid correct. It's not uncommon, when an operation gives you a floating-point answer that you know should be an integer but you're not sure the computer does, for it to be safer to add a fudge factor like a half: if your integer is 2, you really don't want the machine to decide it's 1.99999 and then floor it down to 1. In the case of the square root function, especially in Sage, it actually knows n is an integer, and it will try to take the integer square root first before doing anything weird to it. But yeah, I'm grateful for that question; if I had been more careful writing this code, I probably would not have written it that way. That's a good point.

Another question? [How does this compare with the C code?] Excellent question. So let's go look at the C code.
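Before turning to C, the floating-point worry just discussed can be made concrete. In plain Python (outside Sage), the rock-solid option is the exact integer square root, `math.isqrt`, shown here alongside the plain floor and the add-a-half fudge factor from the lecture — this comparison is my own illustration:

```python
import math

n = 10**7

# Floating-point sqrt then floor: usually fine, but it trusts the float
# not to land just below an integer it "should" hit exactly.
bound_float = math.floor(math.sqrt(n))

# The "add a half" safety net mentioned in the lecture; it can overshoot
# by one near a perfect square, which is harmless for the sieve.
bound_fudged = math.floor(math.sqrt(n) + 0.5)

# Exact integer square root: guaranteed correct, no fudge factor needed.
bound_exact = math.isqrt(n)
```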
And again, this is not the optimal C implementation of the sieve. If you want the optimal C implementation, go to Kim Walisch's primesieve repo on GitHub and look at the code there. That is, without argument, the fastest implementation of prime sieving that I know of on the planet. In fact, it's so fast that a year or two ago I ripped all the prime-enumeration code out of the smalljac library that I wrote — which is written in C, and actually had a lot of code in it for enumerating primes very quickly — because it couldn't compete with Walisch's code, which is just crazy fast. So I recommend that.

This is just a completely naive, blind implementation of exactly the algorithm we've implemented in the other four languages. It allocates an array, which I'm using as a bitmap. In C — well, in C++ I could do things differently, but I'm just writing mindless C — my bitmap is actually an array of 64-bit integers, so I have to convert things into bits. If you're not familiar with C, don't worry about these; they're just bit manipulations that take this array of 64-bit integers and make it look like an array of bits. To find the nth bit, I divide n by 64, look at that entry in the array, and then shift to create a bit mask that selects exactly the right bit, n mod 64. That's all this code is doing. Otherwise the algorithm is exactly the same: for p going from 2 up to floor square root of n, this complicated-looking expression just tests whether the pth bit is set or not, and if it is, we loop over m going from p plus p up to n, un-setting the mth bit using an appropriately shifted mask.
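The bit manipulations being described — divide by 64 to pick the word, shift to build a mask at position mod 64 — can be sketched in Python for readability (my own illustration; the lecture's actual code is C):

```python
def get_bit(words, i):
    # i >> 6 is i // 64 (which 64-bit word); i & 63 is i % 64 (which bit).
    return (words[i >> 6] >> (i & 63)) & 1

def clear_bit(words, i):
    # Un-set bit i with the complement of a shifted mask, as the sieve's
    # inner loop does for each composite m.
    words[i >> 6] &= ~(1 << (i & 63))

# A little bitmap with 128 bits, all set, like a freshly allocated sieve:
words = [(1 << 64) - 1] * 2
clear_bit(words, 70)   # bit 70 lives in words[1], at position 70 mod 64 = 6
```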
Anyway, this class is not meant to teach you anything about coding in C, so don't worry if this code is unfamiliar to you. But if you know one thing about C, you probably know it's supposed to be fast. So let's see how fast it is. Okay — where do I need to go? I'll go up to 10 to the seventh, the same bound I used in the others. Yeah? How do I do that? Thank you. And what do you know: 0.45882 seconds, which is suspiciously close to the time it took Julia, okay? That's something you should find impressive. It was exactly this exercise that made me decide I really should include Oscar in this presentation, even though, as I said before, Oscar is still quite far from being fully realized and has a lot of functionality that needs to be added to it — this example highlights the potential. Of course, you can always go write C code when you want your code to be fast, but the advantage of Julia is that writing Julia doesn't feel like writing low-level code. It feels much more like writing Magma or Python code, and yet you have the ability to get C-like performance out of it with a bit of care. When things get complicated, issues do arise: you have to be careful not to cause Julia to do more work than it needs to, because if it has to spend a lot of time figuring out how to manage types — Julia does what's called type dispatching — that can be slow. But if you're careful, or if your program is really simple, as it is in this case, there's no opportunity for anything complicated to go wrong, and it will generate machine code that is essentially very close to as good as what a C compiler would spit out. You can easily get a factor of 10 speedup, and it's not uncommon to get a factor of 100 or 1,000. Question? I haven't tried that, but that's a good experiment.
So the question was about gp2c, which compiles GP code to C, and that would be a good test. My guess is that for this simple algorithm the performance would be comparable to a C program; I would be surprised if it were slower. When things get more complicated, though, I'd have less confidence. The advantage I think Julia has over GP is that Julia is designed from the bottom up for this kind of optimization, whereas gp2c was designed as a layer on top of a scripting language after the fact. Similarly, I should mention there are also tools that let you turn Python code into C, and I should probably run an experiment where I time those as well. I would tend to expect the same kind of performance as with gp2c: on this simple example it might be identical to what we just saw with the sieve, but on a more complicated example, if I had to place a bet on a horse, I might go with Julia rather than writing Python or GP code and converting it to C. And the beauty of Julia is that I don't even have to go to any effort to convert to C — it happens essentially automatically.

All right, I think we're out of time, so we'd better stop there. And I look forward to the problem — oh wait, Edgar wanted me to run an experiment. Everybody who has their notebook server up: on the count of three, I'd like you all to create a new kernel and compute one plus one equals two in it, okay? As many people as possible would be good — we want to see if this brings the server down. One plus one, or whatever function you want; just compute something. Launch a kernel, your favorite kernel, type into a cell, and compute one plus one. One, two, three, go. I'll do it too — I'm going to do mine in Python, and I'm going to type one plus one. If I can type. Oh, mine worked, and you guys were ahead of me, so that's a good sign.
Okay, I'm feeling much better about the problem session this afternoon. All right, see you at four — or 4:30, sorry, yeah. Thank you. Thank you.