who will speak about Cython. I thought it was CPython, but it turns out it's not, so I'm really curious to find out what it actually is. OK, thank you. OK, good evening, everyone. I'm going to present Cython to you and explain how you can use Cython, which is not CPython, to bring performance improvements to typical Python code. First, a little bit about myself. This talk grew out of work I did this summer, over three months, with the Institute for Artificial Intelligence, where I optimized an open-source Markov Logic Networks library using Cython. Over those three months, we were able to speed up the code by around 30%. So I'm here to share those experiences and give you the tips that would have been helpful had they been given to me at the start of my project. That's the background I'm coming from. That being said, a little disclaimer: this talk is not a thorough outline of Cython's capabilities or an exhaustive list of the best practices and techniques you should use with Cython. It's more of an introduction, and I'll try to give some personal insights that I found useful and explain why I think it's worth everyone's while to try Cython in their own programs. So let's start with a demonstration. I hope everyone knows what the Fibonacci sequence is. Is there anyone at all who doesn't? OK, great. Generating the nth Fibonacci number is a standard O(n) problem, and generating the first n Fibonacci numbers is also an O(n) problem. However, because I want to demonstrate speeding up code that is genuinely slow, I'm going to generate Fibonacci numbers with a naive recursive algorithm, which takes exponential time, just to show how even slow algorithms can be sped up using Cython. I'll show you in a Jupyter notebook. This is my Fibonacci function. Everyone can see this, right? It's a recursive implementation, a typical function.
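For reference, the plain-Python function in the demo is essentially this (the timing scaffolding is my reconstruction, and I use fib(30) here so it finishes quickly):

```python
import time

def fib(n):
    # Deliberately naive recursion: exponential time, great for a demo.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(30)
elapsed = time.perf_counter() - start
print(result)                 # -> 832040
print(f"took {elapsed:.2f}s")
```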
And I'm timing it, finding the 34th Fibonacci number. So let's go ahead and run this. In this particular run, finding that number took 2.8 seconds. Now I'm going to show how you can use Cython to bring a speed increase without changing your algorithm. The way to use Cython in a Jupyter notebook is like this: first you load the extension, then you tell Jupyter that the code in a particular cell needs to be compiled with Cython. And now I'll make the actual changes to the code. You'll see that I change only one line, with three edits, and the speed will increase by an order of magnitude. Cython lets you define functions using cpdef instead of def; that's the first difference. It allows you to declare a return type for each function. "Allows" means that you can declare a return type, but you don't have to. In this case, because I want to demonstrate speed increases, I'm going to do it because it's possible: I'll say that the return type is an integer. Similarly, I'll declare that the parameter is a variable of type integer. Those are the only changes I'm making, all on one single line. Now let's see whether this brings any performance improvement. I'll run the same test again, finding the 34th Fibonacci number. You can clearly see that Cython wins over traditional Python. With this background, I hope you're interested in what I have to say, and that you'll be able to bring similar speed increases to other, larger code bases. Before we start, as a high-level view, for the purposes of this particular talk I'm going to think of a programming language as a contract: a contract between the user, the programmer, and the computer.
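In notebook form, the whole change is roughly this (a reconstruction of the demo cells from memory, not a verbatim copy; it assumes Cython is installed):

```cython
# Cell 1: load the Jupyter extension
%load_ext Cython

# Cell 2: compile this cell with Cython; the one changed line is the signature
%%cython
cpdef int fib(int n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

# Cell 3: time it as before
%time fib(34)
```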
So when you write code in any particular programming language, essentially you're writing instructions that the computer knows how to follow. You follow the particular syntax defined by Python, and the computer knows how to understand that syntax and perform the actions you intend. In a very broad sense, you can look at any programming language as one particular kind of contract between a human and a computer. Now, the implementation of this contract can vary. You can have a language that's compiled, you can have a language that's interpreted; I don't know if people were here for the last meetup, but we had a very interesting talk about static typing and dynamic typing in programming languages. These are the sorts of variations that make computers behave differently with different programming languages. However, I want to argue that when a human is writing code, what matters is not so much whether the language is compiled or interpreted, but whether the syntax is easy to use, how much experience you have with the language, and other things that affect the usability of the language more than its actual technical capabilities. For example, if a language has a lot of support online on Stack Overflow, people are more likely to use it even if it may not be the best implementation out there. That being said, let's look at questions asked on Stack Overflow, just as a crude metric of language popularity. You can see that, I don't know if it's visible, the blue line is Python, and this green one at the bottom is C. There's a very clear distinction: people seem to be more interested in Python. And everyone who's here, this being the Python user group, you're here because you like using Python, right?
There's a particular reason why people prefer Python, and in my opinion that is the simplicity which the language brings to your code. Using Cython, we'll see how you can compromise a little on that simplicity and gain efficiency that can rival C, which you might not want to use for reasons apparent from this image. These are the broad distinctions in my mind between Python and C. The reason C is faster, so to speak, is that it allows lower-level control over computer hardware: you can have manual memory allocation and so on, whereas Python does those things under the hood without letting you bother about them, so that you can focus on solving your particular problem, which is what you would like to do. Now, in 2018, it turns out that programming is not just a contract between a human and a computer, as a programming language would define it, but also between a human and another human. What I mean by this is what I'm given to understand corporations implement through code review and other quality-control practices. When you write code, you want the computer to be able to execute it; yes, that's the first purpose. But a second and increasingly important purpose is for other people to be able to read your code and maintain it. That's why you need to document your programs well, so that other developers can come in and use them later. This is the second part of the contract, which is increasingly important and which we do not want to compromise on. That is what makes Python particularly meritorious. Cython fits quite deftly between C and Python: it provides relatively simple syntax and high speed. Let's see how this is achieved. Like I mentioned, a programming language is a contract. CPython is one particular implementation of the contract defined by the Python language. Cython is a separate implementation of that same contract.
So any code that is Python code is also Cython code, but the way the computer implements the contract is different. Cython's contract between the user and the computer allows for static typing, which is not allowed in CPython, the reference implementation of the Python contract. In that sense, Cython is a superset of the Python programming language. It's supposed to be a superset; we'll see later at what point this claim breaks down. Here's another difference between Python and Cython. Generally, Python code is just interpreted directly. In a Jupyter notebook you can't really see the steps that happen under the hood when I use the Cython magic, but I'll demonstrate them. When you use Cython outside a notebook, you need to compile the same Python code first; there's an extra step. You may call this a slight inconvenience, but as you saw, the speed gain at runtime is enormous. So you have Python code, you compile it, Cython generates C for you, and then you can run the result like you would run any other Python program. The reason for that arrow over there is, like I mentioned, that Cython is a superset of Python: you can compile any Python code, but you can also compile code that the Python interpreter would reject. When I define my function with cpdef, the Python interpreter would complain because it doesn't understand cpdef, but Cython does. So you can take any Python code, make these slight modifications, and have Cython generate C from it. Let's see how that happens. OK, I'm going to use the same example. I have the same Fibonacci function over here in this file. I hope everyone can see this. Is the font too small? Can anyone not see it? All right, great. Like I mentioned, there's an extra compilation step. To compile Cython code, you first need to define a setup.py file, and this is how you define it.
This is essentially a directive telling Cython to convert every single .pyx file in the current working directory. If you notice, when you convert Python code to Cython, you also change the extension: it's no longer a .py file, it's now a .pyx file. Before I do that, let's first run the same Python code; it should take around two seconds to run again. I'm running test.py, which uses fib.py; test.py is the same test I was running before, finding and printing the 34th Fibonacci number. This took three seconds. Now I'm going to convert the Fibonacci function to a Cython function. I'll say cpdef like before, define the return type as an integer, and say that the parameter is also an integer. Then I'll rename the file so that it's a .pyx file, and now we can compile it. Cython is still not that mature, so the current best way to run the compilation is this: you say python3 setup.py build_ext --inplace, and this compiles your Cython file. This compiles fib.pyx, and Cython generates the C equivalent of my Python code; there's a lot of output, but the point is that it has produced this .so file you can see over here. This is a file Cython has generated for me, which I can use from my Python programs, and it will execute the code I wrote at the speed you would typically expect from the C programming language. Let's go ahead and see this happen. I'll run test.py again, and you can see this took only 0.04 seconds. So there's a tremendous speed gain to be had if you manage to use Cython correctly. Now, when I say that Cython is a superset of Python, what are the differences between Cython and Python?
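The setup.py being described is a minimal build script along these lines (a sketch, not the exact file from the demo; it assumes Cython is installed):

```python
# setup.py -- compile every .pyx file in the current directory
from setuptools import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("*.pyx"))
```

Running `python3 setup.py build_ext --inplace` then drops the importable .so file next to the sources.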
Whenever you have a variable that you know is an integer, and it's going to stay an integer throughout its life, you can type it as an integer just by writing int n, like I did for the parameter and the return type. You can do the same for characters and floats; you can have double-precision floats, the usual. You can also type more complicated Python objects like lists and dictionaries; that's an example of a dict there. Cython will then use struct-style access, and it can access the elements of the dictionary in genuine O(1) time rather than amortized O(1) time. You define functions with cpdef instead of def. There's another kind of definition Cython allows: cdef, as opposed to cpdef, and the difference between cpdef and cdef is rather subtle. What happens technically when you define a function with cpdef is that Cython creates a version of it in C and a version of it in Python that calls the C function under the hood. Any Python code you have first calls Cython's Python version of your function, which in turn calls Cython's C version; that's how you get your speed gain. The reason for this intermediate Python function is that your plain Python code needs to access the C code somehow, and this is Cython's way of allowing that. When you define a function with cdef, you're saying you don't care about that function in the middle; you only want a C version. Now, if you only have a C version, you're right, you can't use it from typical Python code. But if you can't use it, why would you create it? You would create it to call it from other C code, and Cython is basically C code. So if you have another function written completely in Cython, you can call a cdef function from that Cython function. Now, this is rather complicated. Sorry? Yes, Cython takes care of the headers.
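To make the cdef / cpdef distinction concrete, here's a minimal sketch (the function names are hypothetical, just for illustration):

```cython
cdef int square_c(int x):      # C-only: no Python wrapper is generated
    return x * x

cpdef int square_p(int x):     # C version plus a Python-callable wrapper
    return x * x

def use_both(int x):           # plain def: always callable from Python
    # Inside Cython code we can call both directly; from plain Python,
    # square_c is not visible at all, but square_p works fine.
    return square_c(x) + square_p(x)
```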
There's no need to write them yourself. The reason I like Cython is that when I learned C programming in college, for instance, I had to write my header files myself, and I would invariably make mistakes. Cython takes care of that and makes things easier. Now, the difference between cdef and cpdef I've already explained, but in terms of run times it's a very marginal difference, which is why I didn't even bother showing cdef in the example. cpdef code ends up running in almost the same time as cdef code. This is rather obvious if you think about it: the Python overhead happens only once, the first time your Python code calls your C code. At that one point an intermediate function is used by Cython, but after that, for the entire body of the function, it's still C code running. So you get pretty much the same speed gains you would get with cdef. Cython also allows other Python constructs to be typed. For example, classes can be defined as cdef classes. This is slightly complicated; you need to create an extra file, which is similar to the headers we just talked about. I'll explain this with an example soon. Cython also allows you to temporarily disable the global interpreter lock. I have personally not used this feature, but for programs that make extensive use of multi-threading, this can presumably increase speed even further. The concepts behind Cython's speed increases are all related to static typing. There's a question mark on the last two because I haven't used those personally, but apparently the static typing that Cython allows is also the reason that interfacing with C structs and using nogil bring improvements to Python code. The point is that static typing brings these improvements because every time your code runs, the interpreter no longer has to worry about the types of the variables.
Typically, in a dynamically typed program, each time the computer encounters, say, n in this case, it checks whether n is actually an integer, as your code expects it to be, and only then performs the required operations. When you declare that n is always an integer, you take that guarantee upon yourself. It's a slight shift of responsibility from what the contract defined as the computer's responsibility to the human's responsibility, and this slight shift is able to bring about that vast improvement in speed. Hopefully you're already interested in Cython and want to use it, but just in case more reasons are required, I'd like to go through them. Here I have a rough chart comparing C and Python in terms of speed and simplicity. The advantage of Cython is that it doesn't sit at any one point on this graph like C or Python would; it lets you choose what balance between simplicity and speed you want. In the Fibonacci example I showed you, I chose to change only one line. I could have changed more lines, I could have typed more variables had they existed, but it turns out that typing just that one line brings the major portion of the speed gain anyway. In a sense, only some parts of a program need to be typed to harness the speed gain you would get even if you rewrote the entire program in C. Cython therefore occupies a spectrum across this entire graph. You can choose to type certain variables at the beginning, see whether the speed gain is enough, and if not, incrementally increase speed further by typing more variables. I would also say that Cython potentially generates higher-quality C code. Of course, if you hire a person who's an expert at C programming, they'll be able to write your code better than Cython does when it compiles your Python into C.
Their code will be more human-readable, and it will be much shorter. But it's also likely to be error-prone if the program is larger than Fibonacci. If your code base is huge, Cython requires very little investment on the part of the human and essentially outsources the C code generation to the computer. This finally results in faster development times. We want code to run fast, that's true, but we also want to be able to develop code fast. The trade-off between these two has historically skewed in favor of runtime: developer time was less valuable than runtime because you'd run code millions of times and write it only once. But that is slowly changing, and as the contract increasingly shifts from human-computer to human-human, I think it's important to minimize development time as well, and Cython helps there. Now, the example so far was just a single Fibonacci function, but it turns out you can optimize large code bases using Cython as well. scikit-learn, SciPy, and pandas are all libraries that make extensive use of Cython's static compilation, and there have been no errors caused by this compilation in their production releases so far. That being said, Cython is still at version 0.29; in fact, 0.29 came out just last month. And even though Cython is supposed to be a superset of Python, it is not yet a complete superset of the Python programming language. That means there are still certain Python constructs that the Cython compiler is unable to handle. Unfortunately, this is not documented very well on the internet, so you might find articles saying Cython is a superset of Python, so you should just use Cython. I would be cautious there: just because Cython is supposed to be a superset of Python doesn't mean your Python code is currently Cython-compatible. I learned this the hard way when I was working on pracmln at the Institute for Artificial Intelligence.
One of the files I had to optimize had nested classes, which is not that uncommon in Python programs, and it turns out that the Cython compiler, as of June this year, can't handle those. That's a problem you can't work around directly; you have to rewrite the code so that it doesn't use nested classes. But the point is that Cython, even in its current form, is still Turing complete. So technically speaking, it is possible to rewrite any code you have in Cython terms, and depending on the feasibility of that particular task, I would argue it's at least worth exploring. Now, when you're optimizing a large code base, you typically wouldn't know at first what to type. One option, of course, is to type every single variable there is, but like I mentioned before, that's unnecessary. There are two ways to figure out which variables to type: annotation and profiling, and I'll explain both through examples. Let's look at annotation first. Annotation is a Cython feature that tells you how much Python interaction a compiled Cython code snippet has. By Python interaction, I'm referring to things that cannot be optimized by Cython, that cannot be translated directly to C. The way to use annotation in a Jupyter notebook is to just add the annotate flag after the Cython magic, so let's first annotate the old Python function to see why it was slow. This was the old Python function. Python code is Cython code, so Cython should be able to handle this function, and I'm going to annotate it to see which lines in this particular function are the reason it's slow. This is the output Cython generates. You can see that yellow lines hint at Python interaction: a more yellow line has generated more C code than a less yellow line. If you want to see the C code, you can just click on the line.
You can see that the def line alone generates C code that is this large. Now, if you start typing variables, Cython can start optimizing this code, reducing its size and therefore reducing the amount of time your program takes to run. [An audience member asks about the generated C code.] Yes, it's generating PyObject calls and all that; we can talk about the details later. What I want to point out is this: if we type the parameter and say that n is always an integer, how does that change the annotated function we see? Can anyone guess what differences we would observe if we declare that n is always an integer, so that the computer no longer has to check for that? Any guesses? Right, line number two would change. That's a pretty decent guess, because when you check whether n equals zero or n equals one, you first need to check whether n is actually an integer; but now you don't, because I've promised that it is. Let's see if that actually happens. It does: that line is now completely white, which means it's been translated directly into C code. Similarly, we can see the result when we write the entire function in Cython: the entire function became C code. This is why it was fast; it was no longer running Python, it was running optimized C under the hood. Now, I explained that cpdef creates a sort of wrapper so that Python can use a function that isn't written in Python. If I use cdef instead of cpdef, can someone guess what happens? Yes, it generates only C code, so there are no yellow lines in the output. The entire function has been written as C now. Any Cython code you have anywhere can now call fib, and you won't be running Python, you'll be running C, C that's as optimized as it can be. This is as fast as Python code can get.
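For reference, the notebook incantation for these annotated runs is just the magic flag; a sketch of the untyped starting point:

```cython
%%cython --annotate
# Untyped version: expect yellow (Python interaction) on most lines;
# the color-coded report renders directly below the cell.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```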
Now, there are alternative implementations of the Python contract, so to speak; I'm sure people here have heard about PyPy. PyPy is faster than CPython, but it can't compare with this. This is as fast as Python code can get, and Cython really allows you to do that without making very major modifications to your Python code. That's why, in performance-critical applications, I would encourage everybody to use Cython. So that was annotation. If you have a large code base, however many lines it is, you can annotate it, see where the yellow lines lie, identify the variables being used there, and type them. But there's another way to do this. OK, I'll also demonstrate annotation on a standalone file a little. We had this standalone file fib.pyx, the file I showed you over here. We can generate the same annotations from the terminal, too: we just say cython -a fib.pyx, and this generates an HTML file, fib.html, which shows you the same view when you open it. So you don't have to use a Jupyter notebook for annotation; you can do it on source files as well. OK, now moving on to profiling. First, before profiling: can anyone see any problem with annotations, and why they might mislead people trying to identify bottlenecks in their code? It turns out that Python interaction is actually not a very reliable metric, and I sort of hinted at this when I explained the difference between cdef and cpdef. The reason there's no appreciable speed difference between cpdef execution and cdef execution is that function calls typically make up a very small portion of a program's runtime. Whatever the body of the recursive function is, in this case Fibonacci, it takes up more time than actually calling the function itself. This is similar to the 80-20 rule: 80% of the time your computer is executing a particular program, it's executing only 20% of the code.
And in the remaining 20% of the time, it's executing the other 80% of your code. So you actually need to optimize only 20% of your code to get 80% of the potential speed gain, and the point of annotation and profiling is to identify where that 20% lies. Now, because annotation can't really tell you how much time is being spent in a particular code snippet, you might say it's better to use annotation but also reason about it yourself. In my case, I wrote the Fibonacci function; I know that as soon as I type the variable n, that's going to bring speed increases, because that's the only variable I ever test: I test whether it's equal to zero or equal to one. But it turns out people don't recommend this approach. The second quote over here is right from Cython's website: they say you should never optimize without profiling your code. And then they actually write, let me repeat this, never optimize without profiling. It turns out that even people who have written code for years and years don't have a very good idea of which parts of their code actually run for a long time and which don't. But luckily there are automated ways to solve this problem, and that brings us to profiling. Has anyone here ever profiled Python code using cProfile, the built-in profiler that comes with CPython? All right, great. Cython allows you to continue using the same profiler. The only change you need to make is to add a directive for the compiler indicating that your code is going to be profiled, so that profiling information doesn't get lost. I'm going to show you how to do that. I had this fib.pyx; I'm going to open a copy of the same file with the compiler directive added. This is how you add it: just one line at the top, telling the Cython compiler that you're going to profile this code. So this is with profiling, and this is without profiling.
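The directive itself is a one-line comment that must come first in the file, roughly like this (a sketch of the modified fib.pyx):

```cython
# cython: profile=True
# ^ must be the very first line of the .pyx file

cpdef int fib(int n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)
```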
There's only a, oops, there's only a single line's difference, at the start. This has to be the first line in your Cython file; it can't be the second line, as of now. Once you add it, the Cython compiler will include profiling support in your code, and then you can profile using cProfile. I'm running out of time, so I'm not going to explain how cProfile works; it works the same as for Python, and you can use it the same way for Cython. Instead, I'm going to show you some profiles I encountered when I was working with Cython, to show you what a large code base looks like when it's profiled. This is a visualizer called SnakeViz; it visualizes the profiles generated by cProfile. This view basically shows the call stack: this is the first function that was called, then the second, third, fourth, and so on, and for each function I have the amount of time spent in it. When I look at this, I immediately know which functions most of my execution happens in, and I know those are the functions I need to speed up. For my particular project, it turned out that whenever I was doing exact inference on a Markov logic network, that was what took time: 203 seconds. After profiling this, I typed the relevant variables inside exact.py, and you can see the difference after Cythonization: it takes 146 seconds to run instead of 203. Now, why is the difference not as dramatic as with Fibonacci? There are many reasons. This is a larger code base, so profiling and optimizing it is more complicated. But the single biggest reason is that when I did this, I didn't know anything about Cython, and now I do. When I show you Fibonacci, I know exactly what to do to speed it up; when I did this, I was still learning about Cython, and so it seems my optimizations aren't as good as they could be.
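If you haven't used cProfile before, the programmatic flavor looks like this on the plain-Python Fibonacci (SnakeViz just renders the same statistics graphically):

```python
import cProfile
import io
import pstats

def fib(n):
    # Same naive recursion as before
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

profiler = cProfile.Profile()
profiler.enable()
fib(20)
profiler.disable()

# Print the top functions sorted by cumulative time
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf).sort_stats("cumulative")
stats.print_stats(5)
report = buf.getvalue()
print(report)
```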
However, it still makes a significant difference, so I would urge everybody to at least try this. Now, the last bit of the presentation is about getting help with Cython. I mentioned that there's no real documentation online about how Cython is not a complete superset yet, and so on. It turns out that help for Cython online is generally pretty scarce. If you're anything like me, you need Stack Overflow or you can't do anything; whenever Stack Overflow goes down, you take sick leave and don't go to work. Unfortunately, if that's how you operate, Cython doesn't really work for you, because there is very little on Stack Overflow. Here's the same graph I showed you before, with Cython questions added: you can see there are hardly any. So you would think it's very difficult to get help. But it turns out there's a very helpful mailing list: you ask whatever questions you have, they get back to you within a day, and whatever problems you have, at whatever level of proficiency, they are very helpful. So there is really no reason left not to use Cython. You can use these links to see the work I did with Cython, including some tips I didn't have time to go into today, since this was only 30 minutes. And if you have any further questions, please contact me on LinkedIn and I'll be happy to help you out. Thank you. [Host:] Questions right now, and after that we'll get some more food and a short break before the next talk. Do we have any questions? [Audience:] Can you show a demo of how you declare a return type of a dictionary? [Speaker:] How I do a return type of a dictionary? Yeah, sure, I can do that. Let's say instead of returning the result, I return a tuple containing (0, result). Is this your question? All right. Yes, that's right, I need to change the whole function; that's OK. So this does the same thing, and you can type a dictionary return the same way.
[Audience question about arrays.] People generally don't use these much in Python, but you can import Cython's implementation of a one-dimensional array and use that, and it's much faster. [Audience asks about NumPy.] Cython is known for having very good compatibility with NumPy. If you hand it NumPy arrays, say a two-dimensional array, Cython can actually go so far as to remove the overhead of Python code calling into NumPy, which is basically C: Cython can call into NumPy's C directly from the C code it generates. So NumPy and Cython mix very well, and that is well documented. [Audience asks about multi-processing and threading.] Right, so multi-processing generally works; it's multi-threading where Python, the CPython implementation, is not great. However, Cython allows you to get rid of the GIL; that's the nogil feature I mentioned. I've never done this myself, but the syntax is to just write with nogil, and then whatever code is inside no longer holds the global interpreter lock. So Cython is supposed to have great multi-threading support using nogil, but that being said, I have personally not used it, so I can't vouch for it. [Audience:] What about user-defined data types, user-defined objects in people's codebases? [Speaker:] Right, sorry, I was going to show you that but ran out of time. I'll show you examples from the work I did over the summer. Let's look at an example file. This is a user-defined data type that is supposed to represent logical predicates, and it has two attributes: the name of the predicate, which is a string, and a list. This is a .pyx file in which I've defined a class with cdef, like I mentioned. But this alone is not enough; you also need to create what C users will know as the header file, a .pxd file. This is how the header file works: you just declare the types of the attributes.
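The shape of the pair of files is roughly this (the names are illustrative, not the actual pracmln code):

```cython
# predicate.pxd -- the "header": declare the class and its attribute types
cdef class Predicate:
    cdef public str name    # public: still accessible from plain Python
    cdef list args          # cdef-only: accessible only from Cython code

# predicate.pyx -- the implementation; attributes are declared in the .pxd
cdef class Predicate:
    def __init__(self, str name, list args):
        self.name = name
        self.args = args
```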
Now, this only works when the attributes are fixed: you can't keep adding attributes to the predicate at runtime. In that sense, Cython does put some restrictions on what you can do. But like I mentioned, you have to give away a little bit of simplicity to get that speed, and overall I think it's a trade-off worth making. I can show you other examples of this. Here's another custom user-defined type with a bunch of attributes, and here's how you create the .pxd file corresponding to the .pyx file. This public keyword in the header is similar to the difference between cdef and cpdef: it allows access from code that is not being optimized by Cython. So the MRF attribute can be accessed by both Python code and C code, but the MLN attribute can only be accessed by C code. The reason I've done it like this is that over the course of three months, I was able to make sure that all access to the MLN attribute happened through Cython-optimized code. I didn't reach the same stage for the MRF attribute, so it's still currently public over here, so that legacy Python code can still access it. However, eventually, when you finish optimizing, you get to a stage better than this, and you'll have a version where you won't need the public keyword. [Audience:] I'm just curious: if you compile the code but without any type annotations, what is the runtime for your example? [Speaker:] Right, that's a great question. We can try that and see what happens. In a sense, because all Python code is Cython code, we can compile plain Python code with Cython. Let's see whether that alone brings any speed improvement. Now, this varies very greatly between code snippets. I'm not very sure what happened in this case, but what I can tell you is that just because it works here doesn't mean it will work everywhere.
Or just because it doesn't work right now doesn't mean it won't work in your use case. So it's still pretty good, much, much better than plain Python, but it's not something you can blindly apply everywhere; it varies very greatly. OK, with that I'll say thank you very much. Thanks, everyone. Thank you.