My name is Viral Shah and I think I recognize many of the faces from last year, but I also see many new faces, and my colleague Shashi Gowda, who just walked out, is going to share this presentation with me. I'll go through the first half introducing Julia. How many of you guys have already heard me speak about Julia, not just how many have heard of Julia before, how's that? And how many of you heard last year's talk? Okay, how many of you are using Julia? One? Okay, that's not good. So that change is starting right now. All right, so if you are on the wireless here, what you should do is log into JuliaBox.com. That is our cloud-hosted Julia. You could alternatively just go to JuliaLang.org and download it. Yes. Right, so if you can just pass these pen drives around, but please return them back. These things are hard to find when you need them. And for your reference, it's also online if you go to JuliaLang.org. So this is the Julia website. This is the downloads page and there's a whole matrix of downloads out here. If you log into JuliaBox, you will get something that looks like this. So you'll get, you know, a bunch of Jupyter notebooks. Yeah, these are my notebooks. So if you're logging in for the first time, it'll be empty. And for much of this talk, we'll actually focus on this thing called the tutorials. If you log in for the first time, you will see a tutorial in JuliaBox, which has a bunch of notebooks that Shashi will mostly be walking you through right after me. So the way we're going to structure this is I will give a little bit of an introduction to Julia and, you know, talk about some of the language features. We just released Julia 0.5, which is a huge step in functional programming for Julia, so I will talk a little bit about that. And then we'll get into hands-on mode with Shashi.
So the basic idea is that while I speak away about, you know, an introduction to Julia, you guys will, you know, install Julia from the pen drive that's going around or log into JuliaBox and make sure you get to this screen that's right up there. Okay, so for a quick introduction: before I introduce, you know, what Julia is, how about we get a show of hands? What do folks out here use? And maybe what kinds of things they do. It's a small enough audience, so we can maybe check how many people use what. First, let's do static versus dynamic languages. Who's using C and C++? The days are gone. Okay, how about JavaScript? Like, let's go to the other end. Okay, it's half the room. Okay, Java, okay, and are there any Lisp users? Okay, that's a fair one. You guys use it for fun or for work? Oh, for Emacs, for work, okay, nice. How about, which one should we take a poll on? I mean, everyone in the audience loves it when you do this kind of a poll. So Ruby, Python, of course. How about Python? That's fewer than I would have expected, typically. How about, should we go for Dyalog APL? Shashi, you should raise your hand. All right, anything that I'm missing? R, of course. Do we have many analytics users, like, you know, data scientists, or is this mainly a programming group? People who use R or Python for analytics, maybe? Okay, a few people. So it's a sprinkling of all kinds of people. So why Julia, right? I mean, that's the natural question that everyone has: with all these languages around, why Julia? And this is the question we confronted ourselves with in 2009 when we started our work on Julia. We wanted something that was dynamic, that was easy, fun to learn, fun to use, but at the same time, you know, while being compact, was also high performance. So, in one line, the way to say why we started the Julia project is: we wanted the performance of C with the productivity of Python, right?
I mean, if you take those two extremes at some level; I mean, not that Python is the most productive language, but, you know, it's fair to say that C is probably the fastest language out there, for the most part. But how do you take these two things and combine them together? It was, you know, by and large believed that this was impossible. So even when I went to grad school, you know, I learned MATLAB, and my thesis was a parallel MATLAB. And I was told that, you know, you've got to use all these tricks like vectorization and all these programming tricks to get going and get good performance out of dynamic languages. And the trick was always that, you know, you write your programs in this vectorized fashion. And why do you do that? The reason is so that you spend all your time executing code that is written in C libraries under the hood, as opposed to working in the language that you're actually programming in, right? So this is what we've always called the two language problem since we started the Julia project. The two language problem basically says that, you know, I will do all my exploratory algorithmic work in one language, a nice dynamic, very high level language. And then when I need to go into production, or when I need performance or I need parallelization, I will move to a different language, you know. So often the workflow is: I'll prototype in MATLAB and, when I want to get performance or deploy, I'll move; or I'll prototype in R and then I'll deploy in Java. Or if you're on Wall Street, it's often C++ that you hear about. So this was the background with which we started the Julia project. Just last week, we released Julia 0.5, which is our fifth major release of Julia over the last four years, roughly. And we expect Julia 1.0 to be released next year at JuliaCon in the US. It's going to be in Berkeley this year.
Well, I think it's going to be in Berkeley, but somewhere on the West Coast. And Julia 1.0 is about seven, eight months away from now, roughly; oh, maybe nine months away from now. And a lot of amazing stuff is expected to come out. Julia 0.5 is a foundational release for the journey towards Julia 1.0. It's the first release of Julia which enables very high levels of performance for functional programming. Now, I'm not one of the language geeks who can debate with you about the finer points of functional programming. I personally come from a scientific programming background and I worry more about matrices and linear algebra and parallel computing. But I have some very interesting things to share with you about our latest release, which is Julia 0.5. And I think it's worth pointing out, it was on Hacker News, it was trending for a while; this is probably not even readable. If you search Julia 0.5 on Hacker News, you'll see it. It was out here 24 days ago, and that seems wrong; I think there have been a couple of Hacker News posts. Anyway, so what makes it exciting is, let me point to our blog. So the two most recent posts tell you about Julia 0.5. I mean, I'm assuming at this point that you already know what Julia is, that you've tried it out, you've done a few things in it, and this is a bit backwards because we're going to cover all of that in the second half. But think of this as more of a preview of what all you could do with Julia, and that's what I'm going to focus on. So this was the release announcement, and we had a whole bunch of compiler and language changes. So the ability to write fast functional code; we got generator expressions, first-class generators this time; we got experimental multi-threading; a whole bunch of improvements in arrays. We decided to go with APL-style indexing and APL-style consistency rules, although we haven't quite reached there fully yet.
For all the people who care that array indexing starting from one is not the ideal thing and that it should start from zero, we just said, you know what, you can start from wherever you want. So we have experimental support for all kinds of indexing. Sorry? No, one as a fixed starting point? No. Oh, there's actually, I mean it sounds a bit of a joke, but the reason why we went ahead with this was because there were too many applications which needed different things, and there's actually a very detailed discussion of what needs to happen. Effectively it is one-based indexing, you start from one by default, but if you want to create interesting kinds of array types and you want to index into them using something that's not just the regular indexing strategy, you could start from, in fact, not just anywhere, but you could even have non-numerical indexes going forward should you want to. All right, so a bunch of other stuff. We had ports to the ARM and POWER architectures, so Julia now runs on the Raspberry Pi. If you have one, you can just download it from our website and control a robot or a self-driving car. Actually, I ought to show you the video of this. This is pretty fun. The second port that was just enabled was IBM's POWER8. So you could run Julia on any of the big IBM POWER machines. We are pretty much scaling all the way from the smallest to the biggest now. All right, the other major improvement in Julia 0.5 was Gallium, which is the Julia debugger. So now Julia actually has a complete debugger that is written in Julia and is integrated with our IDE, Juno. So this is junolab.org; it shows you the Julia IDE, which is not distributed with Julia right now, and what we are going to use in the demos is actually Jupyter, which is a notebook interface. It's easy to teach in, but if you want to do something more sophisticated, there's a full IDE; it's Atom-based, it's open source, and there'll be easy downloads available for it soon. All right.
You will cover the debugger, right? Perfect, okay. So why don't I focus a little bit on Julia 0.5 highlights, and before that, I wanted to show this video. Okay, so let's go to the BARC project website. The BARC project is a robotics group at UC Berkeley which is working on autonomous cars. Is this visible? You can kinda see it. If you can see this little car, it's about this big, and it's racing towards what is a parallel parking slot and it drifts into it, all right? So if you're online, you can look up the BARC project, and what's happening is that Julia is running on an ARM board on this vehicle itself, and it's using a very popular Julia library called JuMP. JuMP is a library for mathematical optimization. It's used very commonly for all kinds of operations research today. In fact, it's a de facto research platform for operations research, and Julia plus JuMP are doing real-time path planning on this vehicle. From this point onwards, it has actually even been put on full-size vehicles for path planning. Yeah, there's a talk about this in our YouTube videos from JuliaCon this year, and the person who developed this actually talks about what it takes to get it all going. So if you go to the JuliaCon website, all the videos are linked out there. Okay, so, while Julia was always a functional language under the hood, in the way it was thought about and created, functional programming was not fast in Julia until this release. So for example, if you had to pass a closure to map, yes, you could do it, it all worked well; it probably worked as well as or better than most other dynamic functional languages. But now, with this release, map in Julia is as fast as writing a for loop, for example.
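As a rough sketch of that claim (illustrative names, not the talk's actual notebook code), here is an elementwise function application written both as an explicit loop and as a map:

```julia
# Illustrative sketch: elementwise application via an explicit loop vs. map.
# Since Julia 0.5, the map version compiles to code as fast as the loop.
f(x) = 2x + 1

v = [1.0, 2.0, 3.0]

out = similar(v)              # preallocate the output array
for i in 1:length(v)
    out[i] = f(v[i])          # explicit loop, assigning each element
end

out2 = map(f, v)              # same result, now with loop-like speed
```

Both produce the same array; the point of the 0.5 release is that the closure passed to map no longer carries a large performance penalty.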
So if I iterate over a large array and apply a function to each of the elements, until two weeks ago, I would have said, if you need performance, don't do it; write a for loop, for i in 1:whatever, and have an array in which you are assigning the output, and then call the function. So I think I have code out here that does that, yeah. So basically, I iterate over the array in a for loop and assign its value, calling the function inside the loop. But today with Julia 0.5, you can use map and get the same level of performance as you did with the loop. This was an order of magnitude. So usually with map, is it fair to say 10x or even 100x? 20 to 50x; somewhere between 10x and 100x, depending on what you're doing, is the performance gain. And so what this makes it possible for you to do is actually start doing real high-performance functional programming in Julia today. Now, that's great. I mean, being someone who comes from more of an applications or scientific programming perspective, every time people told me, oh, just use map for everything, it's kind of annoying, okay, to be honest. Like, just use map for everything? No, I don't want to use map for everything, and I don't want to use map-reduce for everything. So one of the interesting things that happened in this release is also the introduction of a vectorization syntax along with map. Let me jump to it first because I think that's really exciting. So, vectorized function calls. So I have this function called clip; let's just say there's a function called clip out here, and it's doing, you know, if x is less than low, it returns low; if it's higher than high, it gives you high, okay. So it's a clip function, and the clip function actually works on values, right? X is a scalar value. But the function clip-dot, you know, the syntax of dot-parentheses, actually vectorizes a scalar function call.
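A hypothetical reconstruction of that clip function and the dot-call syntax (the talk's notebook may differ in its exact details):

```julia
# Hypothetical reconstruction of the clip example: a scalar function,
# vectorized over an array with the dot-parentheses syntax.
clip(x, lo, hi) = x < lo ? lo : (x > hi ? hi : x)

clip(0.5, 1.0, 2.0)           # scalar call: 0.5 is below the range, so 1.0

v = [0.3, 1.5, 2.7]
clip.(v, 1.0, 2.0)            # broadcast call: [1.0, 1.5, 2.0]

# Chained dot calls fuse into a single loop at the syntax level:
sqrt.(clip.(v, 1.0, 2.0))
```

The dot after the function name turns any scalar function into a vectorized one, with no special support needed from the library author.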
So clip out here, you know, if you look at it here, let's see if I can just run this; that might be the easiest thing to do. All right, so clip of 0.5 with one and two gave me 1.0, all right. But now if I give it a vector, right, and then I do a clip on this, that just works. So the dot-parentheses syntax in Julia now automatically vectorizes any function call. Initially, for example, we had a library of mathematical functions in Julia and we had all this sort of syntax to vectorize them, and it was kind of a nightmare of an implementation. But, you know, with map being fast and with dot-parentheses now, we get pretty good vectorization that is in the language, as opposed to being in the library. So it's a huge amount of net code deletion and a lot nicer programming in many cases, of course not always. The other interesting thing that this dot notation does in Julia is that if Julia comes across an expression with many dot-parentheses notations, so many vectorized function calls, it will actually fuse them at the parse level. So not only did we get fast functional programming by having fast map, we also got the dot-parentheses syntax for vectorization of function calls and fusion of these calls. So this is a huge amount of work and effort that has gone into this release. We have amazingly fast array comprehensions now as a result of this release. We have generator expressions now, which I'm not going to focus on, but if you are familiar with Julia, what has changed out here? If I want to run this, I need this here. All right, so I created this array of tuples, for example, and this is the syntax: basically square brackets, an expression, and then for-clauses saying what the ranges are. So that's how you create an array in Julia using an array comprehension. And generators make it much easier. So for example, let me just run this once more to get rid of the compilation allocations.
Right, so this is the array version, the array comprehension, these square brackets here. What they do is call the function sum of t, for t in that array of tuples, summing up the tuples in each row. So what that code is doing is just summing these things up, and then summing these things up, and so on and so forth. And the way it's doing it is actually creating a new resulting vector, which is the sums, and then it's calling extrema on it to compute the extrema right here. As opposed to that, if I don't put the square brackets, I have a generator expression now. And with the generator expression, I have the benefit of passing the generator to the extrema function, which then sort of accumulates the results as it goes along. In the first case, I'm allocating an output array; in the second case, I'm not. And that is reflected in this allocation here. So when I time it, Julia tells me what memory was allocated. This one had two kilobytes; this had 200 bytes. The 200 bytes is because we are executing in global scope. So it's a detail, I'm not gonna worry about it at the moment. So just, as you can see, a lot of cleanup under the hood that leads to nice functional programming. All those same benefits extend to dictionaries. I'm not gonna get into array indexing at the moment. It's just a lot of stuff that one has to think about. But if you're a MATLAB user, you're gonna hate it. If you're not a MATLAB user, you'll probably not notice and think it's doing the same thing that you'd expect it to do. Any MATLAB users? I never asked that. All right, so no worries then. All right, so this is in a nutshell all the improvements that Julia 0.5 has seen, and these are sort of the big improvements, although we've had a whole bunch of other improvements in APIs, in performance. We've also had a fair share of regressions, which the dot releases are now fixing, but those are few and far between.
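The comprehension-versus-generator comparison just described can be sketched like this (a small-scale reconstruction, not the notebook's exact data):

```julia
# Reconstruction of the comprehension vs. generator comparison.
tuples = [(i, j) for i in 1:3, j in 1:3]   # array comprehension: 3x3 array of tuples

# Square brackets allocate an intermediate array of the sums...
lo_hi = extrema([sum(t) for t in tuples])

# ...while dropping the brackets gives a lazy generator that extrema
# consumes element by element, with no intermediate array allocated.
lo_hi2 = extrema(sum(t) for t in tuples)
```

Both give the same (minimum, maximum) pair; the generator form simply skips the temporary array, which is what the allocation numbers in the demo were showing.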
With this base of having high quality, high performance functional programming in the Julia 0.5 release, we are now ready to race towards Julia 1.0, and that's the plan for the next nine months. We have an IDE, we have a good debugger, we have a profiler, we have foreign function interfaces for all the key languages you could care about: C, C++, Java, Fortran, Python, R, a little bit of MATLAB as well. So pretty much all of this environment is coming together very quickly. If you look at the package ecosystem in Julia, you can go to pkg.julialang.org, and we are now at over a thousand registered packages, in fact 1137 as of today. So these are community-contributed Julia packages, in addition to any Python or R package that you could already call from Julia. So when it comes to doing any kind of numerical, analytical, scientific programming, Julia not only is high performance by itself, but it can also leverage all these Julia libraries as well as libraries written in just about any other language with very little effort. Okay, so that's the Julia ecosystem as it stands today. There are some very interesting things happening in the space of machine learning in Julia, for example, that I'd like to share with you. This is the Julia Computing website, a company that I co-founded with several of Julia's co-creators; Shashi and I both work there now. But what I wanted to show you was this blog post. So, machine learning in Julia. This is a question everyone always asks me, so I thought I would just go there. There's a whole lot of support for machine learning in Julia today. There's a lot of native machine learning libraries already. With our support for arrays and matrices being first class, you can write your own machine learning algorithms that run at incredibly high speed, but you can also have the same code run on GPUs with very little effort. And that is by leveraging libraries such as ArrayFire.
So ArrayFire will replace your regular Julia arrays in CPU memory with GPU-executed arrays in GPU memory. So I think there's some performance numbers here. Yeah, there's GPU acceleration, but there's no chart, unfortunately. I think for many of the array-based operations, we often see speedups between 5x and 10x using GPUs, using ArrayFire. Of course, the next step for us is to go for native code generation on GPUs. That work is in progress. It should happen in the next few months, where you can actually start playing with it: write your code in Julia, press a button, and it'll execute on the GPU. So if you're interested in machine learning in Julia, there's a couple of blog posts here on GPU programming in Julia. So there's good stuff here. I think this video might give a good idea of how easy it is. So this is a simple image segmentation application. This is Hurricane Katrina, and what's happening is that the Julia program that's running on this is actually classifying every pixel as cloud, land or water. On the left-hand side is a real-time run of the CPU version of this algorithm. So it's taking this video, extracting a frame of the video, and on each frame it's running k-nearest neighbors, running a classification to decide what each pixel is. On the left was the CPU version, on the right was the GPU version. So let me just play it a little bit more. So you can see that even as the CPU version is kind of chugging along, the GPU version just kind of races through and finishes off right there. So, you know, this is one of the things that we like to think about, right? The right kinds of abstractions to work with. Arrays are a natural abstraction when it comes to working with scientific data or machine learning, and having libraries like ArrayFire plug into Julia's mechanism of multiple dispatch makes it very easy.
So this is the code, I believe. Effectively, oh, this is not the code for this, this is an example, so let's see if the code's right here. Well, the code's not in this blog post, so that's probably in a Git repo, but it's the same exact code, no change, and it runs on the CPU, runs on the GPU; you just gotta tell it which one you wanna run it on. Okay, so apart from this, there are a bunch of deep learning libraries that have come out in Julia. We have a library called Mocha.jl, which is a pure Julia deep learning library, and it's pretty good. A very popular one is MXNet. Are people here doing deep learning, or should I maybe move on from deep learning? Okay, so not many. So there's three or four libraries for deep learning in Julia, and the latest one is a very interesting one called Merlin. Merlin is a deep learning framework in Julia for the purposes of doing natural language processing. So here's a cool demo that they have put together and, you know, we are at Functional Conf, right? And in real time, it classified. I have no idea what all this stuff is, it's just kind of fun so I thought I'd show you guys. But the classification in the back is running live; the model has been trained in Julia on a very large corpus, and in real time it's able to do this kind of classification of different parts of the text. So, you know, the reason why I am saying all of this about machine learning is that the language is incredibly easy to learn, easy to use, extensible, and that makes it easy not just to reuse. I mean, a lot of people come from Spark, right? And they've heard Spark's great for machine learning, but really Spark's good for, you know, big data and all this other stuff, and all you can really run in Spark are the few machine learning libraries that are bundled within Spark, right? If you had to write your own, you're pretty much going to have to figure it out on your own and write it in Spark, or pick something else.
Julia actually is pretty composable and flexible, which is why, you know, even in this short amount of time there are already four deep learning libraries in Julia. There are tons of machine learning libraries, and this is an area of active research, by no means settled, and we have a long way to go. All right, so with this, I think I will hand over to Shashi. Does everyone have... yeah, sure. Right, so R is actually a really amazing language for statistical computing, and the libraries make it hard to move away from, and, you know, if your problem does not require you to move away from R, you should not. That's what I always tell people. But if you need higher performance, if you find yourself having to learn something like Rcpp and write, you know, C++ code to get performance out of R, then you are a natural candidate for considering Julia. Another good reason to use Julia is, you know, now that Julia's been around for a while, it has some amazing libraries like JuMP. You know, if you have to do mathematical optimization, the kind of libraries Julia has are not available in Python or just about anything else. So that's a good reason to use Julia. If you need to go parallel, if you need, you know, really high levels of performance, if you need to know what's happening inside on the machine, Julia's your language. But if you have small data sets and R is already chugging along, that's fine. I don't think those programs are a good candidate for migration to Julia. Is that... I would say performance is the keyword, right? Yes. Yeah. Julia is in use by... why don't I hand over this to you so you can set up while I answer a couple of questions? So Julia is actually in use by, I expect, hundreds of companies worldwide.
The ones that I know about are, you know... so Julia Computing, the company that I work for, has about 20 paying customers, most of them in the world of finance, but there's people in aerospace, a lot of governments, a lot of banking, a lot of federal, what do you call them, central bankers. So the New York Fed, you know, and so on and so forth. A lot of people doing robotics, just about everywhere there's people using Julia. The basic use of Julia is segmented into three communities. One is academic and teaching, which I would say makes up more than 50 to 60% of Julia use, and that's tens of thousands of users out there. Right now on JuliaBox, I would believe there are at least, you know, dozens of students doing their homework at this very moment. The second one is the research community in universities or at corporate research labs, which is sort of the next wave of people who started using Julia, and that started happening about a couple of years ago. The last wave is industry, which needs, you know, more readiness to use Julia, and now you've started seeing, you know, very early use in industry, and many of the users who started early have actually already gone into production. So that's... I think I'm not gonna take any more questions. We'll do that at the end. Yes, so in fact, I know of at least groups at MIT, UC Berkeley, and Stanford, all using Julia for robotics applications. Those are the ones that I know about, right? I usually only know about 10% of what happens. Thanks for this. This talk will be a walkthrough of Julia itself, and I would like to show you how simple it is and how nice a language it is in itself, even if you may not care about the performance it offers. It's a very elegantly designed language. So if you guys want to follow along, you can go to juliabox.com and go to the tutorial page, inside which there is this notebook, the tutorial section. So I'll just show you.
So inside it is this notebook, 00 start tutorial. If you click on it, you'll get something like this. This is called a Jupyter notebook. The Ju in Jupyter stands for Julia. So the Jupyter notebook used to be called the IPython notebook; now it works with Julia, Python, and R, so it's called Jupyter. So what you can do is, in these cells over here, you can write some Julia code, and hitting shift-enter is going to run it. Yeah, or you can click the play button as well to achieve the same effect as shift-enter. Yeah, so if you guys have JuliaBox opened and don't know how to execute some code, ask; happy to clarify. But if not, then we will move on to the first basic notebook over here. Okay, so Julia basics. So Julia is a dynamic language, but we use types when they are necessary, basically. So here's an example, a function called rand, and I'm creating an array of 100 rows and 300 columns by doing this. That's an array, and Julia has very nice support for linear algebra operations. So over here, there are like three things. So this is the backslash solve of a matrix with a vector, and then you have eigenvalue functions and such right inside the base standard library, so that it's easy for you guys to start using it. And we have big integers, big numbers basically. So you can take, this is basically a complex number, and if you wrap it in big, it becomes arbitrary precision. So you can call any numerical function on it. And it has good support for strings and string manipulation, unlike many other numerical computing environments, which just ignore this. So this is regular expression matching. And we have good Unicode support in the code itself. So for example, this glyph over here is actually alpha-hat prime subscripted with two. So let's try to write that: alpha, tab. My internet is slow, I've tethered through my phone. So anyway, hat, tab; I think that's the hat.
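The basics just shown might be reconstructed like this. Note one assumption: in the Julia 0.5 of the talk these linear algebra functions lived in Base, while in current Julia they live in the LinearAlgebra standard library.

```julia
# Sketch of the basics: arrays, linear algebra, big numbers, regexes.
using LinearAlgebra           # needed in Julia >= 1.0; built in on 0.5

A = rand(100, 300)            # 100x300 array of uniform random numbers

B = rand(3, 3); b = rand(3)
x = B \ b                     # backslash solve of the linear system B * x = b
vals = eigvals(B)             # eigenvalues, right in the standard library

big(2)^100                    # arbitrary-precision integer arithmetic

m = match(r"(\w+) (\w+)", "hello world")   # regular-expression matching
```

Each of these is a one-liner with no library boilerplate, which is the point being made in the notebook.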
And then you can say underscore two to give a subscript two, and then you can say prime, tab, and it becomes this. And this whole thing is just a single variable now, so I can reference it using that. So it's very nice to use this while writing code from a paper you're reading, for example some thermodynamics paper or, like, you know, some mechanics paper which uses these notations. And you can define functions; some of the characters are parsed as infix operators. So for example, this is a Unicode character, double less-than, so I'm defining this to be much less than basically: x much-less-than y if x is less than 0.1 of y. So if I run this, it's going to give me the result. Okay, so false, true is the result for that. And yeah, this is another example, using the Kronecker operator over here. And functions in Julia are just-in-time compiled, which means that when you actually execute something, only then does the compilation happen. So when I just defined a function, as you can see, Julia does not innovate on syntax at all. It's very obvious. So you say foo, which takes an argument x, returns x plus one. It's the same as this form in one line. So if I call foo(3) for the first time, it compiles the function foo. And then the second time I call the same function, it's going to use the compiled version of it, which it caches in memory. And so over here, I called it with an integer argument, so it compiled the function for integers. And here I'm calling it with a float argument, so it's going to go and compile it again for a float argument. And you can also call it with a vector, and because this plus operation works on vectors, it just works. So as you can see, foo, without specifying the type of x, becomes a generic function. So you can pass in any type of argument and see the output. So let's actually see what code is generated to run on the machine, right? Julia lets you do this right away.
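A sketch of that foo example: one untyped definition, and Julia JIT-compiles a specialization per concrete argument type on first call. One caveat: in the Julia 0.5 of the talk, calling foo directly on a vector worked because array-plus-scalar was allowed; in current Julia you broadcast with a dot instead.

```julia
# One generic definition; Julia compiles a specialized version
# for each concrete argument type the first time it's called.
foo(x) = x + 1

foo(3)           # first call compiles foo for Int
foo(3.0)         # compiles a separate specialization for Float64
foo.([1, 2, 3])  # broadcast over a vector (foo([1, 2, 3]) itself worked on 0.5)
```

The compiled specializations are cached, so subsequent calls with the same argument types skip compilation entirely.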
So you can say code_native: I want to see the native code that this thing generates for foo when I pass in an integer. Oh yeah, a couple of types. So it's just a... where is the add instruction? That's that. Oh, I see, okay, wow. Okay, it doesn't need an add actually. Okay, okay, fine. Float64. So if you give it a float, it's going to show you the code it compiles for a float. This is just the network lag. So that doesn't look correct. Oh yeah, addsd. But yeah, I think the LLVM IR is actually more readable, and we use LLVM under the hood for compilation. So as you can see... oh yeah, it already converted the one literal into a float. So that's the basics. And then we have a ton of plotting packages these days in Julia. So one of them is PyPlot, which is just matplotlib wrapped using a module to call Python, which I'm going to show you next. So these are the types of plots you can make. And the syntax is straightforward. You take an array of numbers as X, an array of numbers as Y, and then pass in the same arguments that you would to matplotlib, and it draws something. So I'm not going to explain more about the functions and their APIs, but as you can see, you can plot pretty complicated things, like surf, which is a surface plot in 3D. And there are many more packages, some of which look even better, like one called Gadfly, which is written completely in Julia. Yeah, you can check it out if you like. So the next important concept in learning Julia is multiple dispatch, right? So what is dispatch? In an object-oriented language, when you say X.foo and then pass it some arguments, what is it going to do? It's going to look up the class of X, go to that class definition, look up the foo method in that class definition, and then call that foo method with something like this or self assigned to X, right? And then you do something with the X value. But in Julia, we don't special-case the first argument.
So if you think about it, x is actually the first argument of foo. In Julia, we don't do the x.foo thing. Instead we write method(object, arg1, arg2) rather than object.method(arg1, arg2), and the method is chosen based on the types of all the arguments. This lets you cover a much wider space of what you can specialize on. For example, you can see the methods of any function by passing it to this function called methods; it lists out all the methods you have for *. So there's a method to multiply two booleans, two Float32 numbers, two Float64 numbers, and so on. And there is a method to multiply two strings, which is concatenation. * was chosen for this because concatenation is not commutative, and multiplication is not commutative in general either, whereas plus doesn't have those semantics. But you could define plus on strings. So if I run this cell, it tells me there's no method for plus on strings. So I can go ahead and define it, which is done in the next line. I'm importing + from the Base module, which is the standard library of Julia, and then adding a method to it, where I'm saying that if I get two AbstractStrings (it could be, I don't know, a UTF8String or an ASCIIString), I'm going to return x, concatenated with a space, and then y. So if I now do "hello" plus "world", it says "hello world". This is something you could do in C++, for example, with operator overloading. But in C++ it's static, and in Julia it's dynamic. What do I mean by that? If you have a function that was defined beforehand in terms of the + operator, it now just starts to work. For example, the fallback method for sum is defined using + inside the standard library, and we just defined + for strings.
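The string example, written out. Extending Base's + for AbstractString is the talk's deliberate illustration, not something Base itself does:

```julia
import Base: +  # must import + from Base before adding a method to it

# The talk's example: + on strings as space-separated concatenation
+(x::AbstractString, y::AbstractString) = x * " " * y

println("hello" + "world")   # hello world

# Dispatch is dynamic: sum's generic fallback is written in terms of +,
# so summing a list of strings now works too.
println(sum(["never", "odd", "or", "even"]))
```

This is the point about dynamism: `sum` was compiled into the standard library long before our method existed, yet it picks it up immediately.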
Now I can sum a list of strings, and it does the right thing. Okay, so you can think of type declarations in functions as basically specializers of what is going to happen when you pass different kinds of arguments. This is much more powerful than single dispatch, I think. And the next thing we will look at is the ability to call C. Calling C is itself a language feature, and calling Fortran and Python were implemented on top of it. So you can just call printf like this. This is a symbol in Julia: you say I want to call this, it returns the C integer type, and I want to pass it a pointer to UInt8, which is this string. Julia converts the string into a pointer to UInt8, passes it to printf, and returns the return value, which is the number of characters printed. I'm defining a function called sin over here, which just calls sin from the libm C library. And if you see, it's the same as Julia's sin of the value, because Julia also uses libm under the hood. So, calling Python. Calling Python is as simple as importing this PyCall module, and you say @pyimport math as math. The at sign is the way of calling a macro: it's not a function call, it's a macro call. What a macro does is take some expression and rewrite it in some other way. Here you don't need to know what it rewrites it as, but it gives you a nice syntax to import the math library from Python. And then you can call cos on that and see that it's the same as cos in Julia. I'm not going to go into the further details of this. These notebooks were made by Steven G. Johnson, who's a professor at MIT, and he wrote PyCall and PyPlot, actually. There's a lot more interesting information about PyCall if you want to read about it. So the next thing is something called Interact.
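A minimal ccall sketch along those lines. This assumes a Unix-like libc/libm already loaded into the process (which is how Julia resolves bare symbols like :puts and :sin); puts is used here instead of the variadic printf to keep the signature simple:

```julia
# Call C's puts from libc: writes the string plus a newline,
# returns a nonnegative value on success.
ret = ccall(:puts, Cint, (Cstring,), "hello from C")

# Wrap libm's sin, as in the talk; it agrees with Julia's own sin.
c_sin(x) = ccall(:sin, Float64, (Float64,), x)
println(c_sin(1.0))
println(c_sin(1.0) ≈ sin(1.0))  # true
```

No glue code or wrapper generation step is needed; the conversion from a Julia String to a C pointer happens automatically.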
It's a bunch of interactive widgets which let you play with your Julia code. So for example, say you're creating an array of random numbers which is n by n in size. You can take the n from the for-loop expression: you just take a for loop and prefix it with this macro, @manipulate, and what it does is give you a slider you can use to change the value of n. (Let's try this; maybe I need to re-log in.) Interact was a project I did as part of GSoC. Okay, so as I vary the slider, a different n gets substituted, so I can get an array of any size I want, 22, say. And you can do cooler things with this. In IJulia, for any Julia object, you can specify a way to display it prettily with HTML and so on. So here I'm doing a linspace on colors. What is that? It's a linear space of colors starting from black and going to whatever RGB I set. So if I say more red, it generates a list of colors and shows it, and I can increase the number of colors I want to see as well. And this applies to plots as well: I can, for example, make this plot and then vary the parameters and see how they affect it. I varied the beta over here. This is actually rendering the plot on a server and sending it back; that's why it's a bit slow. So that's Interact; you can use it to present things really well. And then, okay, metaprogramming. What is metaprogramming? It's writing code that writes code. So how many of you have ever used a macro or something like that? How many of you have written programs that generate programs you later ran, not necessarily using a macro? Awesome, okay. So you know what this means. In Julia, Julia expressions are represented as Julia data structures.
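The "expressions as data" point can be seen directly by quoting an expression and inspecting it:

```julia
# Quote the expression instead of evaluating it (x and y need not exist)
ex = :(x - 2y)

println(typeof(ex))   # Expr
println(ex.head)      # call
println(ex.args)      # the function :-, then :x, then the 2y subexpression
dump(ex)              # full tree: 2y is itself a :call to * on 2 and :y
```

The `dump` output shows exactly the structure walked through below: a head, and an args array whose last element is the nested multiplication.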
I ran this expression, which says ex = x - 2y. And obviously x and y are not defined, so it's not going to run. But I can quote that expression using this syntax (it's just the syntax for quoting), and I get back this curious object, which, if I dump it, tells me its type. It's an Expr, and it has all these fields: it has a head, and args, which is an array of three elements. It contains minus, the thing being called; the first argument, which is the symbol x; and the second argument, which is actually 2 * y. In Julia, when you write 2y, it's actually a multiplication operation, so that position parses as a multiplication. So this is the expression we have here. Now let's try to write a macro. What does a macro do? It takes an expression of one form, gives you back an expression of another form, and runs the transformed expression instead of the original one. So let's see how that works. In Julia, when you have a string of code, it's first parsed, and that results in the AST, which is represented with these Expr objects. Then that goes into the macro, the macro returns a new Expr, and that expression is what gets compiled. So here's a simple macro which takes an expression and flips the arguments. Our macro is called @reverse, and that's the macro definition over here. What I'm doing is checking whether the expression is actually a call, and if it is, I return a call expression where the first argument remains the same (the first argument is always the function being called) and the rest are reversed. Otherwise I just return the same expression, if it's not a call expression. So when I say 1 - 4, it's a call expression with minus as the first element; what happens is it becomes 4 - 1, and that is what gets executed.
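The argument-flipping macro just described, written out as a minimal sketch:

```julia
# If the expression is a call, keep the function (args[1]) and
# reverse the remaining arguments; pass anything else through unchanged.
macro reverse(ex)
    if isa(ex, Expr) && ex.head == :call
        return Expr(:call, ex.args[1], reverse(ex.args[2:end])...)
    else
        return ex
    end
end

println(@reverse 1 - 4)   # the rewritten 4 - 1 runs, printing 3
```

The transformation happens at compile time: by the time the code runs, only `4 - 1` exists.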
I can see that this actually returns 4 - 1 by quoting it and then calling this thing called macroexpand, which Julia exposes so you can see what actually happens when the macro is expanded. That's how a macro works. So that's a very simple macro, but as you saw, you can write complicated macros like the @manipulate I showed in the previous example. So here's an example of a real use of macros. This is Horner's method for evaluating polynomials. If there's a polynomial of the form c0 + c1 x + c2 x^2 + ... + cn x^n, you can use Horner's rule, which rewrites it as c0 + x times the rest, nested all the way down. What ends up happening is that you never compute x to the power k from scratch: naively you'd compute a power for every term, but here, once you have the partial result, a single multiply by x gets you to the next one. So you can use that to write code like this, where this is the variable and these are the coefficients of the polynomial, and then you compute one polynomial and divide it by another. The same function, erfinv, is implemented in SciPy, so let's compare the performance of the two. So I'm running the code over here: erfinv is the version we implemented, and s.erfinv comes from SciPy's special functions. As you can see, ours is some five times faster, or maybe eight-ish. So you can use macros to get performance as well. A macro is also a tool for making your life easy as a programmer, to solve whatever problem you have. (Can everyone see this? Okay. So this is the Julia reference, the same as the other one, but faster. Okay, can you see it now? I have this full-screen, very rigid, not-configurable screen thing. Yeah, okay.) So there were some things I wanted to show.
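A sketch of the idea behind the Horner macro, modeled on the one in Steven G. Johnson's notebook (and Base.Math); the names and the tiny example polynomial here are illustrative:

```julia
# Evaluate c0 + c1*x + ... + cn*x^n as nested multiply-adds:
# ((cn*x + c_{n-1})*x + ...)*x + c0, with no explicit powers at all.
macro horner(x, cs...)
    ex = esc(cs[end])
    for i in length(cs)-1:-1:1
        ex = :(muladd(t, $ex, $(esc(cs[i]))))
    end
    # Bind x once to t, then run the fully unrolled chain.
    return Expr(:block, :(t = $(esc(x))), ex)
end

p(x) = @horner(x, 2, 3, 4)   # 2 + 3x + 4x^2, unrolled at compile time
println(p(2.0))              # 24.0
```

Because the loop runs at macro-expansion time, the generated code for `p` is a straight line of `muladd`s, which is where the speedup over an interpreted coefficient loop comes from.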
Julia has excellent introspection tools. Like I said, if I'm doing 1 + 1, for example, I want to see which function is actually getting called, which specific method of plus. To do that, I can use this function called which: which(+, (Int, Int)), and it tells me where it is. It's defined in int.jl inside Base. (How much time do I have?) And there is a macro, @which, which is an easier form of that: you give it the values, and the macro itself takes the types of those values and gives you this. You can also see what happens when you add a complex number to a real number. So I actually want to see the code, so I can just @edit it. It takes me to nano, which is not good; okay, I'm going to set this EDITOR variable and come back to Julia. With that set, it uses vim, which is what I use, and it takes me directly to the line that defines that function: it's basically, take the real number, add it to the real part of the complex number you gave me, and keep the imaginary part as it is. So you might be wondering how the complex type came about, since I just wrote im. The answer is that im has a curious way of being shown on screen; it just prints as im. But if you look at its type, it's a complex number of booleans, and I can dump it and see that it's actually a complex number with real part false and imaginary part true. So this is how you represent the unit imaginary number. So, as I showed you before, if I have x = 21 and I say 2x, it becomes 42. Similarly, if you say 2im, it calls this function, real times complex number, which I could also show, but I'm going to skip that. So that is how you start to work with Julia code.
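The introspection shown here, in code form (again, @which comes from InteractiveUtils in current Julia):

```julia
using InteractiveUtils

println(@which 1 + 1)   # the Method: integer + defined in int.jl in Base

# The unit imaginary number is a Complex of Bools: re = false, im = true
println(typeof(im))     # Complex{Bool}
dump(im)

# Multiplying by a real promotes: 2im is Complex{Int64}(0, 2)
println(2im)
println(typeof(2im))
```

This is the workflow described above: try a value, ask which method handled it, then look at how the type is actually represented.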
You try something, you want to find out how it's actually implemented, and you go inside. So I already showed you code_llvm and code_native. The other thing is the Gallium debugger. The entry point to the debugger is something like this: I want to see how gcd(12, 4) is executed, so I just say @enter gcd(12, 4), and it's going to take a while the first time because it's compiling a lot of code. So here's the implementation. It's that method we learned in college: check to see if a is zero, and if it is, return the absolute value of b; but a is not zero. So let's step forward and see if b is zero; it's not. Okay, so you can step through code like this. An interesting thing to step through is this: 1 + 1.0. What are we doing here? We're taking an integer and a float and then adding them. There has to be some mechanism to do this. In C, for example, this mechanism is built into the language: the integer argument gets promoted to a floating-point argument, and it's added as a floating-point addition. But in Julia, all of this is defined inside the language. Here's the catch-all method for adding two numbers. It says: okay, call plus on the promoted values of x and y. So we can actually try to see what promote returns when given 1 and 1.0. It simply returns 1.0 and 1.0; if I give it 2 and 1.0, it returns 2.0 and 1.0. Let's go back to the debugger and step inside promote. How does promote itself work? It has this thing called promote_type, which the user can define promotion rules with: I just define a method of promote_type for my two types and say what type gets returned. Then it calls this function called convert to the promoted type on each argument and returns them.
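The promotion machinery can be poked at directly, without the debugger:

```julia
println(promote(1, 1.0))             # (1.0, 1.0)
println(promote(2, 1.0))             # (2.0, 1.0)
println(promote_type(Int, Float64))  # Float64

# The catch-all addition method in Base looks essentially like:
#   +(x::Number, y::Number) = +(promote(x, y)...)
# i.e. promote both arguments to a common type, then dispatch again.
println(1 + 1.0)                     # 2.0
```

All of this is ordinary Julia code, which is the contrast being drawn with C, where the promotion rules are baked into the language itself.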
So this might be very detailed, but I still want to show you what actually happened when I stepped through this. As you can see, there are a lot of steps going on over here, and ultimately an LLVM intrinsic is called; finally I come out of this whole stack of code I was stepping through. But if you look at code_native of 1 + 1.0, Julia was clever enough to remove all that abstraction and give you this very fast code to just add two numbers. So that's a good example of how the compiler is good at removing your high-level abstractions and getting right down to the hardware when it actually runs the code. Yeah, so I think that's what I had in mind to show you, but if you have any questions, I'm happy to take them and show you something. (Question.) Oh, sure. So there is this thing called @async; it's a macro. Let's say sleep for one second and then print hello. Did you see that? That actually happened in a different task and then printed hello. So I can have any number of these. No, it's a green thread, a lightweight thread. So three hellos got printed after a second. And when you have a task running in the background, you can assign it to some variable and then wait on it later, in another task maybe, and what happens is it waits for that task to finish. (Question about timing a sum of ten million, 10^7, numbers.) Yeah, so the first time includes the compilation overhead; the second time it has no allocation, for example. Can you do it with mean? I don't think you can; let's see. Yeah, mean is kind of a weird beast. Right, okay, it does work. And does it allocate? It does, but it's supposed to; it allocated 80 MB. Right, yeah. So you can index using booleans.
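The task example from the Q&A, sketched with shorter sleeps; in current Julia, fetch returns a task's value (older versions used wait for that):

```julia
# Launch three lightweight (green) tasks; each sleeps, then prints.
tasks = [@async(begin sleep(0.1); println("hello") end) for _ in 1:3]
foreach(wait, tasks)   # block until all three have finished

# A background task whose result we collect later:
t = @async (sleep(0.1); 1 + 1)
println(fetch(t))      # 2
```

All three sleeps overlap, so the three hellos appear together after roughly one sleep interval rather than three.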
I'll show you what we actually changed, but before that I'll also show you booleans: indexing with an array of booleans, does it work? Yes, some of the values got taken out. Is this what you wanted? Let's try this. So yeah, that should give you back my matrix. But the thing that we changed is: if you do this now, you get a vector, and in the previous version you would get a two-dimensional matrix. I have that as well; that one is running 0.4.5. So with A = rand(10, 10), A[1, :] used to give a row back, but this is Julia 0.5, and the new behavior is that I get a vector back: indexing with a scalar basically drops that dimension. Over here, you see, the dimension is actually a type parameter, and it's two there; here it's one. But you can get the older effect by just passing a vector of one index, [1]. This is the APL-style indexing change, and it turns out this actually needs fewer rules than the previous behavior, so you write less code than you otherwise would. (Question: what is Julia implemented in?) Julia is implemented mostly in C, using LLVM as the backend for compiling to machine code. (Question about IO.) Oh, that uses libuv, "Lib Unicorn Velociraptor", a library developed by the Node.js developers. We use it for the event system everywhere, including IO and such; anything that blocks goes through libuv. (Question about booleans.) A Bool is a byte in size, but there's this thing called BitArray, which you can use if you want packed bits. I don't know how to construct one offhand, maybe this. Yeah, if I do a sizeof, it's 16 bytes for this one, but the elements take one bit each. Whereas if you write an array literal mixing bools and integers, I think it's going to promote the bools to integers. Yeah, we can see what happens there. That's not good, is it? Each bool would need to become an integer; I guess that was the reason. Sorry?
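The 0.5 indexing behavior discussed here, runnable in current Julia as well:

```julia
A = rand(10, 10)
println(size(A[1, :]))    # (10,): a scalar index drops that dimension
println(size(A[[1], :]))  # (1, 10): a one-element index vector keeps the row shape

# Boolean (logical) indexing selects the elements where the mask is true
v = [1, 2, 3, 4]
println(v[v .> 2])        # [3, 4]

# BitArray packs booleans one bit per element, unlike Vector{Bool}
b = BitArray([true, false, true])
println(typeof(b))
```

This is the APL-style rule: the dimensionality of the result follows the dimensionality of the indices, scalar indices contributing none.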
Oh yeah, sure, it's on the website, actually. If you go to julialang.org, I can show you: slash benchmarks, I think. So these are various microbenchmarks, and these are the languages, and this is the multiple of the time taken by C, on a log scale. So if you look at Julia, it's over here, very close to one, which means as fast as C, and on one or two of them it's faster than C. And surprisingly, JavaScript is mostly fast, and Octave is the slowest, I think; MATLAB is over here, R is over here. (Question: how do you like it?) It's very good. In my work I mostly use it for non-numerical stuff, and I really love it. Multiple dispatch saves a lot of code; you'll understand it once you start using it, because when you annotate types in a function, you're actually adding behavior rather than just using them for type safety. Any other questions? Okay, thank you.