Thanks, Naresh. I think I did only a small thing by connecting Naresh and Sheamus, but Naresh has been running very high-quality conferences in Bangalore over the last few years, and it's my honor and privilege to be here with all of you. All right, so without further ado, I'll jump into my talk. Some of you may know me as one of the co-creators of the Julia language. What I'm going to do is slightly different from the rest of the talks today. There is an amazing roster of application talks, and what I personally like to do is focus on the abstractions and the fundamentals under the hood: why we built Julia, what we're doing with it, and hopefully set the stage for some of the things that are coming in the next few years, some of the things that may drive the next round of innovation.

I always like to start with a show of hands; this is a personal poll I take at every conference I speak at. How many people here are using Julia, or have done one plus one in Julia in any way? OK, I see about five hands. That's great, because if I had done this last year or the year before, it would have been zero, so I'd say that is enormous progress. How many folks here are using Python? Not everyone, but 90 percent plus. How about C++? That doesn't seem to be very popular. Java? OK, a few hands. And JavaScript? What else should I ask? R? I'd say 20 percent maybe. What else do people use here? Scala? I think Julia is doing better than Scala here. APL? Someone wanted me to ask about APL; I don't think so. OK, the results are not surprising. Everyone around the world is now using Python, and I don't need to retread why people use Python, but it's good to refresh ourselves. Python is an amazingly productive language; it's really fun to use. R started out its life as a statistical language. But what I'm showing you here is a list of about 40 languages that are all dynamic languages, so you don't see C++ and Java on this list. When we started with Julia, this is what existed before us: dynamic, JIT-compiled languages that had been around for a long time. And yet we had been unable to solve the two-language problem, and that's why we started Julia.

The project is now about 10 years old; we started in 2009. The purpose was very simple: we wanted something that was as easy as Python or R but as fast as C++ or Java, so you have the same language for your prototyping and for production, and you don't write the same program twice. Hence, solve the two-language problem. It's been 10 years now, we released Julia 1.0 about a year ago, and the growth has been simply staggering. I'll share some metrics with you a little later.

I always like to show this prize slide early on, because a lot of people think that if something has won a prize, then maybe it's worth knowing about, and maybe it's something good, right? I've been really excited that Julia received the James H. Wilkinson Prize for Numerical Software. We got this prize early this year, and it's a rare prize; it's only given out once every four years. We accepted it on behalf of the Julia community, and we've been excited about it. My colleague Keno Fischer was chosen for 30 Under 30.
Professor Alan Edelman has received a number of prizes for his work on Julia. And people who write packages in Julia are being recognized themselves: the package ecosystem in Julia is now beginning to be recognized by the professional communities in the form of prizes. Miles, Joey, and Iain are the authors of a tool called JuMP, which is an amazing set of tools for mathematical programming: linear programming, quadratic programming, logistics, all of these things. I'll talk a little more about it as we go along. It's just incredible to see all of this coming together.

Now, to set the stage, I'm going to do two things in this talk. I'm going to motivate why we built Julia and where we've come from, but also where we're going with Julia now. That should give you a good feel for both the history and the future. The future, of course, is software 2.0.

Does machine learning really need a new programming language? It's been our thesis that the machine learning world needs a new programming language, and hopefully by the time I've finished my talk you're going to believe me, or maybe you'll come and challenge me in the break afterwards. But let me provide some quotes from a few people. Yann LeCun recently received the ACM Turing Award for his work on deep learning, and he tweeted that deep learning has outlived its usefulness as a buzz phrase: deep learning is dead, long live differentiable programming. What does this phrase mean? If you unpack it a bit, deep learning is basically powered by two things: automatic differentiation of neural networks, and running your code at incredible speed on GPUs or TPUs or whatever new hardware comes along. The ability to throw amazing amounts of data at a neural network, which through automatic differentiation is able to minimize your loss, and then to run it at extreme speed on GPUs and TPUs, is what made this whole thing possible.

So the question that naturally comes up is: why restrict ourselves to just neural networks? Neural networks are only one form of program that you can do automatic differentiation on. Why not do it across a larger set of programs, a larger set of program structures? That's what differentiable programming is about: can we generalize beyond neural networks, which have been very successful so far at solving vision, speech, NLP, and various other unsolved problems in computer science, and go further? That's what I'm going to motivate in the rest of my talk.

Andrej Karpathy famously wrote the Software 2.0 blog post, saying that instead of writing programs the way we do today, requirement by requirement, function by function, there is going to be a new world where, by training models on lots of data, we come up with our new programs. Today, when you do computer vision, you do not write a new program for it; you train your model so that it recognizes the objects you care about. Can you generalize this to many more things, beyond learning, beyond vision? In order to do that, in order to generalize some of these techniques that have come out of the world of deep learning, what we really need is a programming language approach, because even if models are going to be code, even if we are going to train our programs using data, you eventually need a broad platform, a broad set of abstractions, in which you program in this new way.
Chris Lattner, who is now a senior director at Google working on TensorFlow and TPUs, has written an enormous amount on this topic. One of the interesting things they did at Google was to write down an evaluation of 12 languages: if machine learning needed a new language, what would it be? The answer is definitely not Python. If you're using Python, you're not really doing your machine learning in Python; you're actually calling a bunch of C libraries under the hood that do the learning. Crossing those language boundaries means you lose a lot of information. You can't open up the black box; you can't see what's happening under the hood. If you're using TensorFlow, you're not really using Python: you're using the TensorFlow language, programming the TensorFlow syntax tree through Python, just like you could do through Julia or R. So as these applications have come together, it has become increasingly clear that you need a language approach to machine learning so that you can do all these new things in interesting and amazing ways.

Chris Lattner and his team evaluated those 12 possible languages that could be the future for machine learning, and they ruled out a lot of them; I'd encourage you to find that blog post. Eventually they ended up with two languages, Julia and Swift. Being the creator of Swift himself, he was naturally oriented towards Swift as the language for machine learning. But we obviously think Julia is the right system for the software 2.0 world, for being the language of machine learning. The reasons are simple. Like I said, you need only a few things: you need all your mathematics, your foundations, your libraries and packages; you need a system that can be automatically differentiated; and you need the ability to run on GPUs and TPUs. Julia ticks all three boxes today. We presented some of our work at NeurIPS last year, and we're going to present more this year. We showed Julia running on TPUs, Julia has a differentiable programming system, and all of this comes together in the Flux deep learning library. It's a full-stack approach: not something that glues together existing bits of functionality, but a complete system designed from the ground up to give you the best abstractions for machine learning, for differentiable programming.

Since this is a data science conference, I'm not going to talk too much about what automatic differentiation is; I'll just show this one quick slide. What does automatic differentiation do at the end of the day? It applies the same rules of differentiation you learned in your 11th or 12th standard, the chain rule to be precise. Here is a simple function foo on the left-hand side, and on the right-hand side is the gradient of foo, which still evaluates the same function, but where a source-to-source transformation has analyzed each of these lines, so you get the value of foo and you also get its gradient. So this is a source-to-source transformation system that gives you the gradient of any function. Remember, until today, if you were doing deep learning with TensorFlow or PyTorch or any of those systems, you would not be doing something like this. You would have a neural network on the left.
And on the right, you'd have the neural network and the gradients that come out of backpropagation. This is a generalization: here I can have any function f, and on the right, through transformation by this tool called Zygote, you get f and f prime. I think intuitively all of us know that if you can calculate a gradient or a derivative, you can optimize. You can write a loss function, compare what your input data and your labels say with what your predictions are, and get your loss; with your derivatives you can minimize the loss and come up with an improved model. It's the same recipe, but with these tools you can apply it not just to neural networks but to just about any Julia program. The key is that the program is written in Julia, because the Julia compiler can only analyze Julia programs. What happens when you go beyond, when you cross language boundaries? It gets really hard. That's why it's important to have a system built with the right abstractions all the way down, so that you can apply tools like automatic differentiation and do these source-to-source transformations to compute derivatives.

We published some of this work in a recent paper on arXiv called "∂P: A Differentiable Programming System to Bridge Machine Learning and Scientific Computing". I'll leave it to you to go and look at the paper; it's very easy to find if you search for it. I don't want to discuss the whole paper here; I think all of you are well qualified to read it for yourselves, just as I'm not going to spend too much time teaching you Julia or any of the code or applications, because you're all going to go back and figure that out for yourselves. What I hope to achieve here is to convince you that you should. In this paper we describe the system I showed you on the earlier slide, the Zygote system, and we apply it to five different applications, only one of which is a neural network, the ResNet computer vision model. Otherwise, we apply it to a quantum computing problem, a probabilistic programming problem, a simple self-driving car demonstration, and a differentiable renderer. So we're showing just a few of the possible applications to start with, but also what the world might look like once you go beyond neural networks, and the kinds of applications you can build.

Remember that neural networks do not understand very much about the problem you're solving; the same basic structure is applied across a broad range of problems. But what if you could bring some of your domain expertise into the problem itself? Intuitively, you would need less data. If I can bring my domain expertise, something I know about the problem, into my machine learning loop, I will be able to train with much less data. I'm not going to look at parts of the search space that are irrelevant, because I have that deeper structure to guide me. A good example is physics. Physics describes what's going on; there are laws. If I drop a ball here, it's going to fall to the ground; it's not going to go up to the ceiling.
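To make the Zygote idea concrete, here is a minimal sketch of what asking for the gradient of an ordinary Julia function looks like; the function and the numbers are purely illustrative:

    using Zygote

    f(x) = 3x^2 + 2x + 1           # any ordinary Julia function
    df(x) = gradient(f, x)[1]      # Zygote generates the derivative code for us

    df(2.0)                        # returns 14.0, since f'(x) = 6x + 2

The same gradient call works on much larger programs, which is the point of doing the transformation at the source level.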
Coming back to the falling ball: if I had a plain neural network, it would try sending the ball to the ceiling and off into space and all kinds of places, and eventually it would figure out that it goes to the ground. But if I encode the laws of physics into my training loop, I will probably only explore a meaningful set of solutions. It's a very simple example, but I hope it motivates why we need to do some of this.

All right, so we've talked a little about neural networks, automatic differentiation, and the need to differentiate general-purpose programs. The second part is running on hardware. The only way you get the performance is if you can run on GPUs, and Julia already runs natively on GPUs. What's shown here is a Julia program writing a CUDA-style kernel, and on the right-hand side is the assembly that gets generated from it. These benchmarks show that Julia on CUDA is as fast as C on CUDA, and in some cases even faster. So as a Julia programmer, you don't have to be a low-level C programmer who knows everything about the GPU you're running on in order to get performance. In Julia you can go all the way down and tweak every nut and bolt inside the system, or you can stay at a higher level of abstraction and work at the level of the mathematics.

Julia also runs on Google's TPUs, which are some of the most purpose-built chips out there for AI computation. What's shown in the picture here is a TPU pod, and we were able to run ResNet on 512 TPU cores at impressive speed. The flop rate there is in petaflops; everyone knows what a peta is, right? Ten to the power of fifteen, something like a hundred thousand to a million times faster than my laptop here. But the 16 in that figure is important to note: these are not double precision or single precision floating point values, they are half precision. If I have some time left in my talk, I may show you a little demo of the fun things you can do with precision that you may not have seen before.

Jeff Dean looked at this and said Julia and TPUs could easily be the future of machine learning. Everyone here knows who Jeff Dean is; I don't think I need to say more. I really think that now that we have the right hardware, what we need is the right programming language and software abstractions to take the possibilities offered by machine learning and software 2.0 to their logical conclusion.

Putting all of this together, we created a deep learning framework in Julia called Flux; it's the last item on this slide. If you look at TensorFlow, it's written mostly in C++ with a very little bit of Python. In fact, like I said earlier, when you're programming TensorFlow you're really writing a new language that you call from Python, not Python itself. PyTorch is a lot more Pythonic. But all the deep learning systems that have been popular to date have a huge component of non-native code, which makes it very difficult for you as a data scientist to open the hood, customize it, and understand what's going on; you simply can't figure it out. The Flux library is 300 or 400 lines of Julia code at its core.
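To give a flavor of how little code is involved on the user side as well, here is a minimal sketch of defining and training a tiny Flux model; the layer sizes and the data are made up for illustration, and the calls assume Flux's standard training API:

    using Flux

    model = Chain(Dense(4, 8, relu), Dense(8, 1))    # a tiny two-layer network
    x = rand(Float32, 4, 100)                        # 100 made-up input samples
    y = rand(Float32, 1, 100)                        # 100 made-up targets

    loss(x, y) = Flux.mse(model(x), y)               # mean squared error
    opt = Descent(0.01)                              # plain gradient descent

    Flux.train!(loss, Flux.params(model), [(x, y)], opt)   # one training pass

Because the layers, the loss, and the training loop are all plain Julia, any of them can be opened up and replaced with your own code.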
That's really all there is to it. You can open it up; you can change it. We have students from universities around India who come in and contribute to Flux: new models, performance improvements, bug fixes, without needing any significant training or prior familiarity with the code base. I think that's the true test of the flexibility that Flux offers.

Now, all this is great, right? You may say, OK, having heard this Viral guy, I believe that maybe we do need a programming language for machine learning, maybe we do need these differentiation capabilities, maybe it's nice that it runs on the hardware. But I have a real problem to solve today; how am I going to do it? At the end of the day, it's all about the ecosystem, the environment, the size of the community. And Julia has a fantastic community and a fantastic set of packages. These are just some of the packages, the ones with nicely designed logos, so they look good on a slide. We have some of the best differential equations packages, deep learning packages, operations research, graph processing, signal processing, data science, biology, image processing, and this is just a small subset. Our goal is to make all these packages fully differentiable, so that you can combine your domain-specific knowledge in any of these fields with neural networks, with AI techniques, and do differentiable programming on just about any problem you care to solve. It's our goal as programming language designers not just to build the platform, not just to build the language, but to build a complete ecosystem that works for you, so that you can focus on what you need to do.

Let me give a few examples. I'm going to breeze through some of these slides so that I can get to some of the demos. Here's an example of a neural network where one of the layers is not a typical fully connected layer or a convolution; it's a differential equation. And it's solving a very simple problem. Its technical name is the Lotka-Volterra equations, but what it's actually describing is predator and prey, foxes and rabbits. If I imagine this room to be a grid, with every grid cell holding a fox or a rabbit, you can write simple rules: if a fox is next to a rabbit, it eats it; if two rabbits are next to each other, they reproduce. Then you let the dynamics of the system evolve. You can now think of training a neural network to produce the same dynamics instead of writing down the science. But in order to train it, this is what you end up doing: the ODE is part of the neural network, the equations have been encoded inside it, so that when it trains, it trains really quickly. You saw the red line starting from a random starting point and then training to match the observed data. If you didn't have the ODE in the loop here, this would have been significantly slower and taken much longer.

This next one is a story about composability that I really love telling, because it's something you won't see in any other language. It's basically the motion of a pendulum. All that's happening here is we start out with some constants, like the gravitational constant g and the length of the pendulum, and then simulate the motion of the pendulum.
But unlike a typical simulation, where you only simulate with the known or observed values, this simulation uses the observed values along with the uncertainty in those observations. Maybe I don't know my value of g perfectly: it's 9.79, but plus or minus 0.02. And maybe when I measured the length of the pendulum it was off by one percent, because I didn't have a perfect measuring tool. The amazing thing is that Julia has two separate libraries here. One is DifferentialEquations.jl, which I showed you before, which does simulations like the Lotka-Volterra equations. The other is a package called Measurements.jl, which lets you express these uncertainties in calculations. Now, the authors of these two packages had never spoken with each other, but Julia, being a well-designed language with the right abstractions, is able to compose the two together, without the authors ever having intended that when they designed their packages. So you can take these values with uncertainties, push them through the differential equations package, and get a plot that shows the uncertainty in the answer. You can see the error bars on the plot. They came out automatically, not because someone programmed the system with error bars and calculated the errors manually, but because the inputs themselves carried the uncertainties, and those were automatically propagated through a calculation that was never originally designed for that purpose. It composed well, and it came out automatically. A lot of these ideas have amazing applications in finance, and that's an exciting new area of work.

We applied this kind of thing to a very large astrophysics calculation. Celeste is a computation that ran on half a million cores over 60 terabytes of data, analyzing an entire scan of the sky to catalog every observable star and galaxy. We needed a custom machine learning algorithm to pull this off. The reason is that while that looks like a galaxy right there, and that might look like a star, what about this little red dot in the middle of that oval? Is it a star that's really far away, or is it an artifact because the camera is operating at its detection limit? Because of that, we had a specific machine learning algorithm that very carefully decided how to label these objects, which ones were noise and which ones were actual observations.

Everything I have here, the slides, will be shared. I know it's a lot for one talk, but my goal is to show you that we started from very humble origins, with something very small and simple, which was the Julia language. Over time we built these abstractions in the form of libraries, packages, compiler tools like automatic differentiation, machine learning packages, and then went all the way up to solving problems that are not run-of-the-mill problems, but incredibly difficult ones that require all of this machinery. A lot of custom matrix structures and storage types were used there, but I'm not going to go into that right now.

This is a particular problem that I love talking about. You might recollect that I spoke about the JuMP system for linear and mathematical programming early on in my talk, and you might be wondering what the application is.
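Before getting to that, here is a minimal sketch of the pendulum-with-uncertainty composition just described, assuming DifferentialEquations.jl and Measurements.jl; the numbers and the small-angle form of the equation are illustrative:

    using DifferentialEquations, Measurements

    g = 9.79 ± 0.02                    # gravitational acceleration, with uncertainty
    L = 1.00 ± 0.01                    # pendulum length, off by about 1%
    u0 = [0 ± 0, π / 60 ± 0.01]        # initial angular velocity and angle
    tspan = (0.0, 10.0)

    function pendulum!(du, u, p, t)
        du[1] = -(g / L) * u[2]        # d(omega)/dt = -(g/L) * theta  (small angles)
        du[2] = u[1]                   # d(theta)/dt = omega
    end

    prob = ODEProblem(pendulum!, u0, tspan)
    sol = solve(prob, Tsit5())         # solution values carry the propagated uncertainty

The solver never mentions uncertainties and Measurements.jl never mentions differential equations; the error bars in the plot come from the solution values being Measurement numbers.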
JuMP was the team that got the prize, if you remember my prizes slide. Let me motivate this application. The Boston public school system put out a challenge. In the US, every school system is required to provide transportation to bring kids from home to school and back, and there are lots of constraints in the law: you cannot have kids on the bus for more than an hour, and you have to pick them up reasonably close to home. It can be a local bus stop for the school bus, but if the child has a physical disability, the child has to be picked up from the doorstep. This entire system was costing them a lot. The public school system said, we've heard of this new data science thing, and how throwing data and computers at problems will somehow optimize everything and save us a lot of money. And instead of trying to solve it themselves, they did a smart thing: they put it out as a challenge. A team at MIT used the JuMP software to solve this problem, and it resulted in savings of five million dollars for the school system. Just change the bus routes and you save five million dollars; that's a good deal, right? It was featured in the Wall Street Journal when it came out, and it's been an amazing case study of society, data, and compute all coming together to solve important and interesting problems. Note that you couldn't just throw a neural network at this problem and have it somehow find an optimal solution. This is a logistics problem: you know where the schools are, where the kids live, where the buses have to go, what the map looks like, so you are able to frame it as an optimization problem and solve it. Which, by the way, is what Uber and every logistics company is solving every second of every day. This problem turns out to be a lot more complex than ordering an Uber, by the way, so this was a very sophisticated solution. And we need more of these things. Maybe someone needs to apply this to our Bangalore traffic problem, right? We have a long way to go, and tools like this, I think, will help us solve some of these hard problems. Personally, the reason I built Julia was exactly to solve problems like this. You can always solve any problem in any language, so why do you need a new language? Only to raise the level of abstraction and the performance so you can solve some of the most challenging problems, and in order to do that, you have to build everything that goes along with it.

Let me talk about a few more examples. Everyone is aware of climate change and everything that's going on; the summers just keep getting hotter every year, and I don't think I need to convince you that climate change is happening. What's happening in the world of climate modeling is that the old models have to give way to new ones in order for us to model the climate accurately and with good precision. A team from MIT and Caltech called the Climate Modeling Alliance, or CliMA for short, is using Julia and GPUs to run climate models at unprecedented scale. They're building completely new models of the ocean, the atmosphere, the land, and the sun to model heating and cloud formation.
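Since JuMP came up, here is a minimal sketch of what framing a toy bus-assignment problem as an optimization model looks like; the numbers, the variable names, and the GLPK solver are all illustrative assumptions, not the actual Boston model:

    using JuMP, GLPK

    model = Model(GLPK.Optimizer)
    @variable(model, big_buses >= 0, Int)                          # large buses to run
    @variable(model, small_buses >= 0, Int)                        # small buses to run
    @constraint(model, 50 * big_buses + 20 * small_buses >= 300)   # seat every student
    @objective(model, Min, 600 * big_buses + 300 * small_buses)    # minimize daily cost
    optimize!(model)
    value(big_buses), value(small_buses)

The real routing problem brings in the map, the time limits, and the pickup rules as many more variables and constraints, but the overall shape, variables plus constraints plus an objective, is the same.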
It turns out that cloud formation is an interesting part of the climate story, because if you have more clouds in the atmosphere, you reflect more sunlight and get less heating. So to get this right, you have to have an accurate model of the atmosphere, the land, the oceans, and the heat coming down, and you have to do it for every little cell in your model of the earth. This is an amazing problem which is being solved entirely in Julia. The code is on GitHub, by the way, so you can try it out. There are similar applications in energy production and decarbonization of the grid; again, these are just fantastic new Julia applications.

In banking and finance, Julia is being used to solve a number of different problems. Aviva, for example, uses a Julia system in production for all of its risk management. Every time you buy a policy, there is some risk associated with it: there is a probabilistic payout, and the insurance firm has to have enough assets to make that happen, otherwise policies can't be paid out. So the regulators require that an insurer, aggregating across all of its policies, holds enough capital to meet any claims that come up. That's the risk management system, and Aviva's Solvency II compliance system is actually a Julia system. It replaced an IBM Algorithmics system that cost millions of dollars and ran on a million dollars of hardware; the replacement is a Julia system running on five AWS nodes, written by someone who had never even written Julia before. Similarly, BlackRock uses Julia for a lot of its time series analytics, and the New York Federal Reserve uses Julia for many of its models of the economy. Note that these are not AI applications, not machine learning applications; they are deep domain applications, and all of them are now beginning to integrate machine learning and differentiable programming in new ways as we go forward.

Another application domain we are working on is a partnership in personalized medicine. A very specific problem in personalized medicine is precision dosing. When I get an infection and am checked into a hospital, the question is what dose of a drug to give me to fight the infection. If I'm given too small a dose, the bacteria might mutate and I might be in trouble; if it's a very large dose, it might affect my organs and lead to all kinds of other complications. So it turns out that giving the right dose is essential and important. Yet today, the way the pharma industry, the drug industry, and the hospitals work, there is essentially a single dose: you might be given one dose as an adult male or an adult female, and there are different doses for children, but that's about it. I think intuitively all of us know that you need the right dose for the right person, personalized to your body and your condition. This is something we're doing with the University of Maryland, Baltimore. They have a school of pharmacy there, and the professors, Joga Gobburu and Vijay Ivaturi, are working with us to develop precision dosing algorithms that combine the systems of ordinary differential equations I talked about before with machine learning, so that you can come up with accurate doses at the patient's bedside and get better medical outcomes. All right, so I will...
Can I have a time check from someone who's keeping time? Nine more minutes, okay. All right, so I think I might not be able to get to my demos, unfortunately, but I'm going to skip through a few slides. A lot of what makes all this possible is composability; you've heard me say that word a number of times. Why is composability important? Because it allows people who are experts in their own topics to communicate with other experts without directly talking to each other: they communicate through the platform, through Julia. For example, someone in Julia wrote all these different kinds of numbers. Julia does not just have integers and floating point values, but complex numbers, quaternions, rational numbers, fixed-point numbers, non-standard precisions, intervals (numbers that represent a range rather than a single point), units, symbolic numbers, dual numbers, and polynomials; all of these are number types in Julia. There is no way any one of us knows all of this stuff, but you're probably using many of these things under the hood in the packages and libraries you work with. Some experts wrote these. Other experts came and wrote the array and tensor processing libraries. Someone else who knew numerical linear algebra wrote those routines. Different people wrote the differential equations, convex optimization, parallel computing, and machine learning libraries. The only way you get scale is through composability: if I write something that you can leverage and build on top of, or even better, if we both write something and a third person can combine them to get something greater than the sum of the parts, that's how you get scalability and the solution of really challenging problems.

We believe Julia has been reasonably successful at solving composability for libraries. Remember the differential equations library and the measurements library, and how the error bars propagated through everything: we feel we've solved composability for libraries, but composability for machine learning is still unsolved, and while this is a good starting point, we have a long way to go. Machine learning puts a great amount of stress on compilers and languages, much more than anything before. These are all the kinds of things you want to do, but the optimizations are at a level not seen in traditional programming languages, and the hardware you need to run on is simply nothing like what we've seen before. Until five or maybe ten years ago, you could buy a new processor every two years and your single-threaded, single-core program would just run at twice the speed; Moore's law was alive and saving you. That's not the case anymore. The only way you get more performance today is with specialized hardware, and running on specialized hardware means a lot more stress on the compiler writer so that people can actually get their jobs done. These are the problems that we in the Julia community, and we at Julia Computing, are solving today. I had a fun demo to show of some new floating point precision types that are used in machine learning and on TPUs, but I think I have to keep moving forward.
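To give a tiny flavor of how those number types compose with generic code, here is a minimal sketch; Measurements.jl and Unitful.jl are the assumed packages, and the values are illustrative:

    using Measurements, Unitful

    avg(xs) = sum(xs) / length(xs)     # generic code: no number type is mentioned

    avg([1//2, 3//4, 5//6])            # exact rational arithmetic
    avg([1.0 ± 0.1, 2.0 ± 0.2])        # uncertainties propagate automatically
    avg([1.2u"m", 3.4u"m"])            # physical units are carried through

The same one line of generic code serves every one of those number types, which is the kind of composability the rest of the ecosystem builds on.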
All right, so given how few hands went up in the audience, you might wonder who's really using Julia; maybe no one's using this thing. So here is some data. This is a plot of GitHub stars of the Julia language over time. They come from all across the world, but I'd like to point out that a lot of Julia contributions come right here from Bangalore, and we have a thriving community of contributors in Bangalore itself. That's why Julia Computing, the firm I founded to professionalize Julia, has its largest office right here in Bangalore. That inflection point right there is Julia 1.0: Julia was seeing steady growth all these years, ever since it came out, and then it just shot up when we announced Julia 1.0. We actually had a party at JuliaCon about a year ago, where we merged the pull request that bumped up the version number with everyone in the audience, and that was fantastic.

There are lots of books if you're interested in learning more about the topics I talked about. These are Julia-specific books, and I would highly recommend the Think Julia book if you're a beginner starting to learn some of these things. That book is available in an Indian edition, so you can buy it locally at Indian prices. Julia is now being taught in a number of universities; these are just some of the universities teaching Julia, and this is a fraction of the actual number out there. Increasingly, many IITs, and IIIT Delhi, are now teaching Julia in India. I should also point out that we participate in Google Summer of Code, and the Julia Seasons of Contributions runs simultaneously over the summer; the largest number of students joining GSoC in the Julia community are actually from Indian universities. This was JuliaCon in Bangalore just a couple of weeks ago, and the conference keeps growing in size; this time it was big enough that we had to get a drone pilot to take a picture from above, because we couldn't otherwise get a picture of everyone there.

So I'm going to stop right here with my final slide. Everything I spoke about is what we do at Julia Computing. We contribute to the open source ecosystem, we push on the innovation, and we build products like JuliaPro and JuliaRun, along with training, to make it possible for companies around the world, for universities, and for individuals to learn Julia, to be comfortable with it, and to bring it into your companies and your production problems. We are there to hold everyone's hand should you need it, but please download Julia, try it out, and join us on our Discourse or the Julia Slack or one of the various forums online. That's my last slide. Thank you.

Thank you, Viral. That was an exciting talk. One question I do have is related to the TPU and GPU part: did you find a major difference when you ran on the TPU as compared to the GPU?

Oh, there is a huge difference. While TPUs are extremely widely used within Google, outside of Google they have not found much use except in a couple of places.
It didn't take us a lot of effort to target TPUs from Julia, but in order to actually use TPUs for a real problem, there is a huge amount of tooling and compiler work that Google needs to do to make it easy for compiler writers like us and users such as yourselves. So performance-wise it has the theoretical possibility of giving you everything, and Google uses it internally, but the gap between their internal use and what's possible externally is tremendous. NVIDIA, on the other hand, doesn't use its own GPUs for its own business; they're meant to be sold and used by everyone else, and there's been a large ecosystem of users over the last 10 to 20 years, so it's a fairly mature platform now. Not as mature as CPUs, and it can be quite rough, but by and large we have a robust compiler that does what you want to do, and it's much easier to work with.

Hi. You talked about Julia plus TPUs and GPUs. How good is Julia for quickly validating an idea? Python is very good at validating ideas quickly; how good is Julia at that?

Sorry, Python is very good at what?

Validating my idea in a short program or something.

Yes, Python is very good at validating your idea, and so is Julia. Julia's syntax is roughly very much like Python's, and the typical size of a program you write in Julia would not be very different from what you write in Python. The language is interactive: you start out at a terminal, get the prompt, and you can work interactively, plot your data, do all that stuff, but at great speed.

I guess we can take one last question. Sorry, where is this?

Hello, thanks for the talk. My name is Ranga. Your talk started with the interesting question of whether Python is the best language for machine learning, and you showed the quote by Yann LeCun and all that. So I started out thinking that Julia grew out of a need to support better models in AI and machine learning. But later on you showed slides about finance problems, and even some of the books suggest that Julia is very good for scientific computation, high performance computing. So my question is, how did it happen? Was Julia developed out of a need to solve scientific computation, and now that AI is doing a great job you're applying it there, or is it the other way around: you looked for a better solution for AI and then realized you could use it for all these other problems, which you perhaps hadn't thought of earlier?

That's a very good question: what came first? I think my final slide here is actually the answer. Julia started out as a general purpose language to do just about anything; you could write a website in Julia, or a web server, or a distributed system, or a Bitcoin miner, or whatever it is you want. But what the abstractions in Julia were designed for is all forms of numerical computing, and that was really the fundamental basis we started from. As the years went by, we started seeing application after application, and as machine learning became popular, for us it was just another form of numerical computing where the abstractions we had built just naturally applied, except that, as I showed on one of my later slides, machine learning puts an enormous amount of stress on the language due to the nature of the challenges it brings.
And so we have put a disproportionate amount of time and energy into making machine learning better in Julia. But what we really hope is not for machine learning and scientific computing to remain two separate worlds, as they have been, but for them to come together, so that you can bring the intuition of science and the applicability of data and machine learning together. I think this quote by Guy Steele captures it very well; I believe it was made in the context of Java, but I wouldn't be 100 percent sure about that. He said that the main goal in designing a language should be to plan for growth: the language must start small and grow as its set of users grows. With machine learning, everything you heard me say has been exactly that kind of growth of the language itself.

I don't think we'll be able to take more questions, but I'll be around. So thank you so much. Thank you.