I am going to be talking about Haskell in production, so just before we get started: how many of you have actually used Haskell before? How many of you have used it at work, or have put something into production with it? That is great, one, okay, awesome. So that is you looking at me and Vamshi with skepticism. I am Tanmai, that is Vamshi, and he will be doing part of the talk as well. The goal of this talk is to give you a brief overview of what the Haskell ecosystem is like, and to convince you to give Haskell a shot for work. The idea is not to sell Haskell to you as a language, because that is something you can do on your own time, but if you are worried about certain kinds of production issues, maybe we can address those a little so that you can give it a shot. Our journey started about a year and a half ago, and what attracted us to Haskell was that a lot of big claims were made by the Haskell community in general, and everybody who used Haskell talked a lot about it. In hindsight that is probably one of the dumbest reasons to choose a language and go into production with it, but hey, at the time we were young and foolish, and we decided to go with it. To give you a little background about what we did with Haskell over the last year and a half, partly to put the kind of stuff we are talking about in context, and partly for a bit of shameless plugging: we are Hasura, and the idea is that we are building a new kind of backend platform as a service, a backend as a service that doesn't have lock-in, but gives you ready-to-use components, and these components are microservices: things like the database and search components that you need for building an application. We used Haskell for solving a bunch of core problems.
The first core problem is exposing a nice interface over Postgres, but over JSON. The idea is to give you a JSON query language that you can use from your client to execute SQL against the database, directly from the client. That means you need to compile these kinds of queries: you need to understand select, insert and update queries, you need to understand permissions (otherwise it is obviously not safe to contact the database directly), you need to understand relationships, et cetera. Another part of what we implemented was a programmable gateway, very similar to nginx but more programmable, very similar to an API gateway; maybe you have heard of those. So before we delve deeper into Haskell, just a quick idea of what Haskell looks like. First, Haskell is obviously functional. I am preaching to the choir here, but that is a small function that uppercases every character in a string: I map the toUpper function over a string. Haskell is also typed. This is the way of specifying a type signature: I say sum is a function that takes as input a list of integers and outputs a single integer. Haskell, or rather GHC, also does a lot of type inference for you. Type inference means that you can declare a variable equal to "hello", which is a string, without writing String a = "hello", which is what you would do in typical typed languages. Haskell is also lazy, which means values are computed only on demand; things are evaluated only on demand. This is a very typical example of lazy evaluation: this is an infinite list, and I am saying take five elements from the infinite list. If this was not lazily evaluated, then to execute the function take, the runtime would first try to evaluate the infinite list.
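Collected into one small sketch, those basics might look like this (the names `shout`, `sumInts`, `greeting` and `firstFive` are ours):

```haskell
import Data.Char (toUpper)

-- Functional: uppercase every character by mapping a function
-- over the string (a String is a list of Chars).
shout :: String -> String
shout = map toUpper

-- Typed: a signature saying we go from a list of Ints to an Int.
sumInts :: [Int] -> Int
sumInts = sum

-- Type inference: no annotation needed, GHC infers String.
greeting = "hello"

-- Laziness: [1..] is an infinite list, but `take 5` only forces
-- the first five elements, so this terminates.
firstFive :: [Int]
firstFive = take 5 [1..]
```

In GHCi, `firstFive` evaluates to `[1,2,3,4,5]` even though the list it draws from is infinite.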
And it would fail, because it is an infinite list. No side effects means no global variables, and when you are writing code you need to separate out the impure code; we will go into a little more detail on that later. The idea is that, for example, if you need to append an element a to a list of a's, the output you get is another list of a's. You cannot get the same list back mutated, or pass it two arguments and expect that some global variable holding the list is modified. Okay, cool. Now, when we say "in production", what do we mean by that? The typical things we required in production were: code should be robust, no matter what framework or language I use, I want to be able to write robust code; I want to be able to iterate quickly and maintain the code base easily; I want the code to have a certain level of performance, whatever I require; a certain amount of tooling around the entire system; and deployment should be easy. That is what I need. The idea is to explore how the Haskell ecosystem stacks up against these loose metrics. So, robustness and safety, which we will talk about first. Yeah, this is a typical joke that comes up wherever Haskell is mentioned. So what is robustness when it comes to software? Isn't it a vague term to throw around when you are discussing software? I would rather take an example like GCC. When you compile your C code with GCC, you don't expect GCC to crash, you don't expect GCC to segfault, and you don't expect GCC to generate code that could segfault because there is a bug in GCC's code generation. And would you consider Firefox robust? For example, if you are using some Flash plugin, why is it crashing my browser? So you see that even for a major organization like Mozilla, robust software is a hard deal.
It is a hard thing to achieve. I don't have solid metrics for GCC, but if you look at SQLite, it has about 94.2 thousand lines of code and about a thousand times more test code. How do you achieve this kind of robustness, a thousand times more test code than the actual code itself? That requires an enormous amount of engineering resources, and essentially it is time consuming and costly. So if you look at robustness in a different way, what are the properties that contribute to robust software? The code being correct, errors being handled well, and a lot of testing having been done. Okay, so when it comes to correctness of any code, it is hard to reason about, whether it is an imperative language or a functional language; correctness is a property of the code rather than of the language, and it is hard to reason about. But if the language could provide me features where I can efficiently translate my ideas for solving a problem into actual code, that would make my life easier. Take a simple example: if I want to sum the elements of a list, in Haskell I would use something called a fold. I am essentially folding the entire list, adding one element each time, starting with an initial state of zero. That is my sum. If you are doing that in an imperative language, you iterate through the array and add each element to a sum variable. But where is this iterator coming from? I just want to go over the list of values and sum them. There are too many moving parts, and mutability. This is another simple example: you have a list of strings, and you want to filter out some elements, the condition being that if a string is empty, you remove it from the list. In Haskell you would say: filter, and when it's not empty, keep that element.
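In code, those two examples might look like this (the function names are ours):

```haskell
-- Sum by folding: start from 0 and add each element in turn.
-- No loop counter, no mutable accumulator variable.
sumList :: [Int] -> Int
sumList = foldl (+) 0

-- Keep only the non-empty strings; no index, no in-place mutation.
removeEmpty :: [String] -> [String]
removeEmpty = filter (not . null)
```

In GHCi, `removeEmpty ["a", "", "b"]` gives `["a","b"]`, and the original list is untouched.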
But when it comes to, for example, this piece of Java code: what you are saying is, for every object in the list, if some condition holds, remove that element from the list. But this is wrong, because you are modifying the list in place while you are iterating over it, and it is usually hard to spot these errors. The right approach is to use something more complicated: you need to understand what iterators are. The corresponding question on Stack Overflow has more than 500 votes, so you can see that even for simple operations, sometimes there are too many things you need to worry about. And isolation. There is always a part of your program that interfaces with the external world, and a part that takes input from that external source, converts it into something, which is then, again, given back to the outside world. So if we look at the system, we can separate the code into something which is pure and something which is impure. Impure code is the code dealing with the external world. Pure code is deterministic: no matter how many times you call the function with the same arguments, it returns the same result. That is pure versus impure code. Tanmai will talk more about this, but that is the general idea. When it comes to error handling, having a static type system in any language will avoid several classes of errors, for example calling a function with the wrong number of arguments, or passing a value of the wrong type to a function. All these things are easily handled by the type system. The other error handling mechanism many languages provide is the typical throw and catch: if there are exceptions, you throw them, catch them at the right location, and propagate them. But Haskell is unique in this regard.
It lets you use its type system to define your own application errors. I will take you through examples of this. I am defining a new type called Maybe: Maybe something is either a value or it is nothing. Given this, my lookup function takes a key and a hash map, and gives me a value. Typically, what is the return type of such a function in Java? It is the value itself, not Maybe value. What the type system is capturing here is that you may not find the key in the hash map; there is a chance of failure, and that is essentially captured in the type. If the lookup function cannot find the key in the hash map, it gives you Nothing; if it can find the key, it gives you Just the value. So the type system is enforcing some constraints: if I looked up a key in a hash map and I try to use the result as a normal value, the type system will not allow me to do that. It will say this is a Maybe v and not a v, so you need to handle that. Yeah. And if you want to give out more information when an error happens, Haskell defines a simple type called Either a b. What it essentially says is that the value being returned can be either Left or Right, and the convention is that Left is usually used for errors and Right is the actual value. So where would this type make sense? Let's say you have a JSON string or a JSON file and you are trying to parse it. What this type signature says is that there could be a failure, and I am returning that failure to you as a String; if there is no failure, I will give you a JSON value. So essentially you are capturing different kinds of errors. Okay. So let's consider a simple use case.
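Those two examples, sketched in code (Data.Map stands in for the hash map, and the integer parser is a toy stand-in for a real JSON parser):

```haskell
import qualified Data.Map as M

-- Maybe is defined in the Prelude as:
--   data Maybe a = Nothing | Just a
-- Lookup captures the chance of failure right in its type.
lookupUser :: String -> M.Map String Int -> Maybe Int
lookupUser = M.lookup

-- Either: Left is conventionally the error, Right the value.
-- A toy stand-in for a JSON parser: parse a decimal integer,
-- returning the failure as a String.
parseIntish :: String -> Either String Int
parseIntish s = case reads s of
  [(n, "")] -> Right n
  _         -> Left ("not an integer: " ++ s)
```

A caller cannot use the result of `lookupUser` as a plain `Int`; the compiler forces a case on `Just`/`Nothing` first.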
Let's say I am writing a library to interface with Postgres. I am trying to give you a function which takes a SQL statement, which is essentially a string here, and either gives you the result of that SQL statement or gives you an error. But I am also giving you one more feature. What I am telling you is that when you use this function, execSql, it runs the statement in a transaction, and if the transaction fails, or is not committed because of a conflict, I will rerun it. That is a higher level abstraction than simply executing a SQL statement, and that is the function I would like to give to you. And this is what I have: a low level function exposed by Postgres's libpq library. It says that it can take a string, which is a SQL statement, and it will give me something called a DbResult. What a DbResult essentially is: it could be an error, it could be the actual value of that SQL statement, or it could be a conflict. So given this function and this result type, can I provide something like the higher level abstraction over it? This is the function I am implementing. In execSql, in case of a result, I say okay, it is a valid result, so I give you Right a. In case of an error, oh, it is an error, so I wrap it in a Left value. In case of a conflict, what I do is try to execute the statement again, retrying until it is executed. So what have we achieved by this? We have taken something low level, which exposes too many errors that are not really comprehensible to the end user. You, as a library user, would just like to give it a SQL statement and get the result, and if it fails because of a conflict, you would like the statement to be retried. That is the functionality I am trying to provide.
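A sketch of that retrying execSql (this is our simplified model, not the real libpq binding; DbResult and runStatement are made-up stand-ins, and the fake low-level call conflicts once before succeeding just to exercise the retry):

```haskell
-- A made-up model of what a low-level binding might return.
data DbResult = DbError String | DbValue [String] | DbConflict

-- Pretend low-level call; a real binding would talk to Postgres.
-- Simulated here: attempt 0 hits a conflict, later attempts succeed.
runStatement :: Int -> String -> DbResult
runStatement attempt stmt
  | attempt == 0 = DbConflict
  | otherwise    = DbValue ["ok: " ++ stmt]

-- The higher-level function: mask conflicts by retrying, and
-- surface only errors and results to the caller.
execSql :: String -> Either String [String]
execSql = go 0
  where
    go attempt stmt = case runStatement attempt stmt of
      DbValue rows -> Right rows
      DbError e    -> Left e
      DbConflict   -> go (attempt + 1) stmt
```

The caller only ever sees `Left` (a SQL error) or `Right` (rows); the conflict case never escapes this layer.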
And I have achieved that by masking the errors thrown by the low level libpq library. So you might wonder: why can't I achieve the same thing in any other language? You probably can, but it is usually a bit more error prone. In a language like Java, the low level libpq function would throw exceptions. What if you forget to catch an exception? That exception will propagate up through execSql, and the user is not expecting that exception to be thrown: the user is never expecting a transaction conflict, only an error in their SQL statement. So what we have essentially achieved is that we have localized the errors. When some errors make sense at a level, they are captured at that level, and we have made sure errors are propagated in the right way. And when it comes to testing code, you have regular unit tests, which every language provides, and then there is property based testing. How many of you have heard about property based testing? Okay, let's take a simple example. This is a function which sorts; let's not bother about the implementation. Now I want to think about certain properties this function holds. The first property is that when I sort an already sorted list, it should be the same; this is an idempotence property, and I am saying this is the property I want verified. QuickCheck is a library that can take properties like this, generate random test cases, and check whether the property holds on them. But QuickCheck is quite smart: it essentially tries to find a test case which falsifies the property, which says the property does not hold. And what it can also do is shrink: if a list of, say, 10,000 elements produced a failure, it can compress that to a list of maybe 10 elements.
It tries to give you a minimal test case which reproduces the same error. And this is another property that holds for sort: if the list is not empty, the first element of the sorted list should be the minimum element of that list. There is one more interesting thing: you can always compare your new implementation with existing implementations. Let's say you have discovered a new, efficient way of sorting, and you want to see whether it is actually correct. sort is a function implemented in the standard library, and you are almost assured that sort never goes wrong. So you compare it with your implementation of quicksort and ask QuickCheck to generate random tests to see where the outputs don't match. Just to add to what Vamshi was saying about testing: this might seem rather useless, because you are thinking, how many times in my code do I actually write a sort function that I need to test? But there is this thing I will talk about later: in Haskell you are encouraged to write code that operates purely on values, very separately from code that interacts with the real world, and hence property based testing starts to make a lot of sense. This bit where you compare your implementation against the standard implementation is another very important thing, because what you can now do is: the first time you write your application, you write it in a very simple way that you can see is correct, that you are confident is working correctly. Then you write a more efficient version, or a version that handles more performance edge cases or whatever, and you compare it against that reference implementation. These kinds of things actually become very useful. I will talk about them more here.
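Sketched as plain predicates, the three properties look like this (written without the QuickCheck dependency so the snippet is self-contained; with QuickCheck you would run, for example, `quickCheck prop_sortIdempotent` and it would generate and shrink random inputs for you):

```haskell
import Data.List (sort)

-- A naive quicksort we want to validate.
quicksort :: [Int] -> [Int]
quicksort []     = []
quicksort (x:xs) = quicksort [a | a <- xs, a < x]
                ++ [x]
                ++ quicksort [a | a <- xs, a >= x]

-- Property: sorting an already sorted list changes nothing.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = quicksort (quicksort xs) == quicksort xs

-- Property: the head of a sorted non-empty list is the minimum.
prop_headIsMin :: [Int] -> Bool
prop_headIsMin [] = True
prop_headIsMin xs = head (quicksort xs) == minimum xs

-- Property: agrees with the standard library's reference sort.
prop_matchesReference :: [Int] -> Bool
prop_matchesReference xs = quicksort xs == sort xs
</imports>
```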
When you have a large code base, especially in production, you need to iterate quickly and you need to be able to maintain that code. The two most important things are: I need to be able to architect, implement and propagate changes in my code base easily, and I need to not break existing code. If I make a change here, I need some sort of strong guarantee that code is not breaking somewhere else. This, I guess, is the reason why a lot of people really like Java: it helps you write this kind of code, and when you are maintaining it, it stays correct. It is not like Python, where if you haven't tested one edge case, suddenly your entire Django application errors out saying "expected string, got int". That kind of stuff should never happen. So, to take you through a quick example of how the separation of concerns and the functional style add a lot of value to code: let's say I have a very simple function that I need to implement, and in Python that function would look like the top part here. The function I want is really, really simple. I have a list of objects, I want to transform those objects, and I also want to print out a particular value inside each object. Then I want to return this transformed list of objects back to whoever called me. This is the sort of idiomatic way of writing it in Python, which is very neat and very readable. Now what happens in Haskell? In the first cut implementation that you will write, you can't actually do IO inside your function, and that is very irritating when you start off with it. So let's look at the first cut implementation.
I want to transform all my objects. transformAll is a function that is actually a partial application, a currying thing which everybody here would be used to: I say map transform. The idea is, take my objects and apply this function called transformAll on them. What is transformAll? It is a function that maps a function called transform, which operates on a single object, over a list of objects, and I get a transformAll function. But I can't print out values with this, and that is painful. So you get a more painful version in Haskell, written like this. This is a little irritating to write, but what you write is a different kind of map: you use mapM when you want to map over something that does IO. Not entirely correct, but let me go with that. So I map a function called logValue over my transformed objects. What is logValue? It is a function that prints; it is a composition, f . g, in this case print . someValue. So logValue is a function that takes a particular object, extracts some value, and prints it. Map this function over all the transformed objects and I will be able to print a list of values. How do I get the transformed objects? The same way I was getting them earlier: the transformed objects equal transformAll applied to my objects, the same thing as before. What is transformAll? Well, this is what it is. Now the interesting thing happens when you start refactoring this code. You want to change, say, the way the function logValue is implemented, the function that is writing to the file; maybe you want to start writing to the database, or maybe the nature of the transformation itself is something you want to change. In practice, what happens is this. I would like you to look at this part.
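A sketch of what that separated shape looks like (Obj, transform and someValue are placeholder names; we use mapM_ since the print results are discarded):

```haskell
-- A placeholder object type and a pure transformation on it.
data Obj = Obj { someValue :: Int } deriving Show

transform :: Obj -> Obj
transform (Obj v) = Obj (v * 2)

-- Pure: transforms a whole list; easy to test on values alone.
transformAll :: [Obj] -> [Obj]
transformAll = map transform

-- Impure: extract a value and print it, as a composition.
logValue :: Obj -> IO ()
logValue = print . someValue

-- Top level: log every transformed object, then hand the
-- transformed list back to the caller.
process :: [Obj] -> IO [Obj]
process objects = do
  let transformed = transformAll objects
  mapM_ logValue transformed
  pure transformed
```

Only `process` and `logValue` touch IO; `transform` and `transformAll` stay pure and independently testable.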
My top level function still remains very clear, even after refactoring. Anybody who visits this code sees clearly what it does: I map a logValue function over my transformed objects. Where do my transformed objects come from? A function. What is logValue? Another function. My transformed objects again come from transformAll. transformAll earlier was just a map of transform; now it is doing some kind of weird averaging, normalizing, something something. The interesting thing is that all the tests I wrote initially, all the way up here, for checking whether transformAll is correct, will continue to work even with the latest implementation. That is something you have to really think about if you want to write code well in another programming language which doesn't separate these concerns so well. To give you an example of how that could start looking in Python: your code that is dealing with the IO, that is pushing to that table, is mixed with the code that is transforming the data. Ideally I could have and should have written it in a way where it was separate, but it is easier to just do that in Haskell, and this kind of separation of IO adds a lot of ease. Okay, there are also two other big things that help in maintenance and iteration that are unique to Haskell, even amongst other programming languages. One is type inference and one is laziness. Quickly, to take you over what type inference is: I want to write typed code. If you are writing code in Java, you are writing type annotations everywhere. Now, programming languages like Java and Rust do something called local type inference. What they do is: inside a function block, because you have given it a type signature, the compiler knows the type of every single variable that you are using inside that function.
And because it knows their types, it can tell you whether your code is correct or not; that is what your compiler does. But Haskell does a lot more type inference. It is certainly not the best type inference engine on the planet, but it is one of the best among mainstream compilers. So I can still write code like Python, without annotating types everywhere, but I still get the kind of type checking safety that I get in Java. To show you an example of what this might look like: here I have a very little function, and all it does is average a bunch of numbers. This function is very similar to what it would look like in Python: I have a list of integers and I am averaging over it. The interesting thing is that when I do this, the compiler will fail, and what GHC can do is reason about what is happening in your code. It says: I see you have defined a function called average. average takes an argument called numbers, and what you are doing is applying a sum function to numbers and dividing by a length function. The sum function I know, because it is included in the standard library; it operates on a list of numeric values. And I know that the length function operates on a list; I don't care what kind of list, but it operates on a list. Hence I know that in your average function, the argument numbers must be a list, and it must be a list of numbers. And hence, when I try to compile this main code, this part is going to fail, because you have not given me a list of numbers, you have given me a list of tuples. This simple thing makes it really easy to quickly iterate on code.
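The average example, roughly (we use integer division with `div` to keep the sketch simple; the talk's slide may have used fractional division):

```haskell
-- No type annotation: GHC infers the type of `average` on its own.
-- `length` returns an Int, `div` needs both sides to be the same
-- Integral type, and `sum` needs a list of numbers, so GHC
-- concludes `numbers` must be a list of Ints.
average numbers = sum numbers `div` length numbers

-- This compiles: a list of Ints.
ok :: Int
ok = average [1, 2, 3, 4]

-- This would be rejected at compile time, because a list of
-- tuples is not a list of numbers:
-- bad = average [(1, 2), (3, 4)]
```

No annotation was written on `average`, yet the uncommented `bad` line would fail to compile with a type error, exactly as described above.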
Because you have a bit of working code, you quickly write a small function, you ask your compiler to type check it, and it does; you don't need to annotate types. Because if you are annotating types, writing functions, annotating types, writing functions, things become a little painful. That is one style of development. Another thing is that Haskell encourages something called type driven development. Analogous to test driven development, what type driven development says is: why don't you think about your entire application first, think about all the functions that you will use, and write the type signatures for them? Then do whatever it takes, whatever it takes, to get your program to compile. Once your program compiles, you know that the architecture of the entire application is solid. Haskell makes this easier by having something called undefined. Because the language is lazily evaluated, you don't actually need to implement a function: you can say that I have an average function which takes a list of integers and gives me an integer, but write average = undefined. I don't implement the function, but I write the type signature, and now everybody who uses average knows its type signature. So you satisfy the type checker without really implementing anything. Then, once your contracts are guaranteed throughout your application, you start implementing things one by one. This is, in a sense, the ideal way of developing something. Another thing that Haskell offers is laziness, which is a slightly subtle concept; I just hope it doesn't get blown out of proportion.
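The undefined trick just described, sketched (the function names here are illustrative, not from the talk; thanks to laziness the program compiles and even runs as long as the undefined parts are never forced):

```haskell
-- Type-driven development: write signatures first, compile now,
-- implement later. `undefined` inhabits every type.
parseConfig :: String -> Either String Int
parseConfig = undefined          -- stub: not implemented yet

validate :: Int -> Bool
validate n = n > 0               -- already implemented

-- The whole pipeline type-checks even though parseConfig is a
-- stub; it only crashes if parseConfig is actually evaluated.
pipeline :: String -> Bool
pipeline s = case parseConfig s of
  Right n -> validate n
  Left _  -> False
```

The contract between `parseConfig`, `validate` and `pipeline` is fixed and checked before a single line of parsing logic exists.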
Just to show you first what laziness looks like fundamentally, mixing in a bit of composition and currying: I have doubleMe, a function that doubles things. I have a function called doubleList, which is just a map of doubleMe. I have a function called quadrupleMe, which maps doubleMe . doubleMe, that is, doubleMe applied twice to whatever comes in. If I want to quadruple the first n elements of a list, I take two arguments, a list and a number n, I reuse the same function that I defined, and I define how I am getting that list by doing take n on it. I hope this makes a little bit of sense. The interesting thing is that this kind of thing becomes very easily possible because of lazy evaluation; it becomes rather hard to write elsewhere. doubleMe is easy, doubleList is also nice because of list comprehensions, quadrupleMe is also quite nice because of comprehensions, but here is where it gets a little irritating: the nature of data generation is deeply tied to the nature of processing your data. That kind of binding is very hard to prevent, and separating it becomes very easy because of lazy evaluation. I will give you a tangible example of how this affects structuring your code in a very fundamental way. Okay. So I have a function, this is Haskell, that takes an argument. If condition one is met, then an expensive calculation is run, else another, cheap calculation is run. What is condition one? where conditionOne = argument > 0. What is the expensive calculation? Many operations happen, and then again many operations happen. What is the cheap calculation? Some cheap operation or other. I get very readable code, and I have defined what that code actually is somewhere else. Let's see how this would look otherwise. Again, this is a trivialized example where this doesn't really matter, but it shows how you would typically end up writing it in Python, assuming you didn't set out to architect it carefully from scratch. You can do the same thing in any language.
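The lazy Haskell version just described can be sketched like this (the names and the particular "expensive" computation are ours); with where-bindings, expensiveCalculation is only ever evaluated when conditionOne holds:

```haskell
compute :: Int -> Int
compute arg =
  if conditionOne
    then expensiveCalculation
    else cheapCalculation
  where
    conditionOne = arg > 0
    -- Only forced when conditionOne is True; for arg <= 0,
    -- laziness means this sum is never computed at all.
    expensiveCalculation = sum [1 .. 1000000] + arg
    cheapCalculation     = arg - 1
```

The top of the function reads as pure algorithm; the definitions below it are bound eagerly in the text but evaluated only on demand.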
I am just saying what kind of practice is encouraged. If you wrote it in Python, you would say: okay, I have a function; if arg is greater than zero, then many operations happen, else cheap operations happen. This is what would be natural to write, and I would love writing the code in this way, because the algorithm is so neat: I look at the function and I know exactly what it is doing. Okay, you know what, I don't care about this part, it is just generating the data; the crux of this function is: if condition one, something expensive, else, the cheap thing. But I cannot do this here. Why? Does anybody know why writing code like this would be a really bad idea? The expensive calculation would always be run, right? And the advantage that you get in Haskell is that it doesn't care about the order of evaluation. What is happening here is that the expensive calculation is never actually computed: if condition one is not met, it never gets computed. So you can afford to write code like this. This is kind of subtle, but when you start writing a lot of code and it gets structured like this, you don't even think about laziness, yet laziness allows these kinds of things to happen. It sort of really matters. So okay, let me also talk a little bit about performance. Generally, Haskell has fairly high performance compared to other languages, at a high level because it is compiled and because it is statically typed, and also because of fundamental things like immutability and referential transparency, the compiler can do a lot more to optimize your code: things like memoization of function outputs that would not otherwise be possible. Gabriel Gonzalez, the guy who has written a bunch of really, really amazing libraries for Haskell, and who I think works at Twitter as well.
His high level, unqualified statement is that Haskell is roughly comparable to Java in terms of performance, both when written by a noob and when written by an expert, for different reasons. That is a fairly sensible statement. An example I would like to give you of where GHC's runtime really shines for performance: this is a benchmark of a simple web server. The blue line is nginx; the dotted line is the Haskell web server. This axis is the number of cores, or rather the number of workers. As the number of cores increases, you see nginx speeding up, whereas this fellow goes up linearly for as long as possible; I think it tops out at 32 cores. How does this happen? What is letting Haskell do this? Fundamentally, GHC brings in two things. You know node is known for its performance, and the kind of performance node is known for is basically because it makes your IO asynchronous. That is why everything is in a callback: you want to read a file, so you call file.read and pass a function saying what to do once the file is read. That is the style of doing things in node. This is good because node underneath has an event engine that says: once the file is read, I'll call this function for you. So you can run hundreds of functions, and if they are all doing IO, code will keep executing; nothing will block on disk waiting for the file to be read. That is how node is good. But node is still not multi-core; it is getting there gradually, but it is still single OS threaded. It pretends to give you infinite threads, or sort of infinite processes, via callbacks on top. That is great, but it is still one OS thread. What GHC says is: I will give you a model of forking a thread.
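That model looks like this in code (a small sketch; the helper inParallel is ours, built on forkIO and MVar from the base libraries):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- Run an IO action in n lightweight threads and collect the
-- results. Each forkIO is cheap: the runtime multiplexes these
-- green threads over real OS threads, parking them in its event
-- engine when they block on IO, with no callbacks in sight.
inParallel :: Int -> (Int -> IO a) -> IO [a]
inParallel n act = do
  vars <- mapM spawn [1 .. n]
  mapM takeMVar vars
  where
    spawn i = do
      v <- newEmptyMVar
      _ <- forkIO (act i >>= putMVar v)
      pure v
```

You write straight-line code like `response <- get request` inside each thread, and the runtime makes the blocking efficient underneath.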
When you fork a thread, the runtime decides whether it maps to a real OS thread or whether it's a lightweight thread managed in its own event engine, right? So, at a very high (and slightly incorrect) level, it takes Node's event system, merges it with the sort of threaded runtime Java has, and puts them together. The result is that you can run millions and millions of lightweight threads without a callback programming model. In your head, you still write response = request.get(something), and this is as efficient, if not more so, than Node's request.get with a callback to handle the response. Now, because Haskell is compiled, interoperability with other languages is great, and that's really important if you're seriously chasing performance. The two things we explored critically were C and Lua. For our API gateway, when we needed more performance juice while talking to the network and proxying requests, we interfaced with libcurl directly, and that was easy because FFI-ing with C is quite easy. FFI-ing with Lua is also easy, but that's because Lua is meant to be embedded; it's an absolutely great language that you can easily embed into your code. If you have applications where you want programmable configuration, where you don't want the user to specify configuration just as a JSON file but want to let them program it, you should look at Lua. Lua has a stack-based communication model which is absolutely beautiful and plays really well with Haskell; it probably plays well with every statically-typed language. Bumshi, would you like to talk about tooling? Yeah, okay. So I'm going to skip through the tooling bit. The good thing about the tooling is that Haskell has something called Cabal, and Cabal has something called Cabal sandboxes, which are very similar to Python's virtualenv.
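That forking model can be sketched with `forkIO` from `Control.Concurrent` (the thread count and the use of `MVar`s for collecting results are just illustrative choices): each forked computation is a lightweight green thread, and a "blocking" call like `threadDelay` parks only that green thread, not an OS thread, so spawning ten thousand of them is cheap.

```haskell
-- Sketch of GHC's lightweight (green) threads: blocking-style code,
-- multiplexed by the runtime over a handful of OS threads.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

main :: IO ()
main = do
  dones <- forM [1 .. 10000 :: Int] $ \i -> do
    done <- newEmptyMVar
    _ <- forkIO $ do
      threadDelay 1000          -- "blocking" sleep: parks only this
                                -- green thread, not an OS thread
      putMVar done i
    pure done
  results <- mapM takeMVar dones  -- wait for all 10,000 threads
  print (sum results)             -- prints 50005000
```

Compiled with `-threaded`, the same code also spreads across cores; the programming model doesn't change.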
So that makes life very easy with package management: I'm downloading ten packages, I'm running code. The tooling around profiling and benchmarking is also very good; benchmarking is phenomenal. Profiling is less smooth because you need to recompile everything, but once your base libraries are built for profiling, the profiling output is absolutely great, because, remember, your entire application is this top-down, functionally-architected piece of code. The profiling output annotates each function as a cost centre, so you run your program and immediately see: oh, okay, 90% of the time is in this function; optimize that, and so on. We got the first version of our gateway from 20 requests per second up to 10,000 just by profiling, profiling, profiling, within the span of a week. Now, deployment is generally nice if your libraries can be statically linked. Libraries typically ship as a .so file and a .a file; if you have the .a files, you can compile the whole thing into one gigantic binary. So you get this 100 MB binary that runs on every system. But some libraries don't come statically compiled, things like libpq, the Postgres interfacing library, and that becomes painful. But then there's Docker, right? Docker handles this very well. Has everybody used Docker? Oh, okay, all right, there you go. Now, to give you a more high-level idea: if you're really thinking about putting Haskell to use at work and building a team around it, you should be aware of things like this. The community is very welcoming; you'll find them typically on Reddit, IRC, and Stack Overflow, and they're very responsive and great. Finding Haskellers, though, is hard.
In Chennai, I think there's, like, Bumshi and me, and now Akshay: a single-digit number of Haskellers, right? So it's hard to find Haskellers. There's also a bit of a learning curve, just because you need to dedicate yourself to a new way of thinking; once you get used to it, everything is great. But you will find people online. We in fact had a guy who wrote one of the early, very well-performing libraries, I think one of the fastest concurrent queue libraries in the world, where even under contention a queue operation takes on the order of hundreds of nanoseconds. We found him online, he liked what we were doing, so he came down and stayed with us for a month. That's the good part of the community still being young: you'll find people like that to collaborate with, so that should not be an issue. Even the big wigs of the community, the guys who originally wrote the Haskell compiler back in the '90s, are quite approachable, so it's all good. Now, to roughly sum up the status of Haskell: it's still immature in some areas. Cross-platform support on Windows is still quite buggy; on Mac and Linux it's been great in our experience, and with Docker it's been pretty easy. Education is uneven: there are lots of beginner resources, but the number of intermediate resources is low. So if you're a good functional programmer and really want to get to grips with it and build something, the resources are not that many. It's also not very good currently for distributed programming; it's nowhere close to Erlang, not even close. But for concurrency on a single machine, it is best in class. Game programming? Don't even think about it. Numerical programming: also wait till the libraries mature.
Hot code loading, IDE support, and front-end web programming are also still weak spots. But web servers and compilers: absolute best. If you want to write a compiler, you should not do it in any language other than Haskell; you should learn Haskell and then write your compiler, right? Parsing and writing DSLs, again, best in class. Stream processing: the whole lazy model works really, really well, and in the Haskell ecosystem you can stream input and output very well. There are great libraries out there, and the types make these systems easy to write and maintain. All right, I think that's it from our side; any questions? Any more specific examples? Yeah, yeah, sure. But at the same time, when you write things in terms of a typed contract, that's your proof, right there, almost; most of it is right there. Especially because this idea of pure code versus impure IO, when I first heard it, sounded like more of a pain than a helpful thing. But it's actually very important, because this idea of having no side effects, like I think she mentioned in the morning, is very, very nice. The compiler can guarantee it; it's not you guaranteeing it, it's not just that my function looks pretty. The same input will give you the same output: the compiler guarantees that for the same input values, you will always get the same output values. That's insane. That is literally a little insane, because that's your proof right there, right? So in that sense, for things like NumPy: there are libraries emerging, I think Frames is one. Sure, you can do it. But why go through that? You'd rather use Python, which interfaces with C for NumPy, to get the same benefit. It's not great yet; it'll get there. Any other questions? All right, okay, guys.
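As a small taste of that parsing claim, here is a hedged sketch using only `Text.ParserCombinators.ReadP` from base (a real project would more likely reach for a library like attoparsec or megaparsec): a parser for comma-separated integers, written as a composition of tiny parsers.

```haskell
-- Minimal parser-combinator sketch using only base.
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

-- One integer: the longest run of digits.
int :: ReadP Int
int = read <$> munch1 isDigit

-- Comma-separated integers, consuming the whole input.
ints :: ReadP [Int]
ints = sepBy1 int (char ',') <* eof

parseInts :: String -> Maybe [Int]
parseInts s = case readP_to_S ints s of
  [(xs, "")] -> Just xs   -- exactly one full parse
  _          -> Nothing

main :: IO ()
main = do
  print (parseInts "1,2,3")   -- Just [1,2,3]
  print (parseInts "1,,3")    -- Nothing (malformed input rejected)
```

The grammar reads almost like its own specification, which is a large part of why DSLs and compilers are such a comfortable fit.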
One more thing to keep in mind: there is a nascent Bangalore Haskell community. We just moved to Bangalore last week, so we'll be plugging into that gradually. Meanwhile, if you're seriously considering writing stuff in Haskell and need pointers to resources or help with hacking on Haskell, get in touch with us. You should also check out the Bangalore Haskell group; I think they meet fairly regularly, and that will help you out. And of course, we are hiring, yeah? So if you really want to use Haskell, or do some serious functional systems kind of programming, and actually a lot of systems programming as well, which we've touched upon, then you should join us, right? Okay. Thank you, guys.