So, a bit about me. My name is Rahul. I go by the handle missingfaktor on the internet. I work for ThoughtWorks. For the past five years, I've been dabbling in multiple programming languages: Scala, which I worked with professionally for three years, and also Haskell, Clojure, and even some obscure ones like Factor. Before I move ahead, I would like to survey my audience and see what distribution of communities we have today. How many of you have programmed in functional languages before? Please raise your hands. OK. How many of you believe that the functional paradigm is clearly superior to the object-oriented paradigm? OK. And how many of you believe it's the other way around? All right, only one guy. And how many of you believe that multi-paradigm languages are the way to go? OK. So I guess we have a good distribution of all kinds of people. This should be a lot of fun. So this is the situation today. We see these kinds of statements floating around all the time. Some people will claim FP is superior to OO. Some claim OO is the natural way of thinking. Some claim OO is passé, and so on. We hear these kinds of statements all the time. And here is our regular programmer, completely confused about what to believe and which of these is true. Something has to be true, right? So if you are hoping that by the end of this talk I'm going to answer the question of which one is better, or whether the combination is better, I have to disappoint you. I will not be answering that question for you. My goal, rather, is to leave you with even more questions, and with more meaningful questions. So in this talk, I'm going to try and tear apart the notions you may have about paradigms, functional programming, and object-oriented programming. Essentially, my goal is to piss off everyone: OO people, FP people. So yeah, let the feathers fly.
This word "paradigm" is very popular with programmers, for some reason. Here is a dictionary definition of paradigm: a framework containing basic assumptions, ways of thinking, and methodology, accepted by a community. Essentially, it defines a school of thought. And as should be evident from the definition, this term is vague. It's a vague attempt at categorizing schools of thought. So this term, even though it is useful in certain contexts, I'm going to argue is not very useful in the context of software. The reason is that the term itself has an inherent vagueness to it. If you use the term while keeping that vagueness in mind, it's all fine. But if we look at the software landscape today, what you will notice is that paradigms are treated as though there is a clear boundary between them. This technique is the FP technique; this is the OO technique. Partial application belongs to FP; subtyping belongs to OO. And so on. These so-called paradigms are not really disjoint schools of thought, and that's the point I'm going to drive home through this talk. What I have seen is that the whole notion of paradigm leads to unnecessary rivalry among these camps. So according to me, the term paradigm, when used in the context of software engineering, hurts more than it helps. And I'm not the only one with this thought; lately, many greatly respected programming language researchers have shifted to the opinion that paradigms should be abandoned. Here is an excerpt from an abstract that Shriram Krishnamurthi, a very well-known programming language researcher, submitted for a course. What he says is that programming language paradigms are a moribund and tedious legacy of a bygone age; modern language designers pay them no respect, so why do we slavishly adhere to them? And here is Slava Pestov, creator of the Factor programming language.
I have picked this statement from one of his blog posts, where he says labels like "object-oriented" and "functional" have so many conflicting interpretations that they are almost totally devoid of meaning. In that blog post, he goes on to claim that any new programming language, unless it's a complete copy of some predecessor, cannot be judiciously categorized into any of the existing paradigms. Even this fake philosopher, PLT Borat, does not like the word paradigm. What he says is that PL paradigms do not really exist; there are only features, techniques, and idioms grouped in different ways. Throw away your paradigm and be free. So that's what they really are: features, techniques, and idioms roughly grouped into certain buckets, nothing more than that. If you treat them as anything more than that, it's going to be harmful. And according to me, paradigms are a computer science equivalent of tribalism, because the differences that you see among these paradigms are typically more cultural or sociopolitical than technical, really. Now that's a really big statement, and I'm going to run you through a series of examples to show you how it can be true. If you take an overview of the ideas labeled as functional and the ideas labeled as object-oriented, what you will notice is that many of these ideas are complementary. In some cases, the ideas are common to both camps. And sometimes there will be some impedance mismatch, but those cases are fewer than you may think. So there is a lot of room for cross-pollination here. Many ideas could be brought together from various different so-called camps, and that could lead us to better abstractions. So that's the assertion I'm going to make: if we forget paradigms and admit interesting and useful ideas, it will lead us to better abstractions and better programming languages. I'm going to show you some examples of that.
Before moving to that, let's talk about this a bit. What are today's two most popular paradigms? Object-oriented and functional, correct? Now, to have any meaningful discussion about these two, we must define these terms. But if you take all the object-oriented programming languages together and take an intersection of their feature sets, what you will notice is that you get an empty set. There is not a single feature common to all the object-oriented programming languages. You tell me one feature, and I can tell you a language which does not have it and calls itself object-oriented. Therefore, I'm defining object-oriented programming in our context to accommodate the more mainstream object-oriented programming languages. So here is the first defining feature of objects: objects serve as first-class modules. This is the most minimal definition, given by a well-known researcher, William Cook. So what does he mean by that? The term sounds really fancy, but all it means is that when you create a class, say Point, and then you create an object of this class, p = new Point, you can say p.x, where x refers to one of its fields. The object acts as a namespace for its fields and its methods, and in that sense it's like a first-class module. Cook claims that this is a sufficient definition. However, there are some researchers who do not really agree with that, so I have included some more. The next feature, which I think is essential to objects, is self-recursion. Again, it sounds like a very fancy term, but all it means is the self-reference, this or self, that we know from all the object-oriented languages. In programming language theory, this is represented by the Greek character μ. So if there are any functional programmers here who thought they are the only ones with cool Greek characters: boo. The next feature, which I think is essential to object orientation, is subtyping.
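Both ideas above can be sketched in a few lines of Scala (the Point class and its methods are mine, just for illustration):

```scala
// First-class module view: the object is a namespace
// for its fields and methods, accessed with dot notation.
class Point(val x: Int, val y: Int) {
  // Self-recursion: `this` lets a method refer to
  // the very object it is invoked on.
  def translated(dx: Int, dy: Int): Point =
    new Point(this.x + dx, this.y + dy)
}

val p = new Point(1, 2)
val q = p.translated(3, 4) // q.x == 4, q.y == 6
```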
For the purpose of this talk, we can think of subtyping as subclassing. Even though that's not an accurate description, we can go with it. And the next is traits, or classes. These essentially serve as templates for your objects. Again, not really essential to OO, but since they are part of pretty much every mainstream object-oriented language, I have included them. Next, let's try and define functional programming. To be honest, I had an even harder time defining this, and we'll see why. The most standard feature of functional programming is first-class functions, right? What do we mean by first-class? The term "first-class function" was, I believe, first used by Christopher Strachey. It's really difficult to define this term precisely, so we'll run with a definition that fits our context: functions in your language have the highest rights. They enjoy all the rights that regular values do. You should be able to pass a function to another function, return a function from a function, store a function in a data structure, and so on. This gives rise to higher-order functions, which are essentially functions that accept functions or return functions. Then there's function composition, in which you combine multiple functions in interesting ways to do something more interesting. The next is immutability: in functional languages, there is a lot of emphasis on immutability. Now, I would claim that functional programming done with dynamic typing and functional programming done with static typing are so different that, by the canonical interpretation of "paradigm", the latter constitutes a new paradigm altogether, which is why I've included another slide for it. The features I mentioned for functional programming still hold, but some new ones come to the table. The key one is algebraic data types. We'll get to those later.
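The first-class-functions rights listed above can be demonstrated concretely; a minimal sketch (the function names are mine):

```scala
// Functions are values: they can be stored in vals...
val double: Int => Int = _ * 2
val inc: Int => Int    = _ + 1

// ...passed to and returned from other functions.
// `twice` is a higher-order function: it takes a
// function and returns a function that applies it twice.
def twice(f: Int => Int): Int => Int = x => f(f(x))

// Function composition: chain two functions into one.
val doubleThenInc: Int => Int = double.andThen(inc)

// They can also be stored in data structures.
val pipeline: List[Int => Int] = List(double, inc)
```

For example, `twice(double)(3)` gives 12 and `doubleThenInc(5)` gives 11.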
The next one is type classes. Now, type classes are not really present in all the typed functional languages, but all of them have some equivalent, so I've included them. And lastly, equational reasoning, a way of reasoning very commonly employed in typed functional languages. Now, my original statement was that if you forget paradigms and bring ideas together from various sources, it could lead to better abstractions and better programming languages. As an example of that, I'm going to use the Scala programming language. I chose Scala because, first, this is the language I'm most comfortable with; it's what I've used for the past three years. And secondly, because I believe this is the language that realizes this goal very well. OK, so I came across this talk of Odersky's, which he delivered back in 2006 at Google. At the time, I don't think he was expecting Scala to be used by anyone; it was a research project. And the theme of this talk was unification. In his slides, there are two chairs and a man sitting in between. What he said was: there's a functional chair and there's an object-oriented chair, and I'm the man sitting in between. What he tried, through this language and some other languages he created before it, was to unify various ideas into a single, simpler whole. I came across this tweet some time back, which I think captures the spirit of Scala very well: if a grand unified theory of programming languages existed, its implementation would be called Scala. However, there has also been quite a bit of criticism of Scala. This guy is a Haskeller. He says: if you like programming languages and food, here is a culinary equivalent of Scala — a vegetarian ham with chicken flavor. So, jokes apart, what Odersky wanted was to have a few constructs with which he could implement all these ideas.
All these interesting ideas: those put under the functional label and those put under the OO label. And these are the few orthogonal features he chose for this implementation. By the way, this is one way of going about it; there are n number of ways, and if time permits, I will point you to some others. So: traits and classes, and objects — these are well known in the OO world — and implicits, which is something new that we'll get to later. OK, so let's start the showcase of examples where Scala tries to bring a bunch of ideas together, and see how it makes them better. The first example I have is a very simple one: functions as objects. Scala supports first-class functions; it's a functional language, after all, so it has to support them. But functions are still objects in Scala. How does it do that? It is simple. There are traits like Function1, Function2, Function3, and so on (up to Function22), which are essentially interfaces with one single method called apply. So what happens when you write a lambda? Say I define a function f which takes two integers and adds them up. This desugars to an instantiation of Function2, and those who have done Java can easily recognize it as an anonymous inner class instantiation: there is a method called apply, and whatever you wrote in the lambda goes into the body of apply. And at the type level, when you write a type like (Int, Int) => Int, it desugars to Function2[Int, Int, Int], where Function2 is nothing but an interface with three type parameters. So the point I am driving at is this: in Haskell or OCaml or F#, functions are primitives. They are not made of any other matter. Scala takes another approach and makes them objects. So there has to be some advantage to this approach, right? The first advantage is that you can treat data structures as functions.
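That desugaring can be written out by hand; both definitions below are interchangeable (the names f and g are mine):

```scala
// What you write: a lambda taking two Ints and adding them.
val f: (Int, Int) => Int = (a, b) => a + b

// Roughly what it desugars to: an anonymous instance of the
// Function2 trait, with the lambda body inside apply. Note the
// three type parameters: two argument types and the result type.
val g: Function2[Int, Int, Int] = new Function2[Int, Int, Int] {
  def apply(a: Int, b: Int): Int = a + b
}
```

Calling `f(2, 3)` and `g(2, 3)` does exactly the same thing: both invoke an apply method.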
When you talk about a sequence, you can think of it as a function that goes from Int to the element type. What is the canonical operation for a sequence? Indexing: I ask the sequence what lies at its third place, and it gives me the element back. So you can view Seq[A] through its canonical function, Int => A. Similarly for Set: what is the canonical operation for a set? The containment test, the membership test. So you can think of Set[A] as a function from A to Boolean. Then Map, maybe the simplest of all, is a function from key to value. By the way, even Clojure does that. The next advantage of functions as objects is that you can have your own data types which act like functions. This is a small advantage, but it helps, really. A Parser is nothing but something that takes an input and gives you a parse result back, correct? Similarly, there's a labelled function here, a very simple interface: it just extends the function interface and adds one more abstract method to it. This helps when you're doing a lot of expression parsing. Those symbols there are variance annotations; we can ignore them for the purpose of this talk. So here, just because functions are exposed as regular OO interfaces, you are able to do all these things, which are not easily possible in other functional languages. The next example I have is that of records and classes. Most typed functional languages, in fact all functional languages, will give you some sort of record mechanism. For those of you not familiar with records, you can think of them like C structs: essentially, data types without methods. Now, when you define a record type in any functional language, you get a bunch of things for free. You get structural equality checking. The fields are immutable. You get field accessors. Sometimes you get string representations.
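The collections-as-functions idea from above, concretely: since these types extend the function traits, applying them with parentheses is the canonical operation.

```scala
val xs: Seq[String]    = Seq("a", "b", "c")
val s: Set[Int]        = Set(1, 2, 3)
val m: Map[String, Int] = Map("one" -> 1, "two" -> 2)

// Seq[A] is a function Int => A: indexing is function application.
val second = xs(1)     // "b"
// Set[A] is a function A => Boolean: the membership test.
val hasTwo = s(2)      // true
// Map[K, V] is a function K => V: lookup.
val one = m("one")     // 1
```

You can even pass a Set where a predicate is expected, e.g. `List(1, 5, 2).filter(s)`.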
And you can use those data types in pattern matching. There's a lot you get for free when you define a record type. In Scala, what we have is case classes. Case classes are equivalent to the immutable records in other functional languages. But as the name suggests, case classes are still classes. What Scala does when you define a case class is generate all these methods for you: equals, hashCode, toString, copy, apply, unapply, and so on. I'm going to focus on two of these, copy and unapply in particular, and show you some differences from records. Here is a record update in Haskell. I defined a data type called A, which has two fields, a and b. I created a value a1 of type A. Now I want to change the value of field a to something else, and this is the syntax Haskell provides me for that: open brace, whatever field you want to change, close brace. What I want you to notice here is that record update is special syntax in Haskell — language-level syntax. And that's the case with most implementations, I guess; OCaml and F# do the same thing. Now, this syntax is not usable anywhere else, and a good programming language should not have dedicated syntax for something as specific as this, is what I believe. And this is what Scala does in this case: when you define a case class, the compiler will generate this copy method for you. The implementation is pretty straightforward. All the parameters that appear in the constructor also appear in the argument list of copy, and their default values are the values that the object already holds. So here I create a value a1 of type A, and when I want to change the value of field a, I just say a1.copy(a = ...). What you should notice here is that copy is just a regular method, and it's using default arguments and named arguments to enable this kind of syntax.
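A minimal sketch of that copy mechanism (the Account class and its fields are mine):

```scala
// A case class: Scala's equivalent of an immutable record.
// The compiler generates equals, hashCode, toString, copy,
// apply, and unapply for it.
case class Account(name: String, balance: Int)

val a1 = Account("Rahul", 100)

// "Record update" is just the generated copy method plus named
// and default arguments: unmentioned fields keep their values.
val a2 = a1.copy(balance = 150)
// a2 == Account("Rahul", 150); a1 is untouched.
```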
Now, default arguments and named arguments can be used anywhere else. They were not meant for this use case in particular, but they easily solve it, and you do not have to put special syntax in your language just for updating records. So this shows how copy can employ existing features to implement something which requires dedicated syntax in other languages. I mentioned one other method, unapply; we'll come back to it later. Next, algebraic data types. These comprise sum types, product types, exponent types, and more. Here I have a very simple algebraic data type definition, Option: data Option a = Some a | None. That pipe is an "or". When you define this data type, again, you get a bunch of things for free. When you use a value of Option a in a pattern match, the compiler knows what the possible cases are, so it can perform something called exhaustiveness checking: it can tell you whether all the cases have been matched, whether there are any redundant cases, missing cases, and things like that. And secondly, of course, you can use the data constructors in pattern matches; in Haskell, that's the only way to participate in pattern matching. Now, Scala will always try to avoid this sort of magic and open up whatever mechanism it is using to you as a programmer. So let's see how Scala does this. Here is the equivalent definition in Scala. Scala employs the existing class hierarchy to implement the concept of algebraic data types. One keyword you add here is sealed. Without it, this is regular inheritance. With sealed, the type system knows that all the extensions of this type Option are in this compilation unit; you cannot extend this data type outside it. Why is this beneficial? It allows the compiler to know all the possible cases.
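A sketch of such an ADT as a sealed class hierarchy (I've named the types Opt, Som, and Non to avoid clashing with the standard library's Option):

```scala
// `sealed` tells the compiler that every subtype lives in this
// compilation unit, which is what enables exhaustiveness checking.
sealed trait Opt[+A]
case class Som[A](value: A) extends Opt[A]
case object Non extends Opt[Nothing]

// A match on Opt is checked for exhaustiveness: remove a case
// below and the compiler warns about it.
def describe(o: Opt[Int]): String = o match {
  case Som(n) => s"got $n"
  case Non    => "empty"
}
```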
And the exhaustiveness checking that you had in Haskell can be enabled here as well. So if you use a value of type Option in pattern matching, Scala can actually perform exhaustiveness checking and tell you all the redundant or missing cases. The next part is pattern matching. As I mentioned before, pattern matching is not really magic in Scala. Or rather, I would say it is magic, but it's magic which is available to you as well. When you define a case class, Scala generates a method called unapply. What does it do? It takes a value of that type and deconstructs it into its components. The reason the result is wrapped in an Option is to also indicate whether the matching happened or not: Some means the match happened, None means it did not. So this is what Scala does for you. And as we'll see, since this contract of unapply is open to you, you are free to define your own unapply. You do not have to define algebraic data types just to benefit from pattern matching. Now, again, what are the benefits of implementing this as a regular class hierarchy over having algebraic data types as a concept of their own in the language? Here's the first advantage, according to me at least. In Haskell, when you define an ADT — by ADT I mean algebraic data type, not abstract data type — say the one I have defined here, ColorChoice, which has two cases, Custom Color or Default. Forget that first line. When I use that Custom data constructor, this is how it looks: backgroundColor = Custom Red. But what does Custom mean, right? My namespace has been polluted with this word Custom. There's no way for me to know what Custom really denotes unless I look at the type. In Scala, this is a very easy problem to solve.
What you do is have a companion object of that sealed type, and put your data constructors — the case classes and case objects — into the companion object. And this is how the usage looks: ColorChoice.Custom(Color.Red). So the first-class module aspect of objects helps here: they provide convenient namespaces, and you can use them like that. You have this in Java as well, right? Whenever you define an enum in Java, the various cases are housed within the enum, so you will never confuse Color.RED with Signal.RED. The next advantage, which we actually use, is extracting common behavior into mixins. Algebraic data types, even though they are algebraic data types, are still a class hierarchy, and all the abstractions available to your regular classes are available to your algebraic data types as well. So let's say you have some common behavior in your entities which you want to extract out. You can do that. The example I have here is Enum; forget the Manifest part. What this trait says is: you define the all method for me, and I give you the fromString method, and maybe some more methods like that. And I have another entity here called Directive, which has two case objects, and whose companion object extends Enum[Directive] and provides the method all. So I can say Directive.fromString, pass it a string, and it will return the appropriate object for me. The thing to note here is that something like this cannot be done easily in languages where algebraic data types are an opaque structure. This implementation keeps them as a regular class hierarchy, and the various details are open to you. If you wanted, you could customize the pattern matching, extract common behavior into mixins, and do many kinds of things. The next example I have is that of pattern matching.
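Both advantages above can be sketched together; a minimal version, with the Manifest machinery omitted and the Directive case names (Embed, Include) invented by me for illustration:

```scala
// Advantage 1: constructors namespaced in a companion object.
sealed trait ColorChoice
object ColorChoice {
  case class Custom(name: String) extends ColorChoice
  case object Default extends ColorChoice
}
val background: ColorChoice = ColorChoice.Custom("red")

// Advantage 2: common behavior extracted into a mixin.
// You define `all` for me; I give you `fromString`.
trait Enum[A] {
  def all: Seq[A]
  def fromString(s: String): Option[A] =
    all.find(_.toString == s)
}

sealed trait Directive
object Directive extends Enum[Directive] {
  case object Embed extends Directive
  case object Include extends Directive
  val all: Seq[Directive] = Seq(Embed, Include)
}
```

Now `Directive.fromString("Embed")` returns `Some(Directive.Embed)`, with no hand-written lookup code per ADT.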
As I mentioned before, pattern matching happens to be magic in most languages. In Scala it's not, and let's see how. So here's the contract. Let's say you want to match a value in a pattern matching block — a simple match, nothing else, no extraction. That's a Boolean test, correct? Matches or does not match. So the contract is A => Boolean. That's all. Now, in more advanced pattern matching use cases, we not only match things, but also extract values out of them. Those of you who were at the earlier talk must have seen list deconstruction: x :: xs pulls the head and the tail apart. For things like that, you must be able to extract out a value. So that's the contract you need to fulfill: A => Option[B]. The Option part tells you whether the matching happened or not, and B is the type of the value that gets extracted. There is one more variation, which allows you to extract multiple values all at once: A => Option[Seq[B]]. The Option part, again, is matched or not matched; the Seq[B] is the multiple values that were matched. By opening up this contract, even your non-algebraic structures — regular objects — can benefit from pattern matching. You can preserve your encapsulation. You do not have to expose your private variables. And you can write your own unapply, which exposes your object in whatever way you want; you can masquerade as whatever you want, which is an advantage. This feature is called extractors. So why is this an advantage? In object orientation, the common theme is encapsulation; there are benefits to be gained from not exposing state. With extractors you can have that, and pattern matching too. Now, regular expressions are the class I have taken as an example. The Regex class in Scala has a method called unapplySeq, which allows me to do things like this.
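The encapsulation-preserving extractor idea can be sketched like this (the Temperature class and the Celsius extractor are mine):

```scala
// Private state stays private...
class Temperature(private val kelvin: Double) {
  def asCelsius: Double = kelvin - 273.15
}

// ...yet the type still participates in pattern matching,
// via a hand-written unapply. Contract: A => Option[B].
// We choose to expose the value as Celsius, not as the
// internal Kelvin representation.
object Celsius {
  def unapply(t: Temperature): Option[Double] = Some(t.asCelsius)
}

def weather(t: Temperature): String = t match {
  case Celsius(c) if c < 0 => "freezing"
  case Celsius(_)          => "above freezing"
}
```

So the object masquerades as a Celsius reading in patterns, while its actual representation remains hidden.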
What I've done here is write one regular expression called moduleID, with two capture groups in it. And unapplySeq in the Regex class is defined in such a way that whatever the capture groups matched will be given back. So you can match anything against these regular expressions, extract the values, and do whatever you want with them. Then there's one more use case: since the contract has been opened up to you, you can abstract over it in any way you want. What I've done here is define a class called Matcher, which takes a Boolean condition — a predicate function — and lifts it into an object whose unapply will call this function. As simple as that. So you can define pattern objects simply like this. This is how the implementation looks: the Matcher class takes the condition as a parameter, and the unapply method invokes that condition. I define two patterns here, and then I create a third pattern by composing those two: I say even and positive. and is a combinator here, which makes sure both conditions are satisfied before the match happens. So I can say 6 match { case even and positive => ... }; 6 is both even and positive, so you get true. And the implementation is pretty straightforward: Matcher is nothing but a wrapper for your function, so all you have to do is create a Matcher which calls both and puts an && in between. Another interesting thing about pattern matching in Scala is that even pattern matching blocks themselves are values. I've defined two blocks here and stored them in variables. And now I can compose these two using regular combinators. So this is what I've done: I took the first block and the second block and said orElse. orElse is the combinator that composes these two partial functions. And as it happens, orElse is not a special operator; it's just another method.
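A sketch of that Matcher idea (a simplified version under my own names, not the speaker's exact code):

```scala
// Lifts a predicate into an object whose unapply runs it.
// This is the Boolean form of the contract: A => Boolean.
class Matcher[A](p: A => Boolean) {
  def unapply(a: A): Boolean = p(a)
  // Combinator: the composed pattern matches only when both do.
  def and(other: Matcher[A]): Matcher[A] =
    new Matcher[A](x => p(x) && other.unapply(x))
}

val even     = new Matcher[Int](_ % 2 == 0)
val positive = new Matcher[Int](_ > 0)
val evenAndPositive = even.and(positive)

def check(n: Int): Boolean = n match {
  case evenAndPositive() => true
  case _                 => false
}
```

So `check(6)` is true, while `check(3)` and `check(-2)` are false: patterns have become first-class, composable values.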
And I can call the composed block on an input that only the second block handles, and that block will be invoked, because the resultant partial function is defined for all those cases. You can even ask it whether its domain contains a given argument, without invoking it. These are the kinds of things you can do, and they are pretty useful for things like exception handling, where you want to separate out your handlers, compose them together, and so on. Now we are seeing a theme here, right? Many of the features that we take for granted, or that are magic in many functional languages, have interfaces in Scala which are composed of traits, objects, implicits, and other core primitives. And they're open to you, because of which you can use them in ways that other functional languages do not allow. By the way, I like these pattern matching blocks so much that I have ported them to Clojure; you can check out my GitHub for the implementation. The next part I have is slightly more advanced: type classes. Type classes are a very popular feature in Haskell. By the way, how many of you are aware of type classes? OK, so I'll briefly explain what type classes are. You can roughly think of them as interfaces in your languages, except that the implementation lives outside the data type. Let's say there is an existing data type, Integer, and you want to make it honor some new contract. In a language like Java, that will not be possible, because the data type was already defined by someone else; you can't really go and make it implement a new interface. In the case of type classes, the implementation lives outside, so you can extend data types even after the fact, even after they have been defined. Now, in Scala we have type classes, but again, not as a special construct. They're implemented using the three constructs I introduced before.
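The pattern-matching-blocks-as-values idea from just above can be sketched as follows (the handler names are mine):

```scala
// Pattern matching blocks are values: PartialFunctions.
val handleInts: PartialFunction[Any, String] = {
  case n: Int => s"int: $n"
}
val handleStrings: PartialFunction[Any, String] = {
  case s: String => s"string: $s"
}

// orElse is an ordinary method composing the two blocks:
// the second is tried where the first is not defined.
val handler = handleInts.orElse(handleStrings)

// You can query the domain without invoking the block.
val handlesDoubles = handler.isDefinedAt(1.0) // false
```

So `handler("hi")` lands in the second block even though it was called on the composed value, exactly as described above.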
Traits, objects, and implicits. Traits and objects, I suppose, we are all familiar with; implicits are something new to the table. So Scala's type classes started out as a poor man's type classes, and then evolved into something much better, as I'll try to show with examples. Before moving to type classes, I'll take a moment to talk about implicits. Implicits happen to be the most misunderstood part of Scala. The reason, probably, is the word itself: whenever you say "implicit", what comes to people's minds is implicit conversions, which, as we all know from our JavaScript horror days, are not a good idea. An integer becomes a string without your permission, and things like that. This gentleman here captured the confusion very clearly: sometimes I wonder if the word "implicit" is part of the problem; it connotes something magical. According to me, a better term for implicit would be evidence, or witness. Whenever you have an implicit parameter on a function or class, what it gives you is compile-time evidence of some fact. We'll see how, with examples. So I'll go to the console. As you can see, I was able to sort the list. So what does the type of this sorted function look like? Let's see. Here's an interesting signature: there's an implicit parameter of type Ordering[B]. What it says is: give me evidence that the element type, whatever it is — in this case Int — can be ordered. Once it picks that evidence up from somewhere, it knows that the element type is orderable, and it all works. Whereas if you try the same on a data type for which this evidence does not exist, it will not work. Let's see an example of that. This one says: no implicit Ordering defined for that type. And this is pretty cool. And the cooler aspect is that these evidences compose. So let's take an example from logic programming. You can make statements like: Rama likes mango.
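The evidence idea from the console demo, sketched (the Person class is mine; `Ordering` and `sorted` are from the standard library):

```scala
case class Person(name: String, age: Int)

val people = List(Person("Sita", 30), Person("Rama", 32))

// Without the implicit value below, people.sorted does not
// compile: "No implicit Ordering defined for Person". The
// implicit is the compile-time evidence that Person is orderable.
implicit val byAge: Ordering[Person] = Ordering.by(_.age)

val inOrder = people.sorted // picks up byAge from scope
```

Note that `sorted` itself never changed: its implicit parameter `(implicit ord: Ordering[B])` simply found the evidence in scope.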
And I can say: Sita likes whatever Rama likes. Therefore we can draw the conclusion that Sita likes mango, correct? Here, "Rama likes mango" is a fact, whereas "whatever Rama likes, Sita also likes" is a rule. Implicits allow you to do exactly that, and I'll show you an example how. So in this case, there was a fact somewhere which said Int is orderable. Now I'm going to create a list of persons, which I want to sort by name first and then by age. Essentially, the rule that you need is: if A can be sorted and B can be sorted, then (A, B) can be sorted. And we have a fact that Int can be sorted, and a fact that String can be sorted, so it all works out. sortBy says: give me any function that goes from A to B, and I will work as long as there is evidence that B can be ordered. So I give it a function which gets me a (name, age) tuple, and it all works out, right? As you can imagine, it would be very painful to define the evidence for every pair type by hand, so you need this kind of rule mechanism, which enables that. Now I'll get back to the presentation itself. Implicits are really a very generic mechanism. I will not go into the details, but I'll just give you a showcase of the kinds of things they can do. You can use implicits to pass regular parameters. Then you can use implicits to pass evidence that A can be seen as — sorry, a question? No, maybe later; I don't have an IDE open. OK, I'll answer that question. There will be an implicit value for Int which says that — in fact, let's actually do that. Sorry about that. So here are the bunch of evidences that you can pass. There is evidence that A can be seen as B, a function from A to B. This is the notorious implicit conversion, which is, by the way, not the most common use case for implicits.
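The facts-and-rules composition described above, sketched with a simple Person class of my own:

```scala
case class Person(name: String, age: Int)

val ps = List(
  Person("Rama", 32),
  Person("Rama", 28),
  Person("Sita", 30)
)

// Facts: Ordering[String] and Ordering[Int] exist in scope.
// Rule (in the standard library): if A and B can be ordered,
// so can (A, B). So sorting by a (name, age) tuple just works;
// the evidence composes, and nothing is written by hand.
val byNameThenAge = ps.sortBy(p => (p.name, p.age))
// List(Person("Rama", 28), Person("Rama", 32), Person("Sita", 30))
```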
There are others, where you can prove A is a subtype of B, and other things like that. And then there is this interesting one called evidence, T of A, right? So T here is a type class, like the Ordering we saw there, and A is your type parameter, okay? So you can say, find me an evidence that there is an implicit value of type T[A], and this will work out. I'm not going into the others. So there is a talk called "There's a Prolog in your Scala", which gives you a very good intuition for these things. So implicit values are like facts, as I mentioned before, and implicit defs are like rules. So the question you asked, right: there is an implicit def somewhere which says, give me an implicit Ordering[A] and an Ordering[B], and I will give you an implicit Ordering[(A, B)], correct? That's why it all works out. So as we have seen with a couple of examples, in Scala, right, type classes are regular traits, and the instances that you define are regular objects. The only thing that is different is that you put an implicit marker in front of them, so that the compiler knows they have to be looked up when it does the implicit lookup. And the advantage of this approach is that if you want to abstract over these things, you can, like with any other regular function or method; it does not require any advanced machinery. Now, the idea originally came from Haskell. However, in Haskell, type classes and instances happen to be second-class citizens. What do I mean by that? They have their own special syntax. They are not really values, and you cannot really abstract over them using regular functions. You cannot pass type class instances to a function, for example. You cannot store them in data structures and stuff like that. They're also global and non-modular. And if you want to abstract over them, you will need more extensions, like ConstraintKinds.
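To make the "type classes are regular traits, instances are regular objects" point concrete, here is a minimal sketch. `Show` is an illustrative type class name, not a standard-library one: the implicit vals are facts, the implicit def for lists is a rule, and because instances are plain values they can be stored, passed around, and summoned with `implicitly`.

```scala
object TypeClassDemo {
  // The type class: just a regular trait.
  trait Show[A] { def show(a: A): String }

  // Instances: regular values, with an implicit marker so the compiler
  // finds them during implicit lookup. These are the "facts".
  implicit val showInt: Show[Int] =
    new Show[Int] { def show(a: Int) = a.toString }
  implicit val showString: Show[String] =
    new Show[String] { def show(a: String) = "\"" + a + "\"" }

  // A "rule": if A can be shown, then List[A] can be shown.
  implicit def showList[A](implicit s: Show[A]): Show[List[A]] =
    new Show[List[A]] {
      def show(as: List[A]) = as.map(s.show).mkString("[", ", ", "]")
    }

  // Abstracting over the type class needs no special machinery:
  def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

  // Instances are first-class: you can summon one and hold it in a val.
  val intEvidence: Show[Int] = implicitly[Show[Int]]
}
```

In Haskell, by contrast, the instance for lists would be an `instance` declaration with its own syntax, and you could not bind it to a name or pass it explicitly.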
If you want to do something more advanced than the regular use cases, you will again need even more extensions, like multi-parameter type classes, functional dependencies, and so on. But since Scala implements them as first-class entities, none of this is necessary. So in Scala, these are first-class citizens: regular objects and classes. They are not global. They are regular values, right? You could imagine, you could put them in a trait, whatever you have. You can abstract over them using regular language features. And the advanced use cases that I mentioned in the previous slide, which require multiple extensions elsewhere, are just regular use cases in Scala. So there is this implicit calculus, which is being worked on by, among others, Mr. Philip Wadler, the guy behind monads, by the way, and they are taking this further. So I have covered a few examples, and I suppose you must have seen a theme here: none of the features mentioned were native, so to speak, or primitive to the language. They were implemented using the primitives that we talked about before. So there were traits, there were objects, and there were implicits, and all of these combined in some interesting ways to give you most of the benefits, I would not say all, most of the benefits that you get from the equivalent features in other languages. And it doesn't end here, actually. There are many, many more such examples. So for those of you who attended Moet's talk, there were signatures and functors, right? Who has seen those signatures and functors in the talk? Okay, so signatures and functors, those can be implemented in Scala, again using the same traits and objects. This is a very small screenshot, but it's the exact same example: this person is implementing a queue using regular traits and objects. So these are the kinds of things. By the way, we even use this module pattern in our code base, and it works out pretty well.
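The module pattern mentioned above can be sketched with the same primitives. This is an illustrative encoding, not the exact code from the slide: an ML-style signature becomes a trait with an abstract type member, a structure becomes an object implementing it, and functor-like code is just a method written against the signature, using a dependent method type.

```scala
object ModuleDemo {
  // Signature: a trait with an abstract type member and operations on it.
  trait QueueSig {
    type Queue[A]
    def empty[A]: Queue[A]
    def enqueue[A](q: Queue[A], a: A): Queue[A]
    def dequeue[A](q: Queue[A]): Option[(A, Queue[A])]
  }

  // Structure: an ordinary object implementing the signature
  // (a naive list-backed queue, for illustration only).
  object ListQueue extends QueueSig {
    type Queue[A] = List[A]
    def empty[A]: Queue[A] = Nil
    def enqueue[A](q: Queue[A], a: A): Queue[A] = q :+ a
    def dequeue[A](q: Queue[A]): Option[(A, Queue[A])] =
      q match {
        case h :: t => Some((h, t))
        case Nil    => None
      }
  }

  // Functor-like client code: works with ANY structure matching the
  // signature, via the path-dependent type m.Queue[Int].
  def drain(m: QueueSig)(q: m.Queue[Int]): List[Int] =
    m.dequeue(q) match {
      case Some((a, rest)) => a :: drain(m)(rest)
      case None            => Nil
    }
}
```

No dedicated module syntax is needed: the queue's representation stays abstract to clients of `QueueSig`, which is exactly what signatures and functors buy you in ML.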
We don't have to have special syntax for functors and signatures; it all works out with this simple pattern. And the advantage of having this is that, as I mentioned before, everything that was available to classes is available to these constructs, and abstracting over them is really easy. So there are other things, but I will not go into those. In summary, Scala comes across as a very complex language. That's a common perception, and it is true from a certain perspective. However, there is a method to this apparent madness. It's not all done just whimsically. And unification can give you a simple mental model to work with. So imagine a language where you have all of these concepts as separate components. Imagine how heavy that might be. It would require more syntax and more special-purpose features, right? In Scala, that's not the case. Somehow things, even the use cases that you have not thought of, will work out quite well. And one of the examples that I can think of myself is functional dependencies, the term I mentioned before, right? That requires an extension in Haskell. However, that's something Scala programmers, when they define type classes, end up doing automatically, without even knowing it is a special thing there. So this is, again, a fake Aristotle quote, which says the whole is greater and simpler than the sum of its parts. Now, I hope you have not drunk too much of the Kool-Aid, because there will be downsides to everything. In computer science, everything is a trade-off. So one man's unification is another man's conflation. By conflation, I mean you are confusing two concepts, which are really distinct, as one. And when you do something like that, right, it's not always an easy fit. When you try something like that, there will be downsides, there will be some friction. So there are ideas from various so-called paradigms which do not work very well together, and there will be some mismatch.
When it shows, it's going to hurt real bad. There are other practical problems that I think are there, like too much rope, and you can have awkward metaphor mixing. So you can have something where you will not be able to tell whether it's an object or a type class or what, really. So if you want to maintain idioms and styles in a large team setting, that will largely be a matter of convention. And I would say this is true of any programming language, right? Any advanced programming language, to be more precise. So even here, right, you could do whatever you want. So we trust you as programmers: we give you the power, and it's up to you to use it judiciously. So yeah, there is no free lunch. There are always trade-offs. And if you want to hear some really genuine criticism of Scala, and I say genuine because there is a lot of criticism of Scala, most of which is frankly bullshit, watch the talks by this guy. This is Paul Phillips, who worked on the Scala compiler for five years, and he hates Scala. So yeah, I'll link you to the talk; do watch it if you want to see the other side. Now the, oh wait, now the takeaways. So there was this nice quote on Twitter some months ago: "Software is overrun with absolutist movements and sorely lacking in nuanced, context-aware analysis." This captures my frustration with the software industry very well. We're adamant on forming these camps, I don't know why. There is a functional camp, there is an object-oriented camp, and they want to fight. And if you look at the masters themselves, you would not find them fighting. They will be having lunch together while these camps are fighting. So please watch this with full attention. Oh, it's not playing, right? So yeah, exactly. "Tod do yeh deewar." For those who don't understand Hindi, it means "break the wall". This is from a famous Indian advertisement for Ambuja cement, where two brothers want to break the wall between them. So yeah, conclusion: keep an open mind. Stay humble.
Forget your paradigms and embrace ideas. And that's all I had, really. Thank you. Links: I will upload this presentation later, and in the presenter notes you will get all the resources, papers, and talks that I referenced. So, a small hiring pitch: we are hiring. If you want to work with Scala, with me and my colleagues, please join. Thank you.