Welcome everybody to the session on functional programming patterns for designing modular abstractions by Debasish Ghosh. We are glad he could join us today. Without further delay, over to you, Debasish. Thanks, Ashish, for the introduction. Hi, I'm Debasish. I see that my talk has been placed in the track Applying FP, and rightly so, since I'll be discussing real-world applications of functional programming with respect to the design of reusable abstractions. So in this talk I'm going to discuss how functional programming enables the evolution of larger abstractions using the power of compositionality and reusability. We all write functions in our daily life as part of our programming exercises, and we all know that functions compose to build larger functions. But they compose only when the types align, and types follow an algebra; we will see what we mean by an algebra. Hence we can say that the secret sauce of program evolution is composition of type algebras. When we are composing functions we are composing types, and by composition of types we mean composition of type algebras. So the key idea behind designing abstractions that compose is to think in terms of the algebraic or functional forms. By algebraic or functional forms we mean the denotational semantics of the program (we will see what that means), and not the operational semantics, the steps which the program goes through during execution. And as we'll see now, this is an idea which originated long back. It's not something that originated in recent times. We have been talking more about functional and algebraic techniques in the last few years, but in reality people have been talking about the algebra of programming for quite some time now. In fact, before we proceed further, let's travel back in time, maybe 40 years.
When this gentleman, John Backus, talked about the drawbacks of statement-oriented programming and professed the idea of the algebra of programs in a functional setting. John Backus received his Turing Award in 1977 and delivered this paper as his Turing Award lecture. The notes in this paper are mine, but look at some of the statements in the abstract of the paper itself. He talks about combining forms for creating programs, and that combining forms can use high-level programs to build still higher-level ones. He is obviously talking about composition, in a style not possible in conventional languages, by which he means the imperative languages that we use today with statement-oriented semantics. He is obviously talking about a programming model at a higher level of abstraction. And what exactly is that? Now, this is a section from his paper where he talks about the various models of programming. He mentions that programs based on operational semantics (we will define this term more clearly shortly) are conceptually not useful. They can work, but they can also be bulky and complex, and may need to be mentally executed for a clear understanding of their semantics. As opposed to the operational models, he also talks about the applicative models, and he mentions that programs written using applicative models can be clear and conceptually useful. He's talking about programming by algebraic composition. He talks about von Neumann models, the ones we use today: today's imperative programming model, which he mentions involves maintaining large complex states, and once you have large complex states it's very difficult to reason about your program. Now, in his paper he gives this example, based on the actual operations that take place during execution: a typical imperative program for computing the inner product of two vectors. He says it is dynamic and repetitive.
One must mentally execute it to understand it. It's very difficult to get the semantics out of such a program just by looking at it. We call this programming at a lower level of abstraction. It quickly goes out of bounds as we start writing bigger programs; maybe for a smaller program we are able to figure out what it does, but as programs grow in size and complexity, it becomes extremely difficult to figure out what exactly is happening. He compares this with another version, which is far more intuitive, because this program is built out of composing the algebra of existing functional forms. He has existing functional forms like Insert, ApplyToAll, and Transpose. So it's much more intuitive to think of the inner product of two vectors as following three steps: transpose, multiply, and then add. These are the three steps needed in order to compute the inner product of two vectors, and this is the applicative model that Backus mentioned in his paper. More intuitive, point-free, and just like we define it in mathematics: if we had to define the inner product of two vectors in mathematics, we would define it exactly like this. And also note that it generalizes to multiple dimensions as well. All these arguments boil down to the fact that algebraic-composition-based program development is far more intuitive than using the operational semantics. This is yet another quote from his paper: he talks about the ability to define the algebra of the program in the programming language itself, and about the transformation of programs using algebraic transformations. So this brings us to the concept of algebraic thinking. When we talk about algebraic thinking, it's much more related to the denotational semantics of the program. Denotational semantics is a heavy term.
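Since the slides aren't reproduced in this transcript, here is a sketch of Backus's applicative inner product in Scala. This is a hypothetical reconstruction: Backus used his FP notation with forms like Insert (fold) and ApplyToAll (map); here they are approximated with standard-library combinators, and the function names are mine.

```scala
object InnerProduct {
  // Backus's applicative formulation: transpose, multiply pairwise, then sum.
  // transpose pairs up corresponding components, map(_.product) is ApplyToAll
  // with multiplication, and sum plays the role of Insert with addition.
  def innerProduct(v1: List[Int], v2: List[Int]): Int =
    List(v1, v2).transpose
      .map(_.product)
      .sum

  def main(args: Array[String]): Unit = {
    // 1*4 + 2*5 + 3*6 = 32
    assert(innerProduct(List(1, 2, 3), List(4, 5, 6)) == 32)
    println(innerProduct(List(1, 2, 3), List(4, 5, 6)))
  }
}
```

Note how the definition reads as a pipeline of algebraic forms rather than a loop over mutable state, which is exactly the contrast Backus draws with the imperative version.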
What it means is that the denotational semantics of programs treats program elements as abstract mathematical objects, just like we would think in mathematics, at a higher level of abstraction. The core idea is that the semantics these objects offer is completely independent of their respective implementations. So here we are seeing a clear delineation between the algebra and the implementation. It's far more intuitive to think in terms of the interfaces, as Snowman was telling in the last talk, or algebras, instead of thinking in terms of the implementation. So denotational semantics leads to algebraic thinking. That's what I will try to establish through this talk. On the other hand, operational semantics is based on the program implementation, how the program actually gets executed, which, as we will see, is not very helpful for reasoning about the program or intuitively understanding what the program does. So as an example, let's look at the algebra of a very familiar structure from the Scala standard library: the algebra of an Option. An Option is an optional element; the element can be there or not. That's how we abstract the algebra of an Option. The type parameter A we call the carrier type of the algebra. And the first property is the introduction form. What it does is lift an existing value into the algebra of an Option. If we write Option(a), we get an Option with the semantics that it has an element. And if we write Option.empty, it means the element is empty; it doesn't have anything. So this is how we introduce an existing data element into the algebra of an Option. Next is the set of combinators. There are lots of combinators available as part of the algebra of Option, and these combinators help us transform various structures.
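The Option algebra just described, along with the combinators, eliminators, and laws discussed next, can be sketched as follows. The structure (introduction forms, combinators, eliminators, laws) follows the talk; the specific values are illustrative, not from the slides.

```scala
object OptionAlgebra {
  def main(args: Array[String]): Unit = {
    // Introduction forms: lift a value (or nothing) into the algebra
    val some: Option[Int] = Option(42)
    val none: Option[Int] = Option.empty[Int]

    // Combinators: build larger abstractions out of smaller ones
    val doubled = some.map(_ * 2)
    val chained = some.flatMap(n => Option(n + 1))

    // Eliminator: get the value out of the algebra
    val result = doubled.getOrElse(0)

    // Laws: empty short-circuits both map and flatMap
    assert(none.map(_ * 2) == None)
    assert(none.flatMap(n => Option(n + 1)) == None)

    assert(result == 84)
    assert(chained == Some(43))
    println(result)
  }
}
```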
This allows transformation within the data structure, the algebra of Option, and we call them combinators. They help us build larger abstractions out of smaller ones. Next we have the eliminators, which are useful when we need to get the value out of the algebra, out of an Option. Next we have some laws. When we talk about an algebra, one of the things that we need are the laws which the algebra needs to honor. And here are some laws: short-circuiting of flatMap and map in the empty case. So any algebra needs to have some binding laws for it to operate with the correct semantics. So this is what we mean by an algebra: it has the carrier type, the actual type parameter; it has introduction forms; it has combinators; it has eliminator forms; and it has laws. And we saw this with an example of a very common abstraction from the Scala standard library. Just to reiterate what we said earlier: when we think in terms of the combinators, the algebras, it becomes much more intuitive. Thinking in terms of map, flatMap, fold, which are the combinators and eliminators of Option, gives us much better intuition about what the program is doing, instead of thinking in terms of the operational semantics, which are the implementations like Some and None. We will see examples very soon. Now, this is an example of an entire module with an algebra; so far we saw just one algebraic data type. This is a module which follows an algebra: the module of a monoid. A monoid, as part of its algebra, has two functions: one is zero, the identity, and the other is the combine function, which is binary and associative. Along with this we have some laws: since we cannot ensure through the types that the function is associative, we have this associativity law, and we have these identity laws. So the module has an algebra, and the algebra needs to honor a set of laws.
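The monoid module can be sketched like this. The shape (zero, combine, plus the laws stated as checkable properties) follows the talk; the actual slide code may differ in detail, and the law-checking helpers are my own addition for illustration.

```scala
object MonoidAlgebra {
  // The monoid module's algebra: an identity element and an
  // associative binary operation over the carrier type A
  trait Monoid[A] {
    def zero: A
    def combine(x: A, y: A): A
  }

  val intAddition: Monoid[Int] = new Monoid[Int] {
    val zero = 0
    def combine(x: Int, y: Int) = x + y
  }

  // The types cannot enforce associativity or identity,
  // so we state them as laws and check them on sample values
  def associativityLaw[A](m: Monoid[A])(x: A, y: A, z: A): Boolean =
    m.combine(m.combine(x, y), z) == m.combine(x, m.combine(y, z))

  def identityLaw[A](m: Monoid[A])(x: A): Boolean =
    m.combine(m.zero, x) == x && m.combine(x, m.zero) == x

  def main(args: Array[String]): Unit = {
    assert(associativityLaw(intAddition)(1, 2, 3))
    assert(identityLaw(intAddition)(42))
    println("monoid laws hold for intAddition on these samples")
  }
}
```

In practice these laws would be checked over many generated inputs with a property-based testing library like ScalaCheck, as the speaker mentions later in the Q&A.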
This is a complete example of that. Now, keep in mind the algebra of the module: we haven't yet talked about the implementation of some specific monoid for a specific data type; we have just skimmed across the algebra of the monoid. Now, this is interesting. Here's an example of how we can use our algebraic thinking to implement a combinator by reusing existing algebras. We use the algebras of foldLeft and Monoid to implement foldMap. And look at this combinator: this is the complete implementation of foldMap. We have been able to figure out the complete implementation of a combinator just by using the existing algebras of fold and Monoid. We don't care about the implementation of foldLeft or the specific type of monoid; we just use the algebras based on the contracts they publish. Now, this is the power of algebraic thinking. And once we understand foldMap, we can think of MapReduce just as a foldMap. All of us know what MapReduce does, right? And how can we think of MapReduce just as a foldMap? Because the algebras of MapReduce and foldMap connect together very nicely in our intuition. The types align. It's reuse at the cognitive level. Remember what Backus mentioned about such an applicative model of programming: programs can be clear and conceptually useful. This is a very good example of that. We are reusing the existing algebras of Monoid and Foldable, and we can intuitively figure out how MapReduce will look. And indeed, this is the complete implementation of the mapReduce combinator. People who have written MapReduce using Java or low-level imperative programs will appreciate this implementation more, because it gives you a clear understanding of what MapReduce does, only through the composition of a few algebraic combinators. As I was mentioning, this is a complete MapReduce program abstracted as a functional form.
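The foldMap-to-mapReduce step described above can be sketched as follows. This is a reconstruction, not the slide code: I redefine a minimal Monoid locally, use List's foldLeft in place of a general Foldable, and the names are illustrative.

```scala
object FoldMapDemo {
  trait Monoid[A] {
    def zero: A
    def combine(x: A, y: A): A
  }

  val intAddition: Monoid[Int] = new Monoid[Int] {
    val zero = 0
    def combine(x: Int, y: Int) = x + y
  }

  // foldMap, derived purely from the algebras of foldLeft and Monoid;
  // we never look inside any concrete monoid implementation
  def foldMap[A, B](as: List[A])(f: A => B)(m: Monoid[B]): B =
    as.foldLeft(m.zero)((b, a) => m.combine(b, f(a)))

  // MapReduce is just foldMap: map each element with f,
  // reduce the results with the monoid's combine
  def mapReduce[A, B](as: List[A])(f: A => B)(m: Monoid[B]): B =
    foldMap(as)(f)(m)

  def main(args: Array[String]): Unit = {
    // total length of all words: 10 + 11 = 21
    val total = mapReduce(List("functional", "programming"))(_.length)(intAddition)
    assert(total == 21)
    println(total)
  }
}
```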
This is programming at a higher level of abstraction, and it is derived intuitively from the algebras of a fold and the monoid, just to reiterate. So now we can say that building and understanding higher-order abstractions is much more intuitive using algebraic than operational thinking. Algebraic thinking scales. We started with the algebra of a type, and you can scale to the algebra of a module. And ultimately, as we will see, it scales to the algebra of a complete domain model of your program. One important point of this algebraic thinking is that it separates the what from the how of our program. The moment we are looking at the algebra, we are looking at the what of the program: what the module proposes, what the algebra publishes, what the algebra says; we are not concerned about the implementation of our program. Snowman was talking about the difference between programming to the interface and programming to the implementation. This is yet another example where, in functional programming, we are programming to the interfaces and not programming to the implementation. Now let's look at some of the recipes for an algebra in a statically typed functional programming language. Recipes means: how exactly do we conceptualize an algebra? How do we think of designing an algebra? What are some of the things we need to take care of when we are designing an algebra? As we saw, an algebra needs to be polymorphic. When we talk about an algebra, it's never a concrete value; it's always something that generates values. So we call an algebra generative, through the application of types as parameters. This is also known as parametric polymorphism. An algebra needs to be polymorphic on its carrier type.
We talk about the algebra of monoids that generates a monoid, say for summation of integers, for product of integers, or for any element for which you can define a monoid: anything you can combine and for which you can define a zero element. I mentioned before that an algebra needs to be lawful; I'm not going into the details now. Next, an algebra needs to compose. We saw this with the example of foldMap: by composing the algebras of a monoid and a fold, we were able to come up with the complete implementation of foldMap. So algebras compose. Restricted: this is an interesting and very important point. Look at this example of mapReduce. We have used a Foldable. Instead of Foldable, we could have used the algebra for List, right? Because List is an implementation of Foldable. But all we are trying to do here is fold, and Foldable is just the least powerful abstraction that does it. We don't need the additional power that List offers, and we don't expose the additional power of List in our mapReduce implementation because we don't need it. So the algebra has to be minimalistic, or restricted. With List we have lots of other capabilities besides fold, the implementation surface area goes up, and the program becomes more error-prone and more difficult to reason about. So this is one of the important principles of any design: use the least powerful abstraction that does the job. Implementation independence: note that till now we have not spoken about implementations; it's all about contracts, types, and type algebras. In mathematics also, if you have two functions, one f from A to B and the other g from B to C, we should be able to reason that we can compose f and g algebraically to build a larger function h from A to C, irrespective of the implementations of f and g. Open: an algebra needs to be open.
For example, for this repository algebra, we can have multiple implementations, so an algebra must be open to interpretation, to implementation. Here is an example from some real-world coding that I did: the implementation of a trading system. Look, the module is parameterized on a carrier type M, which we call the effect type; I will explain what effect means. So this effect type parameterizes the trading algebra. In order to implement the trading functionality we need these three functions; we are just talking about the algebra of the module, we don't have any implementation. We just say that we have orders, which generates a list of orders; we have execute, which executes the orders in the market; and we have allocate, which allocates the executions amongst the client accounts to generate client trades. What is an effect? An effect is an algebraic way of handling a computational effect like non-determinism, exceptions, input/output, continuations, writing to databases, all of these things. Side effects don't compose, but an effect is an algebraic abstraction, so effects compose, as we will see in the course of this talk. We have some other examples of effects; each of these represents a certain type of effect, and many of them relate to IO. Instead of actually reading from the source, we abstract the computation into a type constructor Reader, which reads from the environment and produces an A. It doesn't do the actual reading within the Reader; the actual reading will be done at the end of the life cycle, at the end of the program, when we submit it to our runtime system. And there are many more examples of effects, but the important point to realize is that an effect is also a pure value.
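The trading algebra described above can be sketched like this. The domain types and method signatures are hypothetical reconstructions (the talk's real model is richer); the essential point is that the module is parameterized on an effect type M[_] that carries no semantics yet.

```scala
object TradingAlgebra {
  // Hypothetical domain types, for illustration only
  case class Order(id: String)
  case class Execution(orderId: String)
  case class Trade(account: String, execution: Execution)

  // The algebra of the trading module, parameterized on the effect type M.
  // M is opaque here: no semantics is attached, only type signatures.
  trait Trading[M[_]] {
    def orders(frontOffice: List[String]): M[List[Order]]
    def execute(market: String, os: List[Order]): M[List[Execution]]
    def allocate(accounts: List[String], es: List[Execution]): M[List[Trade]]
  }

  def main(args: Array[String]): Unit = {
    // A trivial interpreter with the identity effect,
    // just to show that the algebra stands on its own
    type Id[A] = A
    val t: Trading[Id] = new Trading[Id] {
      def orders(fo: List[String]) = fo.map(Order(_))
      def execute(mkt: String, os: List[Order]) = os.map(o => Execution(o.id))
      def allocate(as: List[String], es: List[Execution]) =
        for { a <- as; e <- es } yield Trade(a, e)
    }
    val trades = t.allocate(List("acc1"), t.execute("NYSE", t.orders(List("o1"))))
    assert(trades == List(Trade("acc1", Execution("o1"))))
    println(trades)
  }
}
```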
So this is the representation of an effect: there is a clear delineation between the result that the effect computes and the effect itself, the F. A is the answer that the effect computes, and F is the additional stuff modeling the computation, which is the effect itself. I'm not going into the details of this. Side effects are bad, but we need to handle side effects as part of our program, and effects are one of the ways to handle them: database writes, writing to message queues, all of these things. You know them, right? And side effects are not modular; effects bring in modularity. This is one of the important aspects we need to comprehend: side effects don't compose. Now, one important point that I would like to highlight here is that we are not using any specific type of effect. For us, the effect here is M, and we haven't given any semantics to this effect, what the effect will do, because the algebra doesn't need the semantics. As I was telling you, the algebra is only for the type signatures; the algebras are only for advertising the contracts they publish. So the effect types will offer compositionality even in the presence of side effects when we attach a semantics to each of these effect types. We call this effect type an opaque type: it doesn't have any denotation till we give it one, and the denotation that we give to F (by denotation I mean the semantics) depends upon the semantics of compositionality that we would like to have for our domain model behaviors. So how would we like to compose the three functions that I mentioned as part of the trading algebra in order to generate the trades at the end of the program? It has a specific semantics, and that's the semantics we would like to give to the effect type as part of the implementation.
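The Reader example mentioned above makes the "effect is a pure value" point concrete. This is a minimal sketch of a Reader effect written from scratch (the talk presumably used a library version); the Config type and field names are illustrative.

```scala
object ReaderEffect {
  // Reader models "compute an A from an environment R" without
  // performing any reading: constructing and composing Readers
  // are pure operations, and nothing happens until run is called
  case class Reader[R, A](run: R => A) {
    def map[B](f: A => B): Reader[R, B] =
      Reader(r => f(run(r)))
    def flatMap[B](f: A => Reader[R, B]): Reader[R, B] =
      Reader(r => f(run(r)).run(r))
  }

  case class Config(host: String, port: Int)

  val host: Reader[Config, String] = Reader(_.host)
  val port: Reader[Config, Int]    = Reader(_.port)

  // Composing effects is composing pure values
  val url: Reader[Config, String] =
    for { h <- host; p <- port } yield s"http://$h:$p"

  def main(args: Array[String]): Unit = {
    // Only at the end of the program do we supply the environment and run
    val result = url.run(Config("localhost", 8080))
    assert(result == "http://localhost:8080")
    println(result)
  }
}
```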
Now this is the complete generate-trade program: we generate trades using the three functions that I mentioned above. And it uses sequential compositionality: we have now given the effect type a semantics, we have specialized the effect to be a monad. And when we have a monad, we have the semantics of sequential compositionality built in as part of the semantics of the monad. We have the algebra for the trading module, and we know that we need to compose the methods sequentially in order to generate trades. Remember the use case: first we have the orders, then we need to execute them on the market, and then we need to allocate. And if any of these fail, we need to fail the entire chain; we can't have partial orders being generated. So it's the perfect example where we can specialize the effect to be a monad. And this enables us to implement the complete program to generate trades out of composing smaller behaviors. Each of these, the orders, executions and trades, are smaller behaviors which compose together under a monadic composition in order to generate the trades. So yet again we are seeing an example of composition of the algebra of a monad with our domain algebra of trading. We are injecting the algebra of trading as part of this function, and we are using the algebra of the monad, and this composition leads us to define a higher-level function called generateTrade. Now, here is one other interesting thing: our module, our algebra, will ultimately do some IO. Upfront, we could have committed to the IO monad instead of making it parametric. But this is the beauty of parametricity: we can just say that we need a monad M, and suddenly the only operators available to us are pure and flatMap.
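The generate-trade composition can be sketched as follows. This is a reconstruction under stated assumptions: I define a minimal Monad algebra by hand (rather than using a library like Cats), the domain types are hypothetical, and for the demonstration I interpret M as Option so that failure short-circuits the chain.

```scala
object GenerateTrade {
  // Minimal monad algebra: pure and flatMap are all that the
  // program contract needs; nothing more is available to us
  trait Monad[M[_]] {
    def pure[A](a: A): M[A]
    def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
  }

  case class Order(id: String)
  case class Execution(orderId: String)
  case class Trade(account: String, execution: Execution)

  trait Trading[M[_]] {
    def orders(frontOffice: List[String]): M[List[Order]]
    def execute(market: String, os: List[Order]): M[List[Execution]]
    def allocate(accounts: List[String], es: List[Execution]): M[List[Trade]]
  }

  // Sequential composition: orders, then execute, then allocate.
  // In a failure-capable M, any failing step fails the whole chain.
  def generateTrade[M[_]](T: Trading[M], M: Monad[M])(
      frontOffice: List[String], market: String, accounts: List[String]): M[List[Trade]] =
    M.flatMap(T.orders(frontOffice)) { os =>
      M.flatMap(T.execute(market, os)) { es =>
        T.allocate(accounts, es)
      }
    }

  def main(args: Array[String]): Unit = {
    type M[A] = Option[A]
    val monad = new Monad[M] {
      def pure[A](a: A) = Some(a)
      def flatMap[A, B](ma: M[A])(f: A => M[B]) = ma.flatMap(f)
    }
    val trading = new Trading[M] {
      def orders(fo: List[String]) = Some(fo.map(Order(_)))
      def execute(mkt: String, os: List[Order]) = Some(os.map(o => Execution(o.id)))
      def allocate(as: List[String], es: List[Execution]) =
        Some(for { a <- as; e <- es } yield Trade(a, e))
    }
    val trades = generateTrade(trading, monad)(List("o1"), "NYSE", List("acc1"))
    assert(trades == Some(List(Trade("acc1", Execution("o1")))))
    println(trades)
  }
}
```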
So we need only these two functions, pure and flatMap, which are available from the monad algebra; we don't need the additional power of IO to write the program. The program contract only needs these two functions, pure and flatMap, and they're available as part of the monad itself. With IO, we could have done anything in the implementation. If we had used IO directly, then we could have used the additional power of IO to do many more things which we don't need to do. But that makes our program error-prone, because someone may use the additional power of IO to do some malicious things as part of the program. With the monad contract as the effect type, you cannot do this. Once again, an example of making the proper choice of algebras and using the least powerful abstraction that works. Now here is an example of one more algebra. It has the same effect type M, and it does some accounting. So at the end of the trading process, if we want to do some accounting, we want to post the balances as part of the client account: how many trades she has right now. We can directly plug in this algebra as part of our program. The only two things we did are: we injected the algebra as part of the signature of the function, and we could directly use the postBalance function, because they're all chained together, all composed together, using the same effect type and the same kind of algebra. So going from generating trades to generating trades and balances is just a minuscule change, a change which is very intuitive, a change which makes sense just by looking at the implementation of the program. So this is another example of composition of multiple domain algebras. Now, we have intentionally kept the algebra open for interpretation. Why did we do this?
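The composition of two domain algebras sharing the same effect type can be sketched like this. Again a hypothetical reconstruction: the Trading and Accounting signatures are simplified stand-ins for the talk's model, and the Monad algebra is hand-rolled for self-containment.

```scala
object TradeAndBalances {
  trait Monad[M[_]] {
    def pure[A](a: A): M[A]
    def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
  }

  case class Trade(account: String, amount: BigDecimal)
  case class Balance(account: String, amount: BigDecimal)

  // Two domain algebras sharing the same effect type M
  trait Trading[M[_]]    { def generateTrades(orders: List[String]): M[List[Trade]] }
  trait Accounting[M[_]] { def postBalance(trades: List[Trade]): M[List[Balance]] }

  // Going from trades to trades-and-balances is a minuscule change:
  // inject the extra algebra and chain one more step in the same effect
  def tradesAndBalances[M[_]](T: Trading[M], A: Accounting[M], M: Monad[M])(
      orders: List[String]): M[(List[Trade], List[Balance])] =
    M.flatMap(T.generateTrades(orders)) { ts =>
      M.flatMap(A.postBalance(ts)) { bs =>
        M.pure((ts, bs))
      }
    }

  def main(args: Array[String]): Unit = {
    type Id[A] = A
    val monad = new Monad[Id] {
      def pure[A](a: A) = a
      def flatMap[A, B](ma: Id[A])(f: A => Id[B]) = f(ma)
    }
    val trading = new Trading[Id] {
      def generateTrades(os: List[String]) = os.map(o => Trade(o, BigDecimal(100)))
    }
    val accounting = new Accounting[Id] {
      def postBalance(ts: List[Trade]) = ts.map(t => Balance(t.account, t.amount))
    }
    val (ts, bs) = tradesAndBalances(trading, accounting, monad)(List("acc1"))
    assert(bs == List(Balance("acc1", BigDecimal(100))))
    println(s"${ts.size} trades, ${bs.size} balances")
  }
}
```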
We did this because we could have multiple implementations of our algebra. One of the most convincing use cases is unit tests: for unit tests we may want to use simpler data structures, like in-memory ones replacing the database tables, or simpler monads, like the identity monad, instead of the more complex ones like Future or IO. So using identity monads and in-memory data structures we can write simpler interpreters, which will suffice during unit tests. This is one of the patterns which we discussed earlier, keeping the algebra open, and this is one example of it. And to define an interpreter, here is an example of how we can interpret a domain algebra using the services of some additional algebras. Note that we inject these new algebras to enrich our interpreter, because we will be needing the repositories; but we need only the algebra of the repositories, we are not yet concerned about how the repositories are implemented. Because we are injecting the algebra of the repository, this interpreter will work both with in-memory data structures and with database tables, since we are injecting an algebra from the environment. We can have various variants of the repository: an in-memory repository, or a repository implemented using a library like Doobie for relational databases. When we finally invoke our program, when we invoke our interpreter, we will then pass the actual implementations of the algebras: the trading interpreter and the accounting interpreter, which are the interpreters for our domain algebras trading and accounting. And we can either pass the interpreters for the repositories explicitly, or we can declare them as implicits; there are various techniques, and you can use one of them.
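An identity-monad, in-memory test interpreter of a repository algebra can be sketched as follows. The repository algebra and Account type are hypothetical examples; the point is that the same algebra could equally be interpreted against a relational database, for instance with a library like Doobie.

```scala
object TestInterpreter {
  // The identity "monad": the simplest effect, good enough for unit tests
  type Id[A] = A

  case class Account(no: String, name: String)

  // The repository algebra: open for interpretation
  trait AccountRepository[M[_]] {
    def store(a: Account): M[Account]
    def query(no: String): M[Option[Account]]
  }

  // An in-memory interpreter, sufficient for unit tests; a production
  // interpreter would target real database tables behind the same algebra
  class InMemoryAccountRepository extends AccountRepository[Id] {
    private var accounts = Map.empty[String, Account]
    def store(a: Account): Id[Account] = { accounts += (a.no -> a); a }
    def query(no: String): Id[Option[Account]] = accounts.get(no)
  }

  def main(args: Array[String]): Unit = {
    val repo = new InMemoryAccountRepository
    repo.store(Account("a-123", "Alice"))
    assert(repo.query("a-123") == Some(Account("a-123", "Alice")))
    assert(repo.query("missing") == None)
    println(repo.query("a-123").map(_.name))
  }
}
```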
The important part is that only right at the end of the program, where we are invoking the function, have we committed ourselves to specific implementations; till that point in time, we have been using only the algebras. Some of the takeaways from what we discussed. Algebras scale, from one single data type to an entire bounded context (by bounded context I mean the context of the domain, the domain algebra). Algebras compose, enabling composition of domain behaviors; we saw examples of this. Algebras let you focus on the compositionality without any context of implementation: we haven't discussed any of the implementations, and yet we have been able to cover a meaningful part of our domain algebra and of the functionality we wanted to discuss. Statically typed functional programming is programming with algebras, algebras of types. Abstract early, interpret as late as possible: we saw that the interpreters came late in the life cycle, at the final stage only; until then we had been dealing with abstractions at the algebraic level. Abstractions and functions compose only when they are abstract and parametric: we saw the usefulness of parametricity when we kept the effect types opaque and only incrementally committed to the semantics of the effect as and when required. Modularity in the presence of side effects is a challenge; hence we have effects. Effects as algebras are pure values that can compose based on laws. So we handle side effects using pure algebraic effects; the two terms sound a bit similar, but side effects and effects are entirely different things. And honor the law of using the least powerful abstraction that works. There are many code fragments I see today where people use a monad when they could have done with applicatives, or use monads when they could have done with semigroups.
So this is one of the principles: use the least powerful abstraction that works. It makes the surface area of your implementation much lower, and your program becomes less error-prone. So this is all I had to say, and I'd love to take some questions right now. Yes, so we don't have any questions as of now in the Q&A section. Attendees may type any questions they have, and Debasish will try to answer them. Yes, Mikhail has put a question. The question is: have you played with any of the languages such as APL or J? No, unfortunately not; I would love to. I have been a regular attendee at Functional Conf, and every year we have great sessions on array languages. I have just played around a bit with them, but no serious work using the array languages. We have another question. The attendee is asking: could you please share what you think are the weaknesses of this approach? One of the weaknesses is that it has a fairly steep barrier to entry, because you need to internalize many concepts in order to use pure functional programming. But besides that, once you are in this paradigm and once you are used to it, I think you will find that it's extremely useful. You can write programs which are much less error-prone, much more useful, and it's easier to prove some of the static correctness of the program, at least at the type level. And if you follow the discipline of writing exhaustive tests using property-based testing, then you can increase the surface area of correctness of your program through type-level assurances plus property-level assurances using property-based testing.
All right, we have a hand raised. Please unmute yourself and then you can speak. We can't hear you. All right, we also have another question: is tagless final conceptually similar to algebraic patterns? Yeah, actually, tagless final is nothing but one of the examples of using algebraic thinking, because there also you need to think in terms of the algebra. There are other techniques, like free monads; both of them are sort of complementary, and one can be combined with the other, or one can be converted to the other. But yeah, tagless final is definitely one of my favorites as well, and I use it a lot in my day-to-day programming. We have one question from Joe Fish: when writing library functions it makes sense to have higher-abstraction types for generic use cases with various implementation types, but does it make sense to use higher abstraction for non-library, private use cases where just the implementation type is required? Actually, using good abstractions is really a habit, and I don't really differentiate much between libraries and application programs, because in application programs you are modeling a domain. When you are writing a library you are modeling a much more general concept, maybe, but say you are designing an application for the financial markets, the financial domain: you have an algebra to model, right? So when you have an algebra to model, why not ensure a separation between the algebra and the implementation? It's much like programming to the interfaces, as Snowman was discussing during his talk earlier in the day. So good practices can be used irrespective of whether you are designing libraries or application programs, and personally I don't find any difference between them. All right, we have one more question: when is your next book coming?
Oh, actually, yeah, I would love to write a more recent version of my last book, Functional and Reactive Domain Modeling, but frankly speaking I'm not getting any time to work towards that. I would love to, because that book came out in 2016, I guess, and it has been six years, which is a lot; lots of things have changed, lots of things have advanced, and I would like to touch upon the more recent stuff, but I don't have time right now, so maybe sometime next year. Also one more question: you mentioned the laws that have to be followed; are there languages that verify that the laws are really being honored? Yeah, actually there are theorem provers where you can prove the laws. There are some dependently typed languages where you can encode lots of those laws as part of the type system. And when we are talking about the mainstream ones like Haskell and Scala, not all of the laws can be encoded as part of the type system, but of course you can encode the laws as algebraic properties and use libraries like QuickCheck and ScalaCheck to generate test cases which will test that those laws are being honored by your implementation. Alright, we don't see any new questions coming up. Thank you, Debasish, for sharing your experience with us today.