So, hey everyone, good morning, welcome to this session. Okay, quick introduction: my name is Ankit. I'm one of the co-founders of ClearTax. For the longest time — over 10 years of coding — I've been a big fan of functional programming. But I never really had a chance to work with a functional language in production. At ClearTax, when we were launching a new product, that was when we got a chance to use F#. I'm just going to talk about that story. But I would not call myself a great functional programmer yet; I'm still learning. It's a very long journey. At a high level, the talk is structured like this: I'll first talk about the product we were building — what the use case was, and why we ended up choosing F#, what the reasons were. Then, during the initial development period, what things we found useful and what gotchas or roadblocks we hit. And as time went on, what did we learn, and how did we improve the code base? I'll summarize with some learnings as well. So the product I'm talking about is a site called ClearTDS.com. This is a B2B application for filing TDS returns. A very short introduction to the product: any time you get a salary, you have some tax deducted from it, right? So the company, the employer, has to do a TDS return filing with the government. This is a quarterly filing, and it is very critical in the sense that they have already deducted this money from your account. So if they don't report it to the government properly, it's basically a huge hassle for everyone involved. And not just from salary, but any sort of transaction. For example, if you are freelancing and you get money on contract or something like that, you would probably have TDS deducted there as well. So this product was the first SaaS-based product for filing TDS returns in India.
And we built it and launched it in early 2014. This is before we got into YC, so back then the team was very small — it was just two, three of us. We decided to build this because there was a huge market need: there were no good tools for this problem. This was the second product we launched, while ClearTax as a product was also there. So I just wanted to talk about the requirements of this product. A TDS return filing software cannot be wrong, because if you make a mistake in a financial software, in a tax software, it's a huge deal. Especially in TDS, because one TDS return may cover thousands of people, and one mistake will lead to so many people suffering. So that is the number one requirement: accuracy. The problem with accuracy here is that the TDS return format used to file a return with the government is a very old, arcane file format — sort of from the mainframe days of programming. It's a flat file format with a lot of weird inconsistencies; we'll go into that. And it's also not very well specified or well documented. The next requirement is flexibility, because it's a quarterly cadence. Every quarter you have to do a new return filing, and the rules typically change: new sections added, new rules added, data format changes over time. This is also very critical because, at least back then, every quarter they would announce a new file format, or changes to the file format, and it would be immediately applicable. It is released on Monday and applicable from Monday — you have to add support as soon as possible. So that was a huge, huge problem. And finally, the other constraint we had: since this was the second product we were building, we were not able to keep it as a full-time focus for everyone involved. We had very few people; we had to move between products.
So after the initial launch phase, for a long period of time, it had a quarterly sort of wake-up: okay, next quarter is coming in, you grab whatever changes, go into the code, make the relevant changes, do it correctly, and get out. So it was a product that needed to be very simple, so that you could get the context back into the system. If it is very complicated in nature, then you would spend some time — maybe a week — just understanding what you had done last quarter. So lack of engineering resources — number of people, lack of time — all of these were among the constraints we had. The single largest constraint was the file format. I've actually put a screenshot of the file format here. It's a text-based format. It has different types of lines; each line has some mode, and each mode has different meanings. The meaning could change based on time, in the sense of which quarter the return is for, or which type of return it is: salary return, non-salary return, and so on. And correction returns also have different formats. So it's a very weird format. The way they update the format is also strange: they had extra fields in the original format specification, and whenever new fields are added, they'll say an extra field now means this. And if there are no extra fields left, then some of the old fields that are no longer needed may be replaced with the new one, and so on — since the number of fields cannot change. So yeah, this is the format. It was the main blocker for us, because we wanted to be able to do this very accurately; it is not something you can make a mistake on. And we tried a lot of different things to figure out the good option for this. We tried using very procedural-style code to read it and write it, but that was getting complicated. We tried to do DSLs, but we didn't really have time to invest in that.
We tried to use UI-based mapping tools — tools on top of the format — but again, it was not working out very well. While doing this, I actually came across a feature of F# called type providers, which really, really helped me at that time. So, okay, type providers. It's a feature that is F#-specific; right now no other language has this, as far as I know. It's a very interesting way of doing metaprogramming at compile time. You give the compiler a sample of some data format — let's say it is JSON or XML or CSV — and you have a type provider for that specific format. In our case, it was a CSV file. So you have a type provider which understands what CSV files are, what delimiters are, and so on. You give it a specific example: this is a CSV file, I want to read it. At compile time, it will look at this data and generate types around it, saying: oh, this CSV file has five columns, this is a date, this is a string, this is a number, and so on. It creates a type around it automatically. And it basically also gives you parsing for free, in the sense that it will automatically grab the field names and everything from the header and generate a type. So one second, I'll just do a quick demo of this, because it's a very, very interesting feature. Can everyone see this? Okay. So I'm doing this in Visual Studio right now — don't get alarmed. This is an F# interactive script: I can write any code over here and send it to the interpreter, which will evaluate it and put the result over here. Is this too small? Can anyone...? Okay, all right. One second, let's see if it is required. Okay. So I've done some setup over here: imported the library for a JSON provider. What I want to do is, there is this sample file I have, which is a GitHub repository's
JSON. If you hit the GitHub API for a single repository, this is the format that comes back. It's a nested structure with a lot of information. This is for the home group project. It is nested in the sense that you have owner as a nested object, you have a lot of these things, and then you have some other nested objects again. If you are going to build a GitHub client, for example, you need to support this format. The traditional approach would be: you write this by hand, you write a class, it converts to this, and so on, and you annotate it with the field names and everything. Okay. So what I'm doing over here is just saying that this GitHub provider is a JSON provider of this sample file. I'm giving it the path to the sample file I have — and this path can actually be a URL; you can hit a remote URL and grab a sample at compile time. The type provider also gives you some convenience methods: from the sample data, it can give you the data back, just to see what it is. So I'm getting the sample data back from this type provider. It is a fully typed object, so I can actually look at the fields inside the JSON, and even the nested fields are properly mapped into an object. So if I evaluate this — the keyboard shortcut was not working; it worked now. So from the sample, I can get that URL back. But I can also load a new instance of this. For now, I'm just taking a text file and reading it into a string. This one is a different file — this is for the F# project. Okay. So I'm reading this second JSON and giving it to the parser. If I do this, the name over here is the F# one. So it basically took another instance of this type of JSON and parsed it into the proper, fully typed object. And one very interesting implication of this is how you can use it further in your application. Let's say that tomorrow GitHub's API specification changes.
Something changes. So let me go to the sample. Let me just change owner to author. Okay, I've changed the sample file — GitHub has changed the specification for some reason. So when I re-evaluate this... let me probably restart the interpreter. Correct, it's doing the right thing. What would happen is your code compilation would fail. If you're using the old name and you recompile, it will give you a compilation error wherever the old name was used throughout the application. So for changing formats, the compiler will directly say: okay, these five places are using the old name — and you go and change them. So this was a JSON provider, but there are built-in providers for CSV, XML, JSON, Excel files — a lot of different data types. There are even providers for SQL databases and so on. So it actually simplified parsing a lot, because the official documentation had some format: I could take that specification, put it in, and map it to my model very easily in the code. I could actually see the exact name that is defined in the format, so I can trace it clearly. And the code readability was also great in this case. So we built the file handling part in F#. We tested it a lot. We had a simple strategy: you take sample files from some other software or the government software, put them into our system, parse them, generate output from the parsed data, do a line-by-line, field-by-field comparison, see which fields differ, and go back and fix it. It was a very iterative approach and it worked very well. A lot of this was automated with F#. So the initial goal was to use F# mainly for this data handling part, because we had found a great tool for it. But as we started working on it, we really liked the language, and it was working really well for our problem domain. So we ended up writing the whole product in F# at that time.
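The demo above can be sketched in a few lines. This assumes the FSharp.Data package is referenced; the sample file names here are hypothetical stand-ins for the files used on screen.

```fsharp
open FSharp.Data

// At compile time the provider inspects the sample and generates a full
// set of types — nested objects like "owner" become typed properties.
// "github-repo-sample.json" is a hypothetical local sample of the GitHub
// repository API response.
type Repo = JsonProvider<"github-repo-sample.json">

// Convenience: get the sample itself back as a typed value.
let sample = Repo.GetSample()
printfn "%s" sample.Owner.Login      // nested field, fully typed

// Parse another instance of the same shape at runtime —
// here a second file, e.g. the F# project's repository JSON.
let other = Repo.Parse(System.IO.File.ReadAllText "fsharp-repo.json")
printfn "%s" other.Name
```

If the sample file is later changed (owner renamed to author, say), the generated type changes with it, and every use of the old property becomes a compile error — which is exactly the format-change safety net described above.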
We didn't really have a focus on making it a functional application or something like that. We were more pragmatic; we just wanted to get it done. F# gives you a lot of nice benefits because you can use it at any level you want: you can use it as a nicer version of C# if you want, or you can use it as a very high-level functional programming language, depending on which features you use. So in the end, the core components — the parsing, the validations, a lot of the business rules — were written in F#. The top level was mostly also written in F#, but in a more standard imperative style: controller actions and so on. All the glue logic was more imperative. So yeah, this was the philosophy we had with F#: be pragmatic. You are learning this language; you don't know it completely. So use whatever parts you know well, and try to get leverage wherever you can. But also, if you feel there is a specific area where you can invest time and learn, and use that for greater leverage later — do that. You can't really expect to learn a language very quickly, right? Our release schedule was very tight — we had like six to eight weeks to build and launch the product — so it was a very focused execution period. We learned specific parts of the language and used them. And over time, we have actually gone back and changed some things which were not great, not well done. So mostly, when we started out with F#, we used the basics: pattern matching, currying, pipelines, and so on. I'll give you a very short introduction to these in case you're not familiar. The pipeline operator in F# is actually a very simple operator. Its definition is as simple as: pipeline of x and a function is the function applied to the data. So it turns your code into something like a Unix pipeline.
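The operator really is that small. A minimal sketch of the definition and the reading style it enables (the shadowing definition is for illustration only — F# already ships `|>`):

```fsharp
// The whole definition: take the data on the left, apply the function.
let (|>) x f = f x

// Reads top to bottom as a pipeline of stages — the arbitrary
// transformations here just illustrate the shape.
let result =
    [1; 2; 3; 4]
    |> List.map (fun x -> x * x)      // [1; 4; 9; 16]
    |> List.filter (fun x -> x > 4)   // [9; 16]
    |> List.sum                       // 25
```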
You're giving data in on the left, passing it to a function, passing it to the next function, and so on. You can have an arbitrarily long pipeline of three, four operations, like over here. And it just feels very natural; it's very easy to read. It was especially well suited to our business domain, because there were actually multiple stages of operations we had to do in sequence. Pattern matching, again: F# pattern matching is very powerful, especially because you can match on multiple values — you can match on a tuple of values. So if you have nested conditions — if this, do this; then look at another variable, then some other variable; you have three top-level variables — you can put everything into a single pattern match and flatten it into a really nicely readable decision tree, almost. We might have gone a bit overboard, because we ended up writing match everywhere, even for single cases. But yeah, it was a very, very useful feature for us. And as I was learning the language and building the product, I realized that partial application is actually a very, very nice way to encapsulate logic from one layer to another. I can give you an example here: okay, let's say that you want to do some validations. In this case, validation requires you to specify the quarter, the return type, and the deduction. But at a high level, from a top-level view, for my UI I just want a function that says: okay, take this deduction, validate it. So I can freeze in the context-dependent variables — quarter and so on — give it a specific version, and get a function which is just going to take a tax deduction and validate it. So this was very useful: you have some low-level code which is doing very specific things.
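A sketch of that partial-application pattern — all the type and function names here are hypothetical, not the actual ClearTDS code:

```fsharp
// Hypothetical context types for the validation example.
type Quarter = Q1 | Q2 | Q3 | Q4
type ReturnType = Salary | NonSalary
type Deduction = { Name: string; Amount: decimal }

// Low-level validation needs the full context. (The rule here is a
// placeholder; the real rules depend on quarter and return type.)
let validateDeduction (quarter: Quarter) (returnType: ReturnType) (d: Deduction) =
    d.Amount > 0m

// Freeze the context-dependent parameters; what's left is exactly the
// simple shape the UI wants: Deduction -> bool.
let validate = validateDeduction Q3 Salary

validate { Name = "contract fee"; Amount = 1000m }   // true
```

Note that this only works smoothly because the context parameters come first and the deduction comes last — which is the point about parameter order made just below.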
At the top, if you want a generic type signature or something like that, you just apply whatever you need to make it more general. The order of the parameters really makes a difference, though. I'm not sure if this is readable for everyone, but we also ended up using very simple data structures to encode very complex rules — in this case, for the UI. So for the software, there were different modes of operation, different tools: this field is editable if the return is in this mode, otherwise it is not editable; sometimes it is read-only; sometimes it is invisible; sometimes it is hidden; and so on. So we were able to just write a UI spec in a very simple manner. I'll quickly run through this. At the top are some type declarations. A column specification is an expression, plus a function which says whether the column is visible, plus a function which says whether the column is editable. And these can be multiple functions, because there could be multiple rules on top of it. So at the end, I can actually see: okay, for the UI, when I'm looking at a date field, the rule for visibility is show it always — show the date always and edit it always. But for some other field, like section code, show it only for revised returns, and only edit if it is null, and so on. So I could compose different conditions, put them very easily into a data structure like this, and have some other layer which interprets this data structure and creates the actual UI specification. It gave me a separation between the rules — what should be done — and how it should be done, and made it much, much simpler to implement, and actually to read as well, because everything was not closely coupled together. So while doing the implementation, one of the key requirements of this project was that it needed to be able to handle large amounts of data.
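The UI-spec idea can be sketched like this, with hypothetical field names and rules standing in for the real ones:

```fsharp
// Hypothetical context and row types for the UI-spec example.
type ReturnContext = { IsRevised: bool }
type Row = { Date: string option; SectionCode: string option }

// A column pairs its field with small predicate functions:
// the data structure says WHAT the rules are; a separate layer
// interprets it and builds the actual UI.
type ColumnSpec =
    { Field: string
      IsVisible: ReturnContext -> bool
      IsEditable: ReturnContext -> Row -> bool }

let always _ = true

let columns =
    [ { Field = "date"
        IsVisible = always                              // show the date always
        IsEditable = fun _ _ -> true }                  // and edit it always
      { Field = "sectionCode"
        IsVisible = fun ctx -> ctx.IsRevised            // only for revised returns
        IsEditable = fun _ row -> row.SectionCode.IsNone } ]  // only if still null
```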
And you have people coming in with input from the UI, from third-party systems like payroll systems, from Excel files or bulk uploads, and so on. So you can have bulk inserts and a lot of operations happening simultaneously. So we found a very nice ORM, which was actually built for C#, which is almost like a SQL generator. We wrote a very small wrapper on top of it to get it working nicely with F#, and this is, in the end, how we're doing databases. I'll give you an example over here: I want to load a tax deduction by name. I'm giving everything as an expression over here — this syntax where I'm putting things in the angle brackets is basically a quotation. It's a special syntax in F# which keeps the expression tree in memory instead of compiling it into a function. So I'm just giving a condition, which is some predicate saying name equal to the given name; an ordering, for the SQL order-by clause; and pagination. Given these three things, I could give it to a low-level method which would generate the SQL — in this case, select star from deduction where name equal to that, and so on. So it really worked well at this level. But of course, since you are on a very hurried schedule, there were some mistakes we made. I think one of the biggest mistakes was that we didn't leverage the type system of F# enough. When I was learning the language, the functional aspect was very visible — how to compose functions, how to use these things — but how to use types properly was actually not very obvious. So we ended up with a data model that was one class — again, using a class because the ORM was C#-specific, which required a class instead of an F#-specific type.
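A rough sketch of what such a quotation-based wrapper might look like. `loadDeductions` and everything inside it is hypothetical — the point is only that the predicate and ordering arrive as inspectable expression trees (`<@ ... @>`) rather than compiled functions, so a lower layer can translate them to SQL:

```fsharp
open Microsoft.FSharp.Quotations

type Deduction = { Name: string; Amount: decimal }

// Hypothetical low-level wrapper: takes the condition and ordering as
// quotations plus pagination, and would walk the expression trees to emit
// something like "select * from deduction where name = @name order by ...".
let loadDeductions (where: Expr<Deduction -> bool>)
                   (orderBy: Expr<Deduction -> string>)
                   (page: int * int) =
    // ...translate the trees to SQL and execute; elided in this sketch.
    ()

// Usage: the caller writes ordinary-looking, strongly typed F#.
loadDeductions <@ fun d -> d.Name = "rent" @> <@ fun d -> d.Name @> (0, 50)
```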
And this high-level model was used throughout, in the sense that for data storage, for business logic, for UI, it was a common model across different layers. So the code was pretty simple, but it got polluted. I can give you an example: since this ORM did not support F#-specific types like discriminated unions or algebraic types, we had to use nullable values. And if you're using nullable values throughout the code, you are polluting it with nulls. The right thing to do would have been to separate it out and have different layers of types, but we didn't do that — that was the mistake. So, yeah, if I were starting this project now, I would spend a lot more time getting the types right and doing a more layered architecture, instead of having the same types used throughout. Some of the issues were non-obvious. In F#, sequences — the seq type specifically — are lazy. So if you have a long pipeline with eight different transformations and use it as a sequence, it's great, because if you're evaluating only three items from the sequence, only three items will be computed. But it's also non-trivial, because if you re-evaluate it — if you go from the beginning again — it is going to invoke everything again. It's not going to cache or memoize the results by default. We didn't really realize this at first, and we were seeing a lot of operations being repeated again and again. So we had to figure out when to use a sequence type and when to use a list type. There was a separation: a list would evaluate and give me all the output, but then it is not lazy, and so on. So it was a tradeoff we had to make, and we didn't realize it in the initial stages. So one huge performance issue that we realized after launching: we used these expression trees that I had shown you for the database layer and so on.
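The laziness gotcha is easy to demonstrate: a seq pipeline re-runs its stages on every enumeration, while a list materializes once.

```fsharp
// Count how often the "expensive" stage actually runs.
let mutable calls = 0
let expensive x =
    calls <- calls + 1
    x * 2

// seq is lazy: each enumeration re-invokes the whole pipeline.
let lazySeq = Seq.map expensive [1; 2; 3]
Seq.sum lazySeq |> ignore
Seq.sum lazySeq |> ignore
printfn "seq: expensive ran %d times" calls    // 6 — once per element, per pass

// list is eager: the work happens once, at List.map.
calls <- 0
let eager = List.map expensive [1; 2; 3]
List.sum eager |> ignore
List.sum eager |> ignore
printfn "list: expensive ran %d times" calls   // 3 — once per element, total
```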
We used them very extensively throughout the code, because it is very useful to have a strongly typed expression from which I can get the string name using some code — I can inspect the expression, see what nodes it connects to, and so on. So for the generation of the file — the final TDS return file — I was using one expression per field, per line. This was working well for simple, smaller use cases, but these expression trees are actually pretty, pretty expensive to build. Last I've seen, they are a few hundred times slower than creating a normal lambda function. So when we were testing this and profiling the usage, the profiler said 99.99% of the time was being spent just building these expression trees and evaluating them. So we removed them from that part — instead of expression trees, we used something else — and got a huge improvement in that part of the code base. The profiler really helped us here, because we would not have realized this was the case without using a profiler. There was also a sort of minor problem: we had to figure out which data structures to use. F# works on the CLR, but it has its own parallel data structures. A list in F# is different from a list in C#: the F# list is an immutable data structure, a linked list, and C# lists are different — really different namespaces, different classes. The main issue was that, again, third-party libraries would expect a C# type, so you have to convert from F# to C#. And since the F# list was used throughout the code base and it's a linked list, some types of operations were much slower. Wherever required, we had to replace it with an array or some other type. So yeah, we had tooling issues also. This was back in 2014 — the story is actually much nicer now.
We were using Windows and Visual Studio for developing this, and the support for F# in Visual Studio is not as great as for C#. It is slower; it crashes; sometimes it hangs. One very interesting thing is that F# projects do not allow cyclic dependencies, so the order in which you include files in the project matters. If you have file A and then file B, you can't refer to file B from file A — you can refer only in the order they appear. That is actually a very nice feature, because any time you open an F# project, you can read through it in sequence and understand it completely, instead of jumping back and forth. And it makes the dependencies very clear: this is a dependency of this, and so on. But Visual Studio, for example, did not have an option for adding a file at a specific position — you could only add an item at the end. So we actually had to hand-edit the project files and move entries up and down. So yeah, the F# tooling has improved in Visual Studio, and there is also much better tooling now for non-Windows users: you have Atom and Visual Studio Code, with cross-platform support. But it's still not as good as the IDE story you would have for C# or Java. Maybe it's getting there, but it's not as great. One very interesting issue we kept hitting: F# compiler versions would update, and we often had this case where you write some code which works locally, you deploy it to a production box, and it fails — and it was very, very annoying to figure out the cause. It usually ended up being a compiler version mismatch. In the initial versions of most of Microsoft's tooling, what used to happen was that when you install a runtime, it would install at the global, system level. They have now moved to a local, package-dependency model.
But again, that becomes a little complicated to manage sometimes. So yeah, we used to run into this problem a lot earlier; it's better now. So, okay, as we started using the language, we figured out the different tools the language gives us to write code in a better way. One other interesting feature of F# is something it calls computation expressions, which are a sort of syntactic sugar for monads — I'm not going to go into detail there. Basically, when you define a computation expression, you define bind and return, two operations, and the compiler translates it to a different syntax internally when it runs. I can explain with an example. Over here, this maybe CE is a computation expression. Maybe is not a built-in expression; it's a thing that we built ourselves — but everyone ends up writing this, or takes it from Stack Overflow or somewhere and puts it in the code base in the end. What this code does is calculate the late fine for some TDS if you've not paid it on time. But you can only calculate the late fine if both dates are present, and both dates may not be present — they are option types: credit date and TDS date. If they're not present, I can't do the late fine calculation; otherwise I can. So what this does: inside the maybe, whenever I call this let-exclamation (let!), if the value on the right side is None, it will fail at that point and return None at the top. So the late fine is also an option at the end — it has some value, or None. This is basically almost the same as writing this: you take one value, match it with None, then go ahead; otherwise go into it again, and so on. But that version is much harder to read, and it's also very annoying to write. This is much simpler, and it can go to any level, basically — you can have four variables which are optional.
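The hand-rolled maybe builder referred to above is typically just a few lines — this is the common community version, not necessarily the exact one from the code base — with a hypothetical late-fine calculation on top:

```fsharp
// The classic "maybe" computation expression: define Bind and Return,
// and the compiler desugars let!/return into calls to them.
type MaybeBuilder() =
    member _.Bind(x, f) = match x with Some v -> f v | None -> None
    member _.Return(x) = Some x

let maybe = MaybeBuilder()

// Hypothetical late-fine calculation: both dates must be present,
// and each let! short-circuits the whole block to None if its value is missing.
let lateFine (creditDate: System.DateTime option) (tdsDate: System.DateTime option) =
    maybe {
        let! credit = creditDate
        let! tds = tdsDate
        return (tds - credit).Days * 200   // placeholder per-day amount
    }

lateFine (Some (System.DateTime(2016, 1, 1))) None   // None — a date was missing
```

Without the builder, the same logic is two levels of explicit `match ... with None -> None | Some ...` nesting — which is the harder-to-read version the talk contrasts it with.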
So this actually was very useful for our top-level glue code, for controller-level things where I'm getting data from parameters or from some data sources and deciding what to do. This was great. But without it, we used to have a lot of nested conditions inside the controllers, because F# will force you to pattern match on something if you don't cover all the cases. So if you have nested things, the code would end up like that. We ended up flattening parts of it wherever suitable. There was also a very interesting learning over time, as we matured with the language. The compiler is very smart: it can do type inference, so you don't need to actually specify types — it will be able to figure out that this is a function adding two integers. So initially we didn't really annotate functions with types often. But as we worked on it over time, we realized that the type hints, the type annotations we give, are actually better suited for reading the code back. When you come back to that code after a few months, if you have annotated it with the types properly, it's easier to understand what it does. Otherwise, you have to go to the IDE, mouse over it, and see what it means. So we started putting the annotations in manually. We also started doing type aliases: let's say I'm using some combination often — a few things together in some tuple — I can define an alias that says this type equals this combination of types. And it's much easier to talk about column specifications than to say the verbose thing. So yeah, another regret I have is that we didn't really use the algebraic types, the union types, well. We mostly used them at the very leaf level of the data structure — this is either nullable or not, or something like that.
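A small sketch of the aliasing idea, with hypothetical names:

```fsharp
// Hypothetical context type for the alias example.
type ReturnContext = { Quarter: int; IsRevised: bool }

// Instead of repeating this verbose tuple shape in every signature...
//   string * (ReturnContext -> bool) * (ReturnContext -> bool)
// ...alias it once, and signatures can speak in domain terms:
type ColumnSpec = string * (ReturnContext -> bool) * (ReturnContext -> bool)

let dateColumn : ColumnSpec =
    ("date", (fun _ -> true), (fun _ -> true))
```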
But we could have leveraged the union types at a higher level; mostly they were used at the leaf level. So yeah, this is something we could have used better. So, okay. One very interesting insight is that we initially assumed that getting people to work with F# would be hard, but it's actually not the case. We have three people working on this product right now. One person has some prior Haskell experience; one person has worked mainly with C#; one is a college fresher. And everyone ramped up and was able to contribute to the product very quickly. In-depth understanding will take some time, but it's actually a very easy language to learn and understand. And I'm really glad that someone learned F# as their first language to work with. Okay, so finally: regardless of what language you use, it ends up being that the way you structure the application really matters. The design of the application matters. You can write a bad application in F# and a very good application in C. Good design is always going to be useful, and that is independent of language. And wherever we took shortcuts, even in the F# project, we eventually had to go and fix them. But functional languages will help you make the right choices along the way, because they give you more tools — they have all these tuple types and discriminated unions and so on. When we started, again, most of the libraries we were using were C#-specific libraries, since F# was still very young. But now it's actually a different situation: there are projects which treat F# as a first-class citizen, and you don't really need to make some of the suboptimal choices we made earlier. Now there are F#-specific ORMs and a lot of these things available. Okay, so here is where F# is being used right now.
The whole TDS product was completely built in F#, and inside ClearTax some features are also running on F#. Most of it is still C# — basically it's a tooling issue: we couldn't mix and match two languages in the same project. So it was either rewrite everything in a new language or continue working with the same one, and a rewrite is not really a huge win. One of the other interesting things we did with F# was Canopy, which is a DSL over Selenium — you are able to write browser tests very easily in F#. So yeah, that's it. Any questions? [Audience question about the varying file formats] We were actually using the CSV type provider, because with the CSV provider I can just say: use caret as the separator. The number of fields was different, and we had to figure out the type. So we had a layer on top saying: if this is of some type, go to this provider, and so on — the logic at the top layer would dispatch across the type providers. And you can actually provide hints to the type provider: at least in the CSV provider, for the file headers you can put a hint saying this is supposed to be a string, or this is supposed to be an int, and it will try to coerce it. Yes — but again, if it can't parse it, you probably want to take it as a string, because you don't know which format it's going to use in the end: MM/DD or DD/MM or ISO format and so on. [Audience question about the team] These three people are working on this project full time. The whole engineering team right now is around 16 people. ClearTax is mostly C#, and some of the projects are Java. [Audience question about the front end] Initially it was mostly a server-side rendered application, but all the CRUD aspects were taken over by JavaScript. It was a grid-based interface, something like Excel, because there is a huge amount of data — we don't want people creating entries one by one. So we had a JavaScript library on the front end which would render it in a grid, and communication was JSON.
So this is on the server side. When I evaluate this with a given year, given quarter, given TDS return type as input, I get a JSON specification. There was a separate layer which would just go over these functions, run them, and produce a specification for the UI. So it's a function: showAlways is a function which takes in some specification, year, quarter and so on, and returns a result. Correct. For editing, for example, editAlways is a function that takes a deduction and returns all of this. So there was a separate layer which would go through this data structure and call the functions one by one, and if there are multiple functions, it would take a combination. This is being done on the server side: the server would evaluate this for a specific instance. If you're working on, say, 2016 quarter three, it would evaluate this, figure out what is required, and create a JSON specification as output. That JSON specification would be used by the front end to actually create the fields.

So we still don't use types as much as I'd want, because it would require spending time rewriting large parts of the application. It's a gradual process. It's not that difficult, but for the longest time this was not a product which had fixed people working on it. Only recently do we have three people working on it; for the longest time we were focusing on ClearTax, the filing product, and then on this once a quarter or so. If we had had time and resources, we would have made a lot of changes. But by now we have people working on it full time, so we are actually seeing a constant rate of improvement.

Every quarter, every three months, it would change. Sometimes the changes are very small, in the sense that one field is added or some field is marked optional. Sometimes they are very big changes.
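As a rough sketch of what that rule layer looks like; every name here, the record shape, the fields, and the cutoff quarter, is made up for illustration, not our actual spec:

```fsharp
// Hypothetical sketch: each field's behavior is a plain function of
// (year, quarter); a separate pass evaluates every rule for one concrete
// quarter and flattens the result into a spec the front end can render.
type FieldSpec =
    { Name: string
      Show: int * int -> bool        // (year, quarter) -> is the field visible?
      Required: int * int -> bool }  // (year, quarter) -> is it mandatory?

let showAlways _ = true
// Tuples compare lexicographically, so this reads "from year y0, quarter q0 on".
let requiredFrom (y0, q0) (y, q) = (y, q) >= (y0, q0)

let fields =
    [ { Name = "pan";   Show = showAlways; Required = showAlways }
      { Name = "email"; Show = showAlways; Required = requiredFrom (2016, 3) } ]

// Evaluate for a specific instance, e.g. 2016 Q3; in the real app this
// structure was then serialized to JSON for the grid UI.
let specFor year quarter =
    fields
    |> List.map (fun f ->
        f.Name, f.Show (year, quarter), f.Required (year, quarter))
```

Keeping each rule as a pure function is what makes quarterly rule changes a matter of editing one entry in a list rather than touching UI code.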
This type is no longer allowed and we are using this one instead, and so on. Every quarter it can be a toss-up whether it's a small change or a big change. The government would release this sort of thing as an Excel file, which contains all the fields and so on. So you would take that, put it in the sample specification, and when you compile, it will tell you where it's breaking. For additional fields, at least, yeah; existing fields, if they are removed, will be automatically highlighted, but additional fields you have to add yourself, yes. And we don't really need to maintain history, because once the government moves to a new version, they also stop supporting old versions. So in this case we didn't need to maintain versions of it. No, not the actual statements; the data yes, but not the statements for one quarter. Abhirajan is also from my team; you can actually chat with him later if you want. Any questions?

So we didn't ship a bug in the file parsing, which I'm very happy with. That is the main thing. In the newer layers, the UI states and that sort of messy code, we have shipped bugs. So we have not seen a huge amount of difference at that layer, but in the core business logic it has definitely helped. Anyone else?

How are they different from...? Let me see if I can just show it here. This is a very simple lambda function: add is a function which takes two integers and returns their sum. If I quote it, this becomes an expression, a quotation of that expression. The type of add is a function which takes an integer and returns a function which takes an integer and returns an integer. The quoted one is a quotation expression of that same type. Using this I can invoke it, though I'd probably need to import something, but I can also go inside the expression tree, see the expression nodes, and go into them. The whole expression tree is available at runtime.
You can go into it and see what is there. It runs on the CLR, and the CLR will keep the generic type information. Yes, otherwise this is compiled. It made things nicer; I was able to write very readable code. So we used that, but we didn't realize the cost. When we saw it was so slow, we had to remove it and use something else. Mostly, when you have an expression like this, what we ended up doing was, let's say you have a type. We mostly ended up using the expression tree as an accessor, in the sense of: given an object, give me some property of that object. Using that, when I evaluate this expression, or go into it, I can get this Length property as a string; I can get that name back. So for generating SQL or mapping to JSON or something like that, you can take the name of the property from that expression and use it somewhere else. And you get the benefit of static typing here, as well as, at runtime, getting the actual name. So it's similar to reflection in our use case, but you can do a lot more. You can actually build these expressions at runtime too; you can compose or create the expression tree at runtime, evaluate it, and so on. That's right. Let me verify; I'm not 100% sure. There is a package I have to import which gives me a way to invoke it. I can't remember: System.Runtime dot something, Microsoft dot something. I can't remember it.

All right, I think we are done with the questions. We can catch up outside as well and go into anything in detail you guys want. Thanks a lot. You can reach me on Twitter if you want, or by email at ClearTax. All right.
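To reconstruct what was shown on screen: quoting gives you the expression tree at runtime, and the accessor trick pulls a statically checked property name out of a quotation. This is a minimal sketch; the `Deduction` type and its members are illustrative, not from our codebase:

```fsharp
open Microsoft.FSharp.Quotations
open Microsoft.FSharp.Quotations.Patterns

// The add example: the quoted version has type Expr<int -> int -> int>,
// and its whole tree is available at runtime.
let add = fun (a: int) (b: int) -> a + b
let addExpr = <@ fun (a: int) (b: int) -> a + b @>

// Walk into the tree: the outermost node of a quoted lambda is a Lambda node.
let firstParam =
    match addExpr with
    | Lambda (v, _) -> v.Name
    | _ -> failwith "unexpected shape"

// The accessor trick: given a quoted property access, recover the property
// name as a string, with the compiler checking the property actually exists.
type Deduction = { Pan: string; Amount: decimal }

let propertyName (accessor: Expr<'a -> 'b>) =
    match accessor with
    | Lambda (_, PropertyGet (_, pi, _)) -> pi.Name
    | _ -> failwith "expected a simple property accessor"

// Usable for SQL column names or JSON keys without magic strings.
let column = propertyName <@ fun (d: Deduction) -> d.Pan @>
```

Pattern matching on a quotation only inspects it; actually evaluating one needs a separate evaluator package (FSharp.Quotations.Evaluator is one such library, though I can't say whether that's the exact one I couldn't recall).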