for a startup called perfios.com. And in this talk, I'm planning to share with you my ongoing experience of using, or trying to use, Erlang where I work in my day job. A couple of things before I start. This is not an introduction to Erlang. This is not an introduction to functional programming. This is what I would describe as a war report. It's an ongoing war. It's an ongoing project. It's not completely in production yet. It's still at the proof-of-concept, staging point of its life cycle. So I'll get started. I'm a functional programming beginner, even a novice, you could say. And I haven't used Erlang in anger earlier, except for very trivial stuff. This is the first significantly sized Erlang project that I'm working on. As I stated, this is very specific to a particular experience that I'm having. It doesn't really play to all of Erlang's strengths and the areas that it was designed to tackle. With that caveat, in fact, even though the talk is slotted for 45 minutes, it won't take 45 minutes. I think I'll wind up in 25. With that, I'll get started. Perfios is my employer. That's where I work. I'll give you an overview of Perfios, because what Perfios does is very relevant to the kind of problem that I'm trying to solve with Erlang. Perfios was started as a mint.com for India. I don't know how many of you have heard of mint.com. Is anyone familiar with Mint? You are? OK. Mint is effectively a personal finance tracking platform. Aggregate all your accounts under one roof, view reports that tell you about your spending, and stuff like that. The founders of Perfios had considerable investments, and they were finding it difficult to manage them, track everything. So they came up with a tool which did it for them, and then eventually it evolved into a product. That's how it started. Perfios launched about six or seven years ago. Over the years, we have grown into a small company. We are not, strictly speaking, a startup anymore. We are a small company. We are about 40 strong.
And from a single product, it has grown into a suite of products which caters to the individual, as well as businesses. The businesses that we cater to are mainly banks, non-banking financial companies. Primarily, people who are in the business of lending money. What does Perfios do? We collect and we analyze financial data. In the case of individuals, we collect and analyze personal information and data from banks, credit cards, mutual funds, you name it. Whereas in the case of banks, it's a slightly more complicated story, which I'll come to in a second. So how do you collect data? Consider that you are an individual user and you have 10 bank accounts. You join Perfios, you log into Perfios, and you provide Perfios with your bank credentials. Perfios then scrapes this data from your bank's website. RBI has mandated very recently that banks should make data available in a standard format to all third parties who are interested in aggregating data. This hasn't happened yet. This is just a mandate that has come out. So for the time being, and for the past few years, anybody who wants to get data from a bank is obliged to log in as their customer and then scrape it from their website. That's what we do. That's part of what we do. The other part is from bank statements. I don't know how many of you get e-statements from your bank, PDF and stuff like that. Users can upload e-statements to Perfios' website, where they'll be scraped, parsed, whatever you call it, and then the same information will be extracted as we would have from a bank website. There are also credit cards, then mutual funds, stocks and shares, and so on. But effectively, it is collecting data in one form or the other by scraping or parsing a bank statement, analyzing it, and coming up with useful information. So what kind of analysis do we do? For instance, let me give you an example. If you are a money lender and somebody approaches you for a loan, it could be a personal loan. It could be a capital loan.
It could be some other kind of loan. One of the things that you want to verify is: does my potential customer have the wherewithal to pay back my loan in the stipulated installments? This is called income verification. Currently, the process is something like this. I approach a bank for a loan. They ask me, OK, show us your bank statements for the last six months. At this point, I have two options. I can print it out, like a hard copy, and then give it to the bank, or I can email them my PDF statements. In any event, these are relegated to another team. There is a separate team whose entire job is to go through these statements, note down all the interesting transactions, put it into an Excel sheet, then run some custom macros on that Excel sheet, and finally come up with some numbers which say, OK, this person is qualified, or he's not qualified, or he is qualified, but we will not give him a loan of more than so many rupees. His EMI will be so and so. Primarily, banks are interested in finding out if the EMI that they are going to charge a customer is within the limit of the customer's ability to pay, to pay safely, because at any cost, they want to avoid people who default. This process of data entry and extracting information from bank statements is very cumbersome. It takes days or even weeks, particularly in the case of small businesses, where you have thousands or even tens of thousands of transactions in a month, and you have to go through all of it and find out what is relevant. For instance, are there any bounced cheques? Is there any outgoing payment which can be considered as a drain on this person's income? What is the average bank balance of this user on the 5th of the month, on the 10th of the month, on the 15th of the month, and so on? Perfios provides a solution where the bank, or the money lender, or the NBFC, non-banking financial company, can upload their end users' bank statements to the Perfios platform.
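To make that concrete, here is a toy sketch in Python of the kind of per-day balance statistic just described. The rows, figures, and field layout are invented for illustration; real rows would come out of the statement parser.

```python
from datetime import date

# Invented sample rows: (transaction date, running balance after the transaction),
# as they might be extracted from a parsed bank statement.
ROWS = [
    (date(2016, 1, 5), 52000.0),
    (date(2016, 1, 10), 47500.0),
    (date(2016, 2, 5), 61000.0),
    (date(2016, 2, 10), 43000.0),
]

def average_balance_on_day(rows, day_of_month):
    """Average the balance observed on a given day of the month, across months."""
    balances = [bal for d, bal in rows if d.day == day_of_month]
    return sum(balances) / len(balances) if balances else None

print(average_balance_on_day(ROWS, 5))  # (52000 + 61000) / 2 -> 56500.0
```

The same shape of computation extends to the other checks mentioned: counting bounced cheques, flagging recurring outgoing payments, and so on.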
Perfios will analyze it and then come up with these statistics and generate a very tailor-made report for that particular customer. Some of our customers include Kotak Mahindra Bank, Bajaj Finserv, ICICI Bank, HDFC. So if you've taken out an HDFC loan recently, or an Indiabulls housing loan, that's a very recent customer. So if you've taken a loan from Indiabulls for housing, then chances are that we have processed your data and generated a report for Indiabulls. And that is the context. What do I do? I head this particular product, which helps money lenders and makes their lives easier. How does it work today? There are a few distinct components, one of which is, of course, capturing this data from the end user. I won't go into that in much detail. It's standard HTML, JavaScript, Flex, whatever. And we also have an API. Banks use one of these different modes to pass the data on to Perfios. Orchestration of everything is done by Java currently. Scraping and parsing, however, are done in Perl and Python. Bank websites keep changing practically every day, and we support something like 300 different institutions. Consequently, we use a dynamic language to parse, and between Perl and Python we have roughly a 70-30 split. So that forms a significant chunk of our code base. Everybody talks to Java. Java invokes Perl and Python as necessary, parcels out the individual jobs and says, okay, here is a statement I want parsed. Please go ahead and give me the transactions or any relevant information. Or, here are the credentials. Go talk to this website, fetch my user's bank transactions for the last three months, six months, whatever, and give them to me. Finally, the data is collated, and then we pass it to R for analysis, and R does some basic statistics and so on, and eventually a report is generated. This is what is happening today. So this is what it looks like.
You have a user input layer, then you have orchestration, then you have two different types of processing. One is scraping from a website, the other is parsing or processing a PDF statement. Then eventually you collate it, and then finally you analyze it. Here, the problem, or the area where we would like to see an improvement, is this orchestration and parsing. We are growing at the moment. The number of transactions we get per day is increasing, and as we grow, we are finding it more and more difficult to scale up these particular areas. User input scaling is very easy. Java can easily handle it. Analysis is very compact, no problems there. But parsing and scraping, that is where our biggest challenge is. What would we like to see? We would like to see something like this. User input, orchestration, and then a horizontally scalable layer where you can distribute parsing and scraping to one or more machines, across machines, across nodes, clusters, and then have it all collated and analyzed. In fact, you can distribute this part also, but at the moment there is no need for that. This is what we would like to see. So scaling with demand is one problem. Then we need fault tolerance and distribution. If you use Erlang, then all these terms sound familiar to you. Bank statements are very bad. I work with them every day. Within a bank, you have any number of formats. There is no one consistent PDF format that a bank sticks to. Even some of the better banks, for instance Citibank, have three or four. And don't even get me started on the corporate banks and the cooperative banks and so on. They are much worse. Within a bank, from branch to branch, from state to state, from location to location, the structure of the bank statement varies. Transactions may not be sorted. In fact, I don't know how many of you actually go through your bank statement and verify that all the transactions are there. Some of them don't even get reported for months.
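As an aside, "transactions may not be sorted" means the parsing layer needs a normalization pass before any statistics are computed. A toy sketch in Python, with invented rows:

```python
# Toy normalization pass for the "transactions may not be sorted" problem:
# sort parsed statement rows by date before computing any statistics.
# The rows here are invented; real rows would come out of the PDF parser.
rows = [
    ("2016-01-10", -500.0),   # (ISO date, amount)
    ("2016-01-03", 1200.0),
    ("2016-01-10", 75.0),
]

# sorted() is stable, so same-day transactions keep their statement order.
normalized = sorted(rows, key=lambda r: r[0])
```

ISO-format date strings sort lexicographically in chronological order, which keeps the key function trivial.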
At some point, the banks decide that, okay, let's reconcile everything. They run some batch process in their mainframe, and then suddenly some transactions pop up. And those are relevant to you. You don't really care, because I think most people look at the amount in their bank account and say, okay, that's somewhere in the ballpark that I expect it to be, it's cool. I don't really worry about all the individual entries. When you're working in a company and you're looking at 30,000 transactions in a month, then all this becomes important. Because if you miss out on an interest payment, that may affect your working capital for the next month. If you miss out on a few payments which the bank is supposed to have cleared and it doesn't happen, then you have a problem. So bank statements: very bad. They're, like, terrible. No, RBI has mandated that, and they're supposed to provide it within a year, but nobody is holding their breath. I'm sorry? No, it is just a declaration of intent. I mean, knowing banks, yeah, I wouldn't be surprised. In a way, we are counting on it, because that's our bread and butter. So, hey, anybody can do that part, because then it's just analysis. Our processes are CPU intensive. Whether you use Perl, Python, Java, whatever, it doesn't matter. There has to be some crunching. There is some crunching involved. Lots of ugly regexes and stuff like that. So what do we do today? Okay, let's say the numbers are going up, business is booming, good. Okay, we are hosted in Amazon. Let's move from a tier two machine to a tier three machine. Bump it up, add more cores, add more RAM, and that should probably take care of it. So far it has. But we can see, even at the modest rate that we are growing, that within a year, or maybe half a year, we'll be in trouble. This model won't scale indefinitely. It's not going to scale. We did do some horizontal scaling using Java. Some of it out of necessity.
Some banks actually block your IP address if you're not based out of India. So if you want to scrape data from certain banks, you have to have a server in India. Most of our servers are hosted in Amazon Singapore, but we are obliged to host some in India just because some banks expect you to use Indian-based IP addresses. Without some funky IP spoofing, you can't really pull that off. I'm sorry? So you have a bunch of issues, and Indian cloud providers, please don't even get me started. I mean, they're terrible. Machines go down, they arbitrarily reboot, and they're not Amazon. Amazon is not in India yet, or rather they just came in and they're just settling in. But some of our customers insist on having their data kept in a particular data center of their choice, and those providers are terrible. I won't name them, but we have had system restarts four or five times in a row with no notice. They just decided to pull out the hard disk and put a new one in. Fine. So these are the problems we've faced. So honestly, Java is not really the right tool for the job. Of course, it's a Turing-complete language. You can make it do all these things, but that's not what it is cut out for. Our solution is cumbersome, it's verbose, it's reinventing the wheel, and I feel that we can do much better. Please. I'm sorry? Hadoop, strictly speaking, has a lot to do with batch processing. Most of it is, yes. It is true, you can use Hadoop. But what I feel, and this is again a personal opinion rather than a considered position, is that Hadoop is overkill for a lot of cases. If you're dealing with terabytes of data and you want to process it, crunch it over machines, then yes. But when what you want is fault tolerance and scaling, I don't think Hadoop is really the thing to use. I mean, my hard disk got pulled out.
I don't think Hadoop will particularly help me in figuring that out and moving the load over to a different node and so on. I'm sure it can with sufficient tooling. But again, like I said, I think Erlang is a better fit for the problem. These are problems which Erlang was made to tackle. You have scaling, you have distribution, you have fault tolerance. And personally, I find Java very verbose. I don't know about you guys, but Erlang is terse. And it is not terse in the way Perl is, because we have significant Perl and some of those guys swear by it, and most of it is line noise. Don't quote me on that. So it's not terse in the way Perl is, but it is easy and it is still very compact. And I personally like to write functional code. It makes my thinking clearer. That's what I feel. Of course, you may differ. Well, is Robert Virding here? No. Okay, I get it. I mean, Erlang syntax can look unfamiliar. Even I found it very surprising at first. You have different kinds of punctuation, like comma, semicolon, and full stop. Nonetheless, in a way it's like Lisp: if you get used to it, after a while you forget that the curly braces or the S-expressions are there. You just start to see what it is. I had the same experience with Erlang. When I started off, I was a bit puzzled, but then once I understood what it was doing and once I got into the flow of things, it was fine. Yeah, the syntax, initially, if you're coming from a C or Java background, you'll find it pretty puzzling to read. I'm sorry? You would be surprised, the kind of English that gets written these days, dropping vowels and punctuation and so on. You'd have to be from an English background from before, like, 2005, I guess. So what are the constraints? Or rather, these are the constraints specific to the problem.
I can change the scaling layer, not a problem. Perl and Python, we have significant investment in, and you cannot change that structure. I'll show you why this is relevant in just a few minutes, but Perl and Python cannot be changed. It has to be a drop-in solution. Whatever Perl and Python code we have today should run with the existing system without requiring any change in the code. And there is a way that Java talks to Perl and Python that becomes relevant, that is very relevant, and that should also be maintained, because that extends from the notion that Perl and Python cannot change. So orchestration can change, scaling can change, collation can change, all these things are mutable. We can change them, and as long as they work, we are good. So this is what I thought about: orchestrate everything using Erlang. And this is something which Erlang excels at. You have multiple nodes, you have multiple machines, fault tolerance and scaling, yes please. So effectively, Erlang acts as an orchestrator, collects input from Java or any other client, and then each node has a farm of Perl and Python processes, and then the data from these nodes is collated and passed on to Java, and from there it goes on to the analytics engine, which is written in R, and so on. This is the outline of the proposed solution. We are not quite there yet, but this is what we would like to have. Now we come to the crux of the problem. In theory, all this sounds very nice. In practice, I encountered a bunch of problems. I'll try to break them down without getting too much into the domain-specific information. So effectively, this is how we execute Perl and Python. We do not run them from the command line as you would expect. For instance, perl followed by the name of a file. Or python followed by the name of a Python script. That's not how things run.
Instead, every Perl script, every Python script is kept in a cache in memory. It's actually encrypted, and then decrypted in memory, but that's irrelevant in this context. We open the interpreter, write the script, let it run, and then when the output is printed to stdout, we read it back. This is the gist of processing. Now, we could be working on a website, as in we could be scraping data from a website, or we could be processing a PDF file. But the process essentially remains the same. Open Perl or Python, whichever it is (no Ruby yet, but even if that happens, it'll be the same). Then latch on to the interpreter, feed it all the code that you want to, with necessary adjustments, then fire it off, let it run, and read the result. All the Perl scripts, Python scripts have been, yeah. That's like, think of Captain America: Civil War. So we have two schools of thought, and the old guard is all Perl and the new guard is all Python. I personally prefer Python. So we have something like a 70-30 mix of Perl and Python. Okay, now as a human, how would I do this? I'll just switch over to a simple Perl program and I'll show you what I mean. Can you guys see this? I'm sorry? Zoom, how do I zoom this? Hold on, I'll increase the font. Hold on, we'll figure it out, no problem. What can I do? I can open it in some other editor. Yes, I just wanted to show you the different variants of it. Okay, wait a second. But rather than spend time on this, let me just tell you what this program does. It's a very simple program. It tries to read a .bashrc file and then print it to the output, that's all it does. So I have a .bashrc file in my home directory. This program reads it and then just prints it, that's all it does. So think of it as scraping some website and printing the data to stdout. It's a stand-in for that. Now, let me do it from the command line, hold on.
Okay, so I have a Perl interpreter open in the command line. I've copied the entire Perl program into memory, and now I just paste it here. So the Perl program is there. Now, to trigger off interpretation, I'll just type Control-D. So this is my .bashrc. It just read my .bashrc and printed it to the output. So this is how a human would do it: open the Perl interpreter, paste the program, and be done with it. And like I said, we don't want to run anything from the command line. That's a constraint which we have. You know, I'm saying that this is how a human would run it. I'll show you how we do it using Java in a second. All right, this is my Java program, which does exactly the same thing. Let me see if I can, I'm sorry. Yeah, yes sir. Ah, thank you. I'm still getting used to it. So, okay, this is my Java program, which does exactly the same thing. You read the Perl program into a single string. You open what is called a ProcessBuilder. A ProcessBuilder is a Java mechanism for interacting with native processes. You set its working directory, you instruct it to capture both stdout and stderr. And then you write each line of code. In this case, there is only one line, I mean, it's one huge chunk. You write it into it. And then you close that writer, at which point it's the equivalent of sending a Control-D. The process starts executing. And then you just wait for it to execute. This is all it does. This works. I mean, I don't want to run it, it'll take another five minutes. But this works. Now the question is, this is what I want to accomplish using Erlang. So how do I go about it? Yeah. So naturally, the built-in solution in Erlang to interact with Perl, Python, any native process is using ports. People here who have used Erlang can testify to that. This is the default solution; it's like ProcessBuilder for Java. Problem is, you open a port.
That is your Perl interpreter. You write your program. All these are easily accomplished. I got stuck at the point where I wanted to trigger execution. A Perl or a Python expects an end-of-file, a Control-D, a particular character to be sent to indicate that I was done typing, I was done sending input to it. And then execution should start. A port close will not do, because port close will kill the program. I mean, it'll just kill the external process. So how do you do that? How do you do that in Erlang? Trust me, there isn't an answer. As far as I could find out by trawling through the mailing list, somebody has asked the exact question. And the explanation is that Erlang ports were not written for this particular purpose. They were written for some purpose which the authors needed. And I can pass you that reference. I looked at the entire thread where people were asking the same question. So the expectation in which it was written was more like a regular OS port: you send a message and you receive a message. So consequently, unless you could figure out a way to send that Control-D, that particular character, to this particular program, there isn't a way. I tried sending the binary equivalent. I tried sending the hex equivalent, but none of it worked. If there is a better way, then I'm all ears. I'm willing to, you know. Like I said, with Perl and Python we have a civil war as it is, so no further acrimony will be permitted. So yes, you effectively need somebody who is listening, who takes the code, does the execution, and returns the result. In the absence of that, this won't fly. And yes, that's a constraint. That's fixed. I can't change the code. Yeah, I beg your pardon? The stdio flag? I tried with that. I'll show you some sample code I have, but hang on a second, please. Can you see this? Okay. Is that? Okay, so: I'm reading a program, opening a port.
I'm sending a command. Effectively, that sends in the entire program in one shot. Then I wait to receive messages from Perl. This is what I tried. And I tried with a bunch of switches. exit_status is one, but I tried with a whole bunch of switches, and none of them seemed to do the trick. I could have missed something, but yeah, this is what I tried. If I remember correctly, by default, that will open file descriptors for the script, three and four. It's got an in, it's got an out. You have to say nouse_stdio for three and four. Okay, because otherwise. I did try this with another program, which behaved in the way Erlang expected it to, and it didn't work. There's a program called GNU Chess, which somebody has used as an example. You can send it moves and it'll send you stuff back. It's written in C. I did try that. The Erlang documentation gives a simple C example of adding two numbers, I believe, or factorial. I tried that also; that worked. But this particular case didn't work. Where does the Perl read its input from? Here, stdin. Oh, okay, Perl reads it from a, okay. Let me show you what I was trying to do. Okay, I've opened the Perl interpreter. This is what I'm trying to simulate, and I'm done. So effectively, I've opened the interpreter, which means that it is waiting; the stdin of that particular process is where I'll be sending input to. That's where Perl expects its input. Does that answer your question? And Perl is not running anything? No, it is not running anything. It's operating as an interpreter waiting for input. I found that closing the port killed the Perl process from the Erlang program. Okay, because I tried a couple of variants of that and I couldn't get it to work. Okay, shall I move on? Yeah, yeah, yeah. I will, I'll definitely have a word with you. Okay, so I consulted one of the mailing lists, and people who knew Erlang were saying that you should use a custom wrapper. Now, I could write a custom wrapper, but that would be time intensive.
So I looked around for a custom wrapper, and I found something which worked particularly well. This is the erlexec library. It's written by Serge Aleynikov, and he has written a custom C++ port program, what you just mentioned, effectively an agent which you can send input to. And the library is well written, I like it, and there is a mechanism to explicitly send an end-of-file. You can send the atom eof, and it'll be passed on to the custom wrapper as an end-of-file, and it does the trick. Or rather, it worked for me. Let me show you the equivalent program. Okay, so you read the program, and then you spawn a watcher process, that's one way of doing it. You send the program, and you can send an eof. That's what I found interesting. That's exactly what I wanted to do. It's the equivalent of a human typing Control-D on the command line. And then you can, am I, yeah, I think I know what's happening. Yeah, because it, yeah, thank you. So I should just stop. Yeah, I'll stay here, but I shouldn't have to. Can I get some help, please? In effect, it does the same thing. Okay, forget the screen for a moment. Let me complete my thoughts. This particular library does two things. One, it lets you explicitly send an end-of-file marker, which tells Perl that, okay, I'm done writing the program, now you can start executing. Two, it gives you a sort of pseudo-monitor. It's not like a real Erlang monitor, but it pretends, or it gives you the same effect. You get a message once the program has exited, so that you know whether it was a clean exit or a bad exit and so on. This serves my purpose. In the absence of further information about whether I can do it better through the built-in library, which I would prefer, I'll stick with this. This is what I'm doing. Rather, this is what I'm working with, yeah. Bringing up the Perl interpreter every time? We had a talk about that, that is true.
To keep the Perl interpreter alive, you need to clean the slate after every execution. For that, you have to make sure that your Perl program doesn't leave any traces. It should do its housecleaning. Like I said, that's out of my hands. If I could, I would say that, you know, we shouldn't even be doing this. So, okay, come back here. Okay, so what is there now? I have a working solution in place. Like I said, it's not production ready yet, but it's used for internal testing and stuff like that. It uses Erlang distribution, where I'm able to run with multiple nodes on the same machine. Those are things which Erlang gives me out of the box. I don't even have to worry about that, yeah. It's the same as killing the port? But killing the port without sending Control-D will just kill it. Think of it this way. You have typed perl in your command line. You've written some stuff, and then you open another terminal and kill this process. What happens? Nothing happens, it just dies. It doesn't start execution. My objective is not to kill it. Killing has a different meaning. The objective is to tell it that input is done, so go ahead and get started with the execution. That's what I'm trying to say here. So, currently, the solution that I've come up with uses JSON over HTTP. It's not REST, I won't call it REST because it doesn't follow all the principles, but the working solution uses JSON over HTTP. But preferably, I would like to use RPC, because this is an internal thing. This is not going to be exposed to the customer. This is completely internal. And I feel that JSON over HTTP is just complicating things unnecessarily. In this case, there is no need to complicate this. The amount of information that you need to pass is very limited, it is very structured. See, a user has uploaded his bank statement, or he has given us his credentials. Now, along with that, we have some other information.
He has uploaded his, or rather, let's take a simple case of web scraping. He has given us his Citibank user ID and password, and he has told us that this is an IPIN and not a QPIN. That's a particular identifier that Citibank uses. So we know that it's Citibank, username, password, and type of PIN, IPIN or QPIN. This information has to be passed from whoever collects it to Erlang, to Perl, so that Erlang can decide, this is Citibank, so an appropriate Perl script can be picked up and executed. No, it doesn't. In fact, today it is Java. It's a servlet or an API or some frontend Flex, in fact, which talks to Java and then collects the data. So this is strictly middleware, right? The clients don't see this at all. For RPC, I tried to look at a bunch of solutions. Google's RPC is gRPC. It's a mature solution, excellent Java support, but Erlang support is still in alpha stages. It doesn't really offer much. Cap'n Proto is another protocol. Again, Java support is there, and the support for Erlang is limited to serializing and deserializing. There is no RPC yet. Even though Cap'n Proto is meant for RPC, the Erlang solution doesn't offer RPC just yet. There is Jinterface, which is interesting. I've used it for some simple tasks, but I'm not sure that treating Java as one of the Erlang nodes is the best way to go about this. You can do it, but I don't know if that is the best way to go about it. You can do that. The way I'd like to go about this, I'd like to focus on the other side, and wherever interfaces are concerned, I don't want to write my own library and then test it for bugs and stuff like that. I have my hands full with the other code. So yes, all this is possible. In fact, that would be a fun exercise to do. So this is the gist of it. I have another particular problem to address, but I'll come to that in a second. The good part so far is that the code is very functional, in spite of me writing it. It's very trivial to scale.
Getting a node up, same machine, different machine, it's pretty easy. And compared to Java, compared to what I'm used to, there is a significant reduction in lines of code, which I like. And out of the box, merely using supervision trees gives me a degree of redundancy which is very hard to pull off in Java. Threads and thread pools and executors, let's not even go there. So these are the things which I enjoy. I can worry about the problem rather than worrying about all the peripheral aspects. I mean, these are part of the problem, but I have a tested solution, 20 years old, which gives all this out of the box, and it's tested and validated. I could use more libraries and tooling, because if anybody has used Java, one of the best things about Java is the ecosystem. You may like the language, you may hate it, but the ecosystem has practically everything. Any arbitrary format, any particular thing, a million people would have used it and reported all the bugs, and it would be ready to go. This is one particular area. For instance, you get the information as JSON, and the way in Erlang to go about it, or one way in Erlang to go about it, is to slap this JSON into a record. So I have a record which is the input that has come in. Now I need to validate it. In Java, I can do validation in a reusable fashion. I write a bean and I use annotations which are standard. For instance, a string must be a maximum of 64 characters in length. The string must be supplied. Some of this can be taken care of by RPC, but not everything. JSR 303 is a very nice standard. There are nice implementations, and they are reusable. I saw some examples. Somebody had asked the same question, and of course you can roll your own. You can roll your own using pattern matching, which does this nicely. The problem which I found with pattern matching on a record is that I have to write one for every type of record that I use.
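As a sketch of the annotation style I have in mind, here it is in Python rather than Erlang: one generic validator driven by a declarative per-field spec, instead of one hand-written pattern match per record type. The field names and rules are invented for illustration.

```python
# Invented field specs, playing the role of JSR 303-style annotations.
SPEC = {
    "institution":     {"required": True,  "max_len": 64},
    "username":        {"required": True,  "max_len": 64},
    "credential_type": {"required": False, "max_len": 16},
}

def validate(record, spec):
    """Return a list of error strings; an empty list means the record is valid."""
    errors = []
    for field, rules in spec.items():
        value = record.get(field)
        if value is None:
            if rules.get("required"):
                errors.append(field + ": missing")
        elif len(value) > rules["max_len"]:
            errors.append(field + ": longer than %d" % rules["max_len"])
    return errors
```

The same spec table validates any number of record types, which is exactly the reuse that per-record pattern matching does not give.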
If I use like two or three different types of records, I'm good, I can live with it. But if I want to handle many different kinds of form input, then I have a problem: I will end up writing the same code, or at best code abstracted one level away, for every type of record. I would love to see some kind of annotation equivalent to pull the same thing off. There is an experimental library — I forget its name — which uses some form of a spec to pull this off. It effectively munges the Erlang code before and after to give this effect, but it's not stable, or anywhere near that.

Erlang stack traces drive me crazy. This is the stack trace I got, and the reason for termination is einval. I couldn't figure out what was going on. I'm used to Java stack traces — I thought those were bad, but still. What happened here was that this was a relatively long Perl program, and since the entire Perl program was written as one big blob into the port, it barfed — but it didn't tell me why it barfed. I chopped the file in two, then in three, and then it clicked. It dawned on me that the payload was too big; it was being sent to the external port program, and that's why it was barfing. But that is not evident from the stack trace. It just says einval — what does that mean? So the solution is to break it up into lines and feed individual lines, which works. Maybe I'm missing something, maybe the port program mechanism isn't well documented, maybe the error could have been improved, but that stack trace left me in dire straits.

The other problems are cultural, and over these I don't have much of a handle. Some people say that Erlang is too exotic. I frankly don't understand what that means — we're not talking about APL here, for God's sake. APL, yes, that's exotic; not Erlang. And we have too many people who are used to an imperative way of programming; the imperative folks are the majority at our company.
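The fix described above — feeding the port line by line instead of as one big blob — can be sketched as follows. The module and function names are made up; the idea is just to split before sending rather than handing the whole payload to `port_command/2` at once.

```erlang
%% Sketch: split a large payload into newline-terminated chunks and
%% feed them to a port one at a time, instead of one big write that
%% dies with a bare einval.
-module(port_feed).
-export([lines/1, send/2]).

%% Split a binary blob into newline-terminated iolist chunks.
lines(Blob) ->
    [[L, $\n] || L <- binary:split(Blob, <<"\n">>, [global])].

%% Write each chunk to the port individually.
send(Port, Blob) ->
    lists:foreach(fun(Chunk) -> port_command(Port, Chunk) end,
                  lines(Blob)).
```

Keeping `lines/1` separate from `send/2` also makes the splitting logic testable on its own, without a live port.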
And they're loath to give up their mutable, reassignable variables and mutable data structures and so on. Beans are the ultimate weapon for a lot of people. And in India — I don't know how many of you know this — finding a decent Java developer is a tough call, because it's a seller's market: everybody has too many options, and finding a good developer is difficult. I mean, even if they like you, getting them to join you, getting them impressed or interested in your problem and so on, is very difficult. And Erlang is maybe an order or two of magnitude harder than Java — it's far more difficult to find Erlang developers than Java developers. I don't know how many of you here use Erlang in your day jobs. No, you're prohibited from raising your hand. So, okay, there you go. That's a problem we are facing even with Java, but at least with Java you have courses, you have people in the company who know the stuff, and so on. And honestly, even in colleges — I'm sure that people who are good enough can pull it off, but anyway, that's a different discussion.

Okay, this is a work in progress. What I would like to see going forward: I want to clean up that Erlang-Java RPC. I'm looking for a good solution; I want to get that sorted out. And I want to distribute across machines in a secure way, which means TLS. I haven't really spent much time looking at TLS yet, but there's no guarantee that these nodes will be in the same cluster or in the same data center. Like I said, we are obliged to run certain machines in India, which means we have to talk over secure channels. Auto-scaling is another thing I would like to see. Perl is notoriously CPU-intensive, in our case, so if the CPU goes above, say, 80%, we would like a node to be brought up automatically. Getting a proper database in is still in progress, and I want to get my colleagues' buy-in — so wish me luck.
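For the secure distribution mentioned above, Erlang/OTP does ship TLS-based distribution out of the box: you start the node with the `inet_tls` distribution protocol and point it at an SSL options file. A config-fragment sketch — the node name, file path, and certificate setup are placeholders, and the certificates themselves still have to be provisioned:

```shell
# Sketch: start a distributed node whose inter-node traffic runs over TLS.
# The options file (cert, key, CA, verify settings) is assumed to exist.
erl -proto_dist inet_tls \
    -ssl_dist_optfile /etc/erlang/ssl_dist.conf \
    -sname scraper1
```

All nodes in the cluster must be started the same way; a plain-distribution node cannot talk to a TLS-distribution one.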
So if you have any ideas, then we are hiring. That's it, thank you.