I work with a company called Concur. Concur is a travel and expense management solutions company; it is a 22-year-old company that was recently acquired by SAP. This experience report is about using Clojure in production at Concur. The problem we set out to solve at Concur using Clojure was to prepare the company for scale: due to the acquisition we are about to get roughly ten times more traffic and load, and to handle that kind of scale we need a very decoupled, simple system that can be built in layers, with a certain degree of facility and sophistication, and a sense of harmony in the whole architecture. The scope we had when we started building the system was to rebuild some of the core components that span multiple products. The kinds of things we are writing are authentication, authorization, profile data, identity management, and such. The approach we took was to build microservices with a shared-nothing architecture, and a deployment model that allows multiple versions at the same time — we run n+1 versions so that the different product lines can talk to different versions concurrently. These are some of the standards we use on the platform, which most of you will be aware of: JSON, JSON Web Tokens for communicating claims, X.509 certificates for authentication between hosts and services, and OAuth2 and similar standards for SSO. The story is about one year old — we started building the whole system a year ago, and in the beginning we were a bunch of people who did not really have much Clojure exposure, much less experience with Clojure in production.
The things we used to build the prototype were these: Leiningen for the build system, and Java properties files for configuration, because our ops stack is very attuned to Java tooling — they have an existing system to build properties files, scan them for changes, and all that. So our initial prototype was configured via properties files, and for the web part we used the usual Ring and Compojure libraries. We were using the Jetty web server and clojure.java.jdbc for SQL Server (JDBC) access. We also used Couchbase for some of the caching, and the response times, as you can see, were not very great. But this was just the first step, just the prototype, and then we set out to build something better, something more mature. The challenges we initially faced while taking the project toward production were these. Configuration and initialization of components is a common challenge that many programmers across the community have faced, and certain ways to solve it have emerged; we also came up with our own way of doing it, which we will cover subsequently. Logging was another challenge: the kind of logging the Java community is used to is quite sophisticated, and compared to that, what we were doing in the beginning was almost too simplistic to be true. We fixed that too, and we will see how we fixed all of these things. The other thing we really needed was policy-based resilience: in the age of microservices, where the working assumption is that things will crash, a system must be fault tolerant. We must build systems that are resilient — it's 2015.
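To give a sense of what that prototype web stack looks like, here is a minimal sketch of a Ring + Compojure app served by embedded Jetty. The route and response here are illustrative, not our production code:

```clojure
(ns example.prototype
  (:require [compojure.core :refer [defroutes GET]]
            [ring.adapter.jetty :as jetty]))

;; A Compojure route returning a plain Ring response map
(defroutes app
  (GET "/ping" []
    {:status  200
     :headers {"Content-Type" "application/json"}
     :body    "{\"status\":\"ok\"}"}))

;; Start an embedded Jetty server on port 8080 without blocking
(defn -main []
  (jetty/run-jetty app {:port 8080 :join? false}))
```

This is the conventional Clojure web setup of the time: a handler is just a function from request map to response map, and the adapter turns it into a running server.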
This is something we were grappling with at the time. The other problem we faced was how to tune things for performance, how to find out where the bottlenecks are, and in general how to have a system that is robust in its performance. The way we started was to follow an anti-pattern in Clojure circles, which is to build a framework. The Clojure community is well known for preferring libraries over frameworks, but we found that we needed a binding frame with many sensible defaults and sensible handlers put in, so that we can simply host the app in that shell. So we built a framework — a shell — that does a certain number of things to provide structure to the whole app, and it is made up of the components you can see here. For logging we are using SLF4J and Logback with clojure.tools.logging, and we log everything as JSON, because JSON data is machine parsable and can be fed into different kinds of systems that do further analysis on it. One thing that is not talked about much in the logging space is the Mapped Diagnostic Context (MDC). When you log a statement for an event, there is a certain amount of context you would like to put into the log entry — for example, if you are processing an order and that order fails: what was the order number, what was the context, what was the request ID, and things like that. That feature exists in Logback, and in Log4j 2 as well, but it is not supported natively by clojure.tools.logging.
So, as part of the shell, we built a wrapper on top of clojure.tools.logging that lets you log the context along with the log statement. Coming to the web server: we found that the latencies with our initial setup were not really good, and after trying out the alternatives and doing a certain amount of performance testing, we chose a different server. For serialization we were initially using clojure.data.json, and that turned out not to be performant enough; we tried the other options and found that Cheshire, which is based on the Java Jackson library, is really fast. For JDBC connection pooling, HikariCP claims to be the fastest connection pool in Java today, but for some reason it did not work out for us — we use SQL Server as our database, and in that setup it was not giving us the kind of performance it claims on its site. So we turned back to C3P0, and that worked well for us. Besides this, we also needed something for our resilience needs. There are libraries in this space — Java has Hystrix from Netflix, Scala has its own equivalents, and there is also a way to use Hystrix from Clojure — but the way it works is not very composable. The primary abstraction in that space is a command, and when you make something a command, it is no longer a function; things will not compose the same way once you have made them commands. We needed something more flexible, so we built an equivalent of the Hystrix core in Clojure — in fact, in about a hundred lines of Clojure. I will talk about it a bit more later. And as I was saying in the beginning, app initialization was one of the challenges we were facing.
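The context-logging wrapper mentioned above can be sketched like this — a minimal illustration (not our shell's actual API) that pushes entries into Logback's MDC around a body of code, via SLF4J's `MDC` class, and removes them afterwards:

```clojure
(ns example.logging
  (:require [clojure.tools.logging :as log])
  (:import [org.slf4j MDC]))

(defmacro with-log-context
  "Put the given map of context entries into the MDC, run the body,
  then clean up by removing those entries in a finally block."
  [ctx & body]
  `(let [ctx# ~ctx]
     (try
       (doseq [[k# v#] ctx#] (MDC/put (name k#) (str v#)))
       ~@body
       (finally
         (doseq [[k# _#] ctx#] (MDC/remove (name k#)))))))

;; Usage: every log statement in the body carries the order context
(comment
  (with-log-context {:order-id 1234 :request-id "abc"}
    (log/info "processing order")))
```

With a JSON Logback encoder configured to include MDC fields, each statement inside the body is emitted with the order number and request ID attached.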
The way we solved it in the shell is as follows. The entry point of our code is in Java code, because it needs to initialize the logging system before the rest of the app comes up. The reason is that in Logback, the configuration is put into a logback.xml file, and that file is loaded as soon as any of the logging classes is loaded — if you have an import statement somewhere and that class gets touched, the first thing Logback does is go and load that file. That would mean all of our logging-related configuration has to live in that file, which is not what we want: our configuration stays in a single properties file, and we do not want configuration spread across multiple places. So the approach we took was to first read the properties file, then set up logging by reading the relevant properties and exporting them as system properties, so that those properties can be referenced from the logback.xml file. After that, the Java entry point calls a Clojure function that basically returns a Ring handler. The approach here is to divide the program's execution into two concrete parts: the initialization part and the runtime part. The name of the function that is supposed to do the initialization is mentioned in the configuration file we have; that function is dynamically resolved from its namespace, loaded, and invoked, so that it can return the whole initialized state.
As it sets up all of the components, it resolves the component dependencies and finally returns a Ring handler; after that the shell applies a bunch of Ring middleware based on the configuration properties — whether something is enabled or disabled, and things like that — and finally it starts the server. That is how app initialization is done in the shell. As for our resilience story: as I was saying, we wrote a Clojure equivalent of the Hystrix core, and it comes with the features of circuit breakers and, of course, thread pools, along with the kinds of metrics required to integrate with the Hystrix dashboard. Today we emit the metrics data via server-sent events, and that integrates with the dashboard, so you can use the dashboard with the system to monitor all of the commands and how they are doing. We also did a lot of performance analysis and tuning, and the results were quite interesting — we will go into the details in a bit. We now have sub-millisecond response times, within the data center of course. The reason we had this as a goal from the beginning is that in a microservices model, every hop, every service, adds to your total latency: by the time the user sees a response coming back, the request might have traversed multiple services, and if each service takes 100 milliseconds then the whole thing is really slow. So lowering latency was a really important goal for us from the beginning, and our whole system, including the database access, is tuned for that kind of latency.
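That two-phase startup can be sketched as follows. This is a simplified illustration under my own assumptions — the function names and the `app.init.fn`/`app.port` property keys are hypothetical, not our shell's real ones:

```clojure
(ns example.shell
  (:require [ring.adapter.jetty :as jetty]))

(defn resolve-init-fn
  "Resolve a fully qualified function name such as \"myapp.core/init\",
  loading its namespace first; fail fast if it cannot be found."
  [fqn]
  (let [sym (symbol fqn)]
    (require (symbol (namespace sym)))
    (or (resolve sym)
        (throw (ex-info "Init fn not found" {:fqn fqn})))))

(defn start!
  "Initialization phase: invoke the configured init fn to obtain a
  Ring handler. Runtime phase: wrap middleware and start the server."
  [config wrap-middleware]
  (let [init-fn (resolve-init-fn (get config "app.init.fn"))
        handler (init-fn config)]          ; returns the app's Ring handler
    (jetty/run-jetty (wrap-middleware handler config)
                     {:port  (Long/parseLong (get config "app.port"))
                      :join? false})))
```

The key point is that everything before `run-jetty` is the one-time initialization; everything after is the steady-state runtime.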
When you deal with sub-millisecond response times, you have to measure in nanoseconds, and you have to optimize wherever you see a significant number of microseconds, because the threshold is really, really low. The other thing we did for performance testing and tuning was to simulate a lot of load, because with micro-benchmarking it is very easy to get good numbers, but when you really put things into production under load, the system behaves differently: resources are being shared by multiple threads, and the whole load behavior is very, very different. So we did a lot of load testing, stressing the system for long durations. Today, the external components our system interacts with are an ELK cluster, where we analyze the logs, and AppDynamics, which we integrate with so that we can monitor performance. We have been using the YourKit profiler in development, and we leave its agent on in production so that we can sometimes check remotely how things are going there. For load balancing we have internal load balancers — we use F5s — and they carry out health checks using endpoints we expose for that purpose. And as I was saying before, we talk to a SQL Server database and to Couchbase for caching. After doing all of this, we learned that Clojure was giving us a certain degree of advantage over some of the other technologies currently in use inside Concur. The first advantage we saw is that in Clojure, values are first class.
What I mean is that in a typical OO language, to read or write any kind of data you always need a method. In Clojure it is not that way: with literals and expressions, values are truly first class, and that removes a lot of boilerplate — we will see in the subsequent slides how that plays out. The other thing we found to be a real plus in Clojure is the epochal time model that it provides via immutability and managed references. In Clojure, if at time t1 a value is x, and at time t2 the value is changed to y, then both x and y can live at the same time, because the data is not mutated — a fresh copy is made via structural sharing. That means whatever data processing we do is very concurrency-friendly: you can kick off processing in multiple threads while the data is being changed, without worrying that the data will change underneath you while the processing is going on. There is a sense of safety there. The other thing we found really useful is functional programming: functions are first class, we can pass functions around, we have higher-order functions, and now transducers in Clojure 1.7, which really make things very smooth and performant. The next thing we found really useful was working at the REPL, because the REPL is a great tool for debugging, for testing your little changes and little thoughts, and for conducting small experiments. And while data is immutable in Clojure, fortunately side effects are allowed, because Clojure is not a purely functional language. The benefit is that side effects — especially I/O — are quite straightforward. That comes with certain caveats, but for the most part it is straightforward and easy.
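Here is a small illustration of that epochal time model (the data is made up): "updating" a persistent map yields a new value via structural sharing, while the old value remains intact, so both can be observed safely from different threads.

```clojure
;; Two points in time, two values -- both valid simultaneously
(def profile-v1 {:name "Ada" :roles [:user]})

;; conj on the nested vector produces a new map; profile-v1 is untouched
(def profile-v2 (update profile-v1 :roles conj :admin))

;; profile-v1 => {:name "Ada", :roles [:user]}
;; profile-v2 => {:name "Ada", :roles [:user :admin]}
```

A thread holding `profile-v1` can keep processing it without locks, regardless of how many "later" versions other threads derive from it.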
Being on the JVM is a huge win, because through Java interop we can tap into a lot of Java tooling — the JVM's maturity in terms of performance, robustness, and support — and a lot of Java libraries we may want to use in various scenarios. One of the very important benefits of Clojure is that it is a very, very small language. You can probably learn it within an afternoon, and after a week of tinkering with Clojure you can probably write some interesting Clojure programs. The language being small, easy to learn, and easy to reason about has a huge advantage: people can get on board quite rapidly. Mastering Clojure is a different kind of story, because you really need to learn the paradigm well, but getting started and becoming productive comes very rapidly. Clojure is also a homoiconic language, where macros can manipulate the syntax and change things at compile time, and that gives you a lot of power. Macros are advised to be used rarely, only in the cases where you really need them — but when you do need them, nothing else can help in the same way. We will see examples of some of these things, starting with first-class values. The snippet you see here is a small fraction of our production code. The profile data is really huge — it has lots of attributes — and this is just a small part of that whole data set. Imagine doing this in something like Java, where your code would be full of put statements — put this, put this, put this — and would become very, very clunky. But since values are first class and have a literal syntax, expressing this kind of thing is very, very concise. And as I was saying, immutability has its own features and benefits, and functional programming brings a lot of joys of its own.
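Since the actual snippet lives on the slide, here is an illustrative fragment — every name and attribute below is made up — showing how such nested profile data reads as plain literals, with none of the put-statement boilerplate:

```clojure
(def profile
  {:user-id    "u-1001"
   :name       {:given "Jane" :family "Doe"}
   :emails     ["jane@example.com"]
   :locale     "en-US"
   :employment {:company "Example Corp"
                :roles   [:traveler :approver]}})

;; Values compose with ordinary functions -- no getters or setters
(get-in profile [:name :given]) ;; => "Jane"
```

The whole structure is data: it can be printed, diffed, serialized to JSON, or passed to any function without a class definition in sight.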
In this example, we take the metric sources and a JSON writer function that extracts the data from them, so that it can be formatted for the server-sent events. You can see that it is just a couple of lines of code that does it, and it is quite smooth and fluent. About side effects, you can see one more example here where we do some I/O. We are doing multiple things: we talk to the database, then we talk to the cache, and under a certain error condition we throw an exception. Java interop also helps us a lot to get at the JVM and use Java libraries from Clojure code; here we are using a function to compute a cache entry via a Java caching API. You can see that Java interop really makes it easy to plug the gaps where Clojure libraries do not exist. And this is an example of a macro, where we pass some contextual data and a body of code, which is used to log with that context. This macro is used by other macros, which can pass an entire block of code along with the context. The body of code executes while the context is set: before the body is evaluated, you can see that the context is being set, and finally the macro simply unwinds all of the changes. Moving on to our development practices: we have no single mandated editor of choice — some people use Emacs, others use Eclipse or vi. And most of our things are configurable via the properties file.
We deliberately leave lots of things configurable, because choices should be flexible. Another practice we have is that all of the tests and the launch of the app must work from the command line. If something works only from within Emacs or only from within Eclipse, that is problematic; we make sure things work from the command line so that everyone can participate in the same way. We make sure we have both unit and integration tests. We are still using clojure.test for writing our tests, and our integration tests are also written in Python, because we need to check the interoperability of our services with other languages — and the other language we happened to choose for testing is Python in our case. We are using Docker in development, QA, and production; this is a choice we made from the start. And we use a Maven repository that is internal to us — we use Artifactory as our Maven repository to store private jars. Now, some of the libraries that we wrote in fact came out of this effort, factored out from the shell framework we were discussing. The first of those libraries is called Keypin, and it is for configuration lookup. When your service has more than 15, or 50, or 100 configuration properties, it is really cumbersome to handle them by code: you not only read the configuration, you also have to parse it, because property values are strings; you have to validate whether those values are sane; and in certain cases, when you do not specify things, you want defaults to kick in. To do these things repeatedly, you would want an abstraction that can deal with them declaratively.
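The declarative idea can be sketched in plain Clojure. This is a toy version of what such a configuration library does — not the library's actual API — where each key bundles a lookup name, a parser, a validator, and an optional default, and fails fast with a clear message:

```clojure
(defn config-key
  "Return a lookup fn for property `k` that parses, validates, and
  applies a default, throwing immediately on missing/invalid values."
  [k parse valid? & [default]]
  (fn [config]
    (let [raw (get config k default)]
      (when (nil? raw)
        (throw (ex-info (str "Missing config key: " k) {:key k})))
      (let [v (if (string? raw) (parse raw) raw)]
        (when-not (valid? v)
          (throw (ex-info (str "Invalid value for " k) {:key k :value v})))
        v))))

;; Each key is itself a function of the config map
(def app-port (config-key "app.port" #(Long/parseLong %) pos?))

(app-port {"app.port" "3000"}) ;; => 3000
(app-port {})                  ;; throws ex-info: Missing config key
```

Defining keys once like this removes the scattered parse-and-validate code from every call site.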
It does certain things: when something fails, it fails fast. If you do not specify a required property and it happens to be nil, it would otherwise blow up some time later, somewhere else, and you would have no clue why that took place — so failing fast is very, very important. To see an example of how it looks: in the properties file we have, say, an entry that says the app's initialization function is so-and-so. We define a bunch of such property keys: for each key we apply a predicate to check whether the value is valid, and we specify how to parse the value. In one go you specify multiple keys, and when you want to use them, you do something like letval, a macro that is part of the library, which destructures the values in a map-like form. And once a key is defined, you can use it like a function, because each key is a function of the configuration map. This streamlined a lot of our code, because our code no longer contains random validation checks and parsing of configuration scattered around; it adds a certain degree of fluency to our code flow. The next library I would like to talk about is called Stringer. The reason this library came about: in Clojure, for string concatenation there is a function called str in the clojure.core namespace, and the way str works with multiple arguments is that it runs a loop where it appends each value.
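The trick behind closing that gap can be illustrated with a toy macro — not the library's actual API — that unrolls the concatenation into StringBuilder `.append` calls at compile time, avoiding the runtime loop and per-argument function calls in `clojure.core/str`:

```clojure
(defmacro fast-str
  "Concatenate the arguments by unrolling them into StringBuilder
  .append calls at macro-expansion time."
  [& args]
  `(let [sb# (StringBuilder.)]
     ~@(for [a args] `(.append sb# (str ~a)))
     (.toString sb#)))

(fast-str "order " 42 " failed: " :timeout)
;; => "order 42 failed: :timeout"
```

Because the expansion is a straight-line sequence of appends into one buffer, it tends to compile down to something much closer to hand-written Java concatenation.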
All of that looping and those function calls add a certain degree of latency, which is why naive string concatenation in Java is much faster than in Clojure in certain cases. This library tries to close that gap. The latencies we got across different kinds of usage are shown here — in such cases you will find the difference is a lot. This was done just to help achieve the goal of sub-millisecond latency overall. Similarly, the other library we wrote was for request routing. We originally used Compojure; then we wrote a library called Calfpath, and we profiled our code. We had found that Compojure was taking a certain amount of time, so we wrote this library to make routing faster, and the kind of difference we got is shown here: on the left is Compojure, and the one at the end, in yellow, is Calfpath. The amount of time spent in routing on each request was quite a bit, and Calfpath is in our production build. The next library we wrote is called Asphalt. This is for JDBC access — we use it with SQL Server, but the library is not meant only for SQL Server; it works with all JDBC data sources. The main thing we did with Asphalt is that it has a very functional design: at each level, whatever activity you want to carry out — for example, accessing the result set or setting the parameters — is extensible. You can replace the defaults and pass in your own version of those functions instead of what comes built in.
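The typed-parameter idea can be sketched with plain JDBC interop. This is an illustration of the concept, not the library's API: once type annotations have been parsed out of the SQL, the resulting metadata drives exact setter calls instead of a blanket `setObject`:

```clojure
(defn set-param!
  "Set a prepared-statement parameter using the exact JDBC setter
  for the annotated type, falling back to setObject when unknown."
  [^java.sql.PreparedStatement ps idx param-type value]
  (case param-type
    :int    (.setInt    ps idx (int value))
    :long   (.setLong   ps idx (long value))
    :string (.setString ps idx ^String value)
    :date   (.setDate   ps idx ^java.sql.Date value)
    (.setObject ps idx value)))

;; e.g. for SQL annotated (hypothetically) as:
;;   SELECT name FROM emp WHERE age > ^int $age
;; the parsed metadata [:int] leads to (.setInt ps 1 30) at run time.
```

Using the exact setters avoids reflective boxing on the driver side and the Object-casting noise on the application side.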
The other thing we did here: in clojure.java.jdbc, all of the parameters and the result column values are treated as plain objects — the code underneath calls setObject and getObject. Whereas when you write Java code against JDBC directly, you would set the exact type: setInt, setLong, setDate, and so on. Treating everything as Object creates a certain impedance whenever you interact with the data source, which is why, for example, you would find code casting values to integers wherever integers are expected, because the JDBC driver would have simply returned an Object. The solution we have in Asphalt is to put the types inside the SQL itself. The SQL is parsed by the library, all of the type annotations that you can see on the screen are read, and a kind of metadata is formed that is transparently applied to the prepared statements whenever they are run. This adds to the correctness and efficiency of the JDBC code. After this, there are two performance libraries I want to talk about. One is called Citius, and it is for comparative benchmarking. Some of you might know that Criterium is the leading micro-benchmarking library in Clojure. Sometimes it is very hard to see how two different benchmarks compare, and to see that, you want to put them side by side. As you can see here, when you put them side by side, it is much, much easier to scan the data and see what the difference is. You can add as many columns as you want, though that only works if your screen is wide enough. Another thing Citius addresses is that micro-benchmarking by itself is not a complete tool for profiling.
The reason is that when you run a micro-benchmark, it has access to all of the CPU cache, all of the memory bandwidth, and the other resources. But in a real-life scenario your code rarely works like that: your code runs in multiple threads. When you run web services, you are probably running 32 threads for the web server — maybe more than that, maybe 128 threads. In the real-life scenario, all of your resources are being continuously contended. So what Citius does is let you run the comparative benchmark with a certain degree of concurrency. That means that if you specify that your benchmark should run at, say, concurrency level 40, it spins up 40 threads and runs the tests in all 40 of them. Suddenly you may see that code that performed better in a normal benchmark at concurrency level one performs worse than its counterpart, because now your resources are being contended: your CPU cache lines are contended by multiple threads, and your memory bandwidth is a finite number on every system. So code that was faster in a single-threaded benchmark may become slower when multi-threaded — it shows you the true cost. The other library we wrote is called Espejito. It finds the latency across the layers of your code. Whenever you are profiling and someone says a request-response takes about 100 milliseconds, you do not know exactly where it spends so much of that time. A typical profiler will tell you the places taking CPU time, but not the I/O time. To solve that, we wrote this small library, which gives you this kind of result: throughout the call stack, wherever time is being spent, it gives you that distribution — whether the whole request-response took, say, 980 microseconds there, and where it spent what kind of time.
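The contended-benchmark idea described above can be sketched with a toy harness — an illustration under my own naming, not the library's API — that runs a task at a given concurrency level and reports total wall-clock time:

```clojure
(defn bench-concurrent
  "Run `f` `iters` times in each of `n-threads` threads and return
  total wall-clock milliseconds, exposing cost under contention."
  [n-threads iters f]
  (let [start   (System/nanoTime)
        workers (doall (for [_ (range n-threads)]
                         (future (dotimes [_ iters] (f)))))]
    (run! deref workers)                       ; wait for all threads
    (/ (- (System/nanoTime) start) 1e6)))

;; Compare the same task single-threaded vs. 40-way concurrent:
(comment
  (bench-concurrent 1  100000 #(reduce + (range 100)))
  (bench-concurrent 40 100000 #(reduce + (range 100))))
```

Running the same pair of candidate implementations through such a harness at level 1 and at level 40 is exactly where cache-line and memory-bandwidth contention starts to reorder the winners.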
For example, this cache call here takes 14 microseconds. From output like this, you may want to drill down further — say, into the cache access at this point — and you can drill down by adding more and more measure points. More of these things are discussed in an upcoming book that I wrote, which is being published by the end of this month. From doing all of this, the takeaway we had is that Clojure as a language is very, very well designed and very well thought out; it makes you very productive. The library ecosystem is actually very nice, but it might be slightly rough around the edges, because we did not find everything that we wanted — though a lot of that may be particular to the kind of use cases we have. And Java interop is not a panacea, because Java interop may not be the right answer for all of the needs we have. Regarding adoption: when we took Clojure to other stakeholders, we found that other functional programming languages are not as much of a challenge as the status quo is. The status quo is the elephant in the room — the existing mindset, the existing platform, the investment in those things. And the best answer to that resistance is demonstration, because people may see a lot of challenges and sit out for a bit, but after some time, when you show them good results, that becomes really convincing. And of course, being on the JVM is a huge thing, because Java is quite acceptable. With that, that's all I have. I guess I have overshot the time, so I will take questions now and I will also be around in the hall. I'll take the first question first. So your question is: how did we choose Clojure, and how did we go about convincing the stakeholders?
We had been looking for a language suited for 2014 — a language that would serve our needs, make us productive, and have a good runtime. Ours is a new group that was hired last year. We in fact started out with Scala, because Scala happened to be more popular, but after trying out Scala, the people felt it was more complex than what we really needed. Then they tried Clojure, found it to be more accessible, and decided to go with it, and we started convincing people: we showed prototypes and demos — this is doable, this is possible, this addresses the concerns, and this is what the team wants to achieve for our development. It took a bit of to and fro, but then it happened. We did not really have prior Clojure experience, but the team developed it. On the Docker question: we use Docker for development, and we have written quite a few scripts, because Docker by default does not give you the kind of orchestration you need to do all of these things, so we automated that. There was a lot of dancing around that — we had multiple containers talking to shared volumes and so on. The next question is: why did we decide to build the Hystrix-like functionality in Clojure rather than using Hystrix directly, given that Clojure has Java interop? In Hystrix, the primary abstraction is a command, and when you make something a command, it is no longer a function — it is a command instance — and it does not remain composable in the same way. You cannot do the kind of instrumentation you want to do on it the way you can on a function. We wanted the flexibility of having functions that can be composed and instrumented, so that we can do different kinds of things with them.
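To show why plain functions matter here, consider a toy circuit breaker — an illustration, not our production code — that wraps a function and returns a function, so the result still composes with other wrappers and can be instrumented further:

```clojure
(defn with-circuit-breaker
  "Wrap `f` so that after `threshold` consecutive failures the circuit
  opens and calls fail fast. (A real implementation would also add a
  reset timeout and half-open probing.)"
  [threshold f]
  (let [failures (atom 0)]
    (fn [& args]
      (when (>= @failures threshold)
        (throw (ex-info "Circuit open" {:failures @failures})))
      (try
        (let [result (apply f args)]
          (reset! failures 0)              ; success closes the circuit
          result)
        (catch Exception e
          (swap! failures inc)             ; count consecutive failures
          (throw e))))))

;; Still just a function, so it stacks with other policy wrappers:
(comment
  ;; fetch-profile is a hypothetical I/O function
  (def guarded-fetch (with-circuit-breaker 5 fetch-profile))
  (guarded-fetch "user-42"))
```

Contrast this with a command object: the wrapped result above can be passed to `map`, composed with a retry or metrics wrapper, or swapped at the REPL, all without changing its callers.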
We in fact wanted to be very flexible about what kinds of resilience policies we apply to which functionality — we call this policy-based resilience — and to apply policies you really need a lot of composability and flexibility, which we did not find in the command abstraction. The moment something becomes a command, it is no longer a function; it is not composable in the same way a function is, and therefore it is harder to instrument. Next question: in the Asphalt library, will the typing feature also support JSON handling, where you can drill down into the different data elements and type those attributes? Currently, the types supported are the ones that map to the types supported by JDBC, and JDBC does not support JSON or drilling down into it. So the answer is no for a start; if there turns out to be a way to do that, it may be possible to extend Asphalt to do it as well. The limitation I am pointing at is that JDBC by itself cannot do it, and therefore Asphalt cannot do it either, because it is limited by JDBC. That's all — thank you for having me; I am happy to discuss more afterwards.