Thanks to everybody that helped pull together and put on the Conj. It definitely shows this is an extremely fun and engaging event. And it's an honor to speak in front of a crowd and a community that I admire and respect so much. There are a lot of great faces here that I love seeing every year. So without further ado, I am Paul DeGrandis, and I'm going to be talking about production ClojureScript, really my lessons learned from building and shipping production ClojureScript applications. I want to motivate the beginning of the talk with a question: what makes Clojure great? This very handsome gentleman right there, stunning, said that Clojure manages risk better than any other technology. Indeed, we see that Clojure is really built to tackle the systems, and the complexity of the systems, that we face today. Clojure's opinions as a language, its features, its culture, its community: they are all driven toward managing and minimizing risk and complexity. And it's Clojure's holy trinity of simplicity, power, and focus that are really the driving factors behind that. We all know this, of course. But this question is interesting in the light of ClojureScript. Do the advantages that we find in Clojure translate to the places where ClojureScript lives? Nobody has really asked that question. Does ClojureScript solve the same incidental complexities that Clojure does? Is ClojureScript actually a terrible idea that you can't really build stuff with? We haven't seen any real data or real stories about this. So that's what we're going to do. We're going to come back to that question constantly throughout the talk. We're going to look at a case study and some metrics around that case study. We'll look at some compelling features, or ways you can use ClojureScript, as well as the risks of adopting ClojureScript in a production system, things that I got bit by.
And then we'll go through a validation process: how I validated whether ClojureScript was appropriate for my team and my project, and how you can evaluate ClojureScript and see if it fits your team or your project or your company. And then we'll pull out and look at the different options, strategies, and architecture decisions that you can make because of Clojure and ClojureScript that you can't really make with other systems, and we'll drill in on those architectures and look at things like code organization, software design patterns, and other concerns in the small. So that's pretty much what we're going to do. Just on the surface, ClojureScript has some really compelling features: you get proper namespaces, and you don't have to deal with the idiosyncrasies of JavaScript. Those two things alone are pretty compelling. And then there are these meta reasons to adopt ClojureScript. People say JavaScript has reach. What does that really mean, though, for ClojureScript? Well, here's the ThoughtWorks Technology Radar from October of this year, just a month ago. There you see it's advised you adopt Clojure, which is pretty cool, because now we're being told that we can get into the enterprise, and I guess Neal Ford is probably excited that he generated this exact graphic. And then here are all the things that are related to JavaScript. Just because ClojureScript exists, we can inject Clojure into the conversation around all of those pieces on that technology radar. So we went from one little blip to cascading across the entire technology radar, and we can go ahead and start dominating all of those areas if we want to. But I want to talk about two other features I really like about ClojureScript. They're not super obvious and they're not particularly sexy, but they make real impact. Clojure is a data format, right? Using edn, the serializable chunk of Clojure, as a data format has baseline advantages over JSON.
You can capture things like keywords and sets, things that you just can't capture in JSON. But with reader literals, the notation can be extended into your problem domain. You can capture the real intent of the data in your problem domain because you're using Clojure as a data format. And that means a lot, right? When you don't have that, what's the outcome? What do you have to do? You end up encoding all of these special meanings inside of JSON, and then you go through what I call the double-parse dance: you take a JSON string and you parse it, and then you have all this logic and machinery to parse it again to pull out all that special meaning. It's total nonsense, and it's just incidental complexity that we've become accustomed to. But with reader literals, that whole problem goes away. It doesn't even exist anymore, and that has pretty serious impact. Closely related to this is the reader, and closely related to the reader is the printer. This is one of the reasons why homoiconic languages rock so hard, right? I can serialize anything and ship it around. I can change the relationship of where I want to put data and where I want to interpret that data. So what does a Clojure function take in? Clojure data. And what does it return? Clojure data. And that Clojure function itself, well, that's just Clojure data. So having ClojureScript in all of these places that JavaScript has reached means I can start shifting those relationships around however I need to solve my problem. This leads to very cooperative interop between Clojure and everywhere that JavaScript has reached, because of ClojureScript. So when you combine these two things with proper namespaces and macros, you render a system like Meteor completely obsolete, right? Here's a company that had to raise over $11 million to approximate the technology that Clojure and ClojureScript are giving you out of the box.
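To make the reader-literal point concrete, here is a small sketch in Clojure. The `query/range` tag and its shape are hypothetical, invented for illustration; `#inst` and `clojure.edn` are built in:

```clojure
(require '[clojure.edn :as edn])

;; A hypothetical domain notation: a date-ranged query. In JSON you'd smuggle
;; this through strings and re-parse it by hand (the double-parse dance);
;; with edn you extend the reader with a tagged literal instead.
(def readers
  {'query/range (fn [[from to]] {:from from :to to})})

(edn/read-string {:readers readers}
                 "{:user :alice, :range #query/range [2012 2013]}")
;; => {:user :alice, :range {:from 2012, :to 2013}}
```

One parse, and the domain meaning arrives already reified as data; no second layer of decoding machinery is needed.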
If one of your goals in developing a production system is ensuring that you can always outmaneuver your competition, well, this is a pretty good start, right? This is flexible; you can apply it however you need to. But more interestingly, we're seeing that some of the advantages that we find in Clojure are translating to ClojureScript. We're solving similar incidental complexities, but we're also solving a whole new set: tossing problem-domain data back and forth has real representation. So if complexity breeds complexity, well, ClojureScript is a good way to start removing some of it. Let's see what kind of impact that has on a real project. The system here is the exact same system across both of these stacks. It's a very simple web interface on top of a search application. There's role-based authorization going on in both of these stacks. The same team built both of the stacks, and there's great library support, obviously, for both stacks. The first stack is written in Python and JavaScript. It's deployed on top of Nginx acting as a reverse proxy, shooting out to a bunch of application instances. The server side alone is 17 Python classes holding 56 functions. It took one month to get to a feature-complete, usable prototype, and it took three months to ship a production system. It was later ported over to Clojure and ClojureScript, deployed on top of Jetty. The server side alone is four namespaces and nine functions. If we throw in the ClojureScript just for the sake of it, that only goes up to eight namespaces and 21 total functions. It took one week to get to a feature-complete, usable prototype that we were iterating on, and one month to ship a production system. Now, this example is not huge, but it is the baseline example for a web system you would build. It has roles, it has user accounts, it does some action like searching and showing search results.
So this is definitely a great candidate to try ClojureScript out on. Interestingly enough, the average cyclomatic complexity per function is consistently lower on the Clojure and ClojureScript side. And at some points, in the heaviest piece of business logic, specifically around roles, it's almost an order of magnitude lower on the Clojure and ClojureScript side. So, serious impact there. We're seeing that by any measure of complexity you take (how long it takes to grow the system, how big the system is, the maintainability of the system, the branching complexity of the system), it's consistently simpler on the Clojure and ClojureScript side. That has obvious benefits if you're shipping production systems, but there are also some not-so-obvious benefits to this effect. A recent paper called Software Needs Seatbelts and Airbags wanted to investigate whether Capers Jones's case study still held, whether we're still seeing the same number of defects in systems. And despite our best advances in unit testing and static analysis, at least for Java and C++ projects, the defect removal efficiency before delivery is still 85%. So we're going into production with 15% of the defects still in our systems. The total cost of repairing the remaining 15% is, on average, approximately one-third of your total budget and one-third of your total schedule. The study found that Capers Jones's numbers are still pretty accurate. And the study goes on to conclude that the major root cause of the remaining 15% is incidental complexity. For C++, that's manual memory management. It was a great study, and what we're seeing is real numbers behind the intuition and the insight of the Out of the Tar Pit paper. This has real impact for production systems. And we're not going to change that 15% anytime soon; that's going to involve process change and tooling change and approach change.
But we can make it 15% of 8 total defects instead of something like 15% of 80 total defects. These are convincing reports, and the paper is very good; I suggest you dig it up. But the benefits have to be weighed against the risks, and there are risks to adopting ClojureScript. ClojureScript is young; it's just over a year old by a couple of months. The development cycle for ClojureScript is extremely fast and really, really short. That's because David Nolen, I don't think he sleeps. And I do my best to keep up as well. So if you work for an organization, or you're on a development team, where you need to lock in that dependency and it can't change, maybe ClojureScript is just not the fit for you, because you're going to want to keep updating. The changes are important. You'll find that as you use ClojureScript and push it to its limits, there will be missing protocols or missing library support. That's the most common problem right now: you go to use something and there's just no protocol implementation for whatever you're trying to use. Metadata tends to get a little tricky in certain parts. But that's super easy to fix, right? Because the decision was made to implement the majority of ClojureScript in protocols, you can fix that problem directly in your project if you need to. So there are ways around this. Highlighted at another talk were the exceptions. Now, I don't think the exceptions are that bad, but the caveat is you really do need JavaScript knowledge to understand what's going wrong when you see an exception. There's no way around that right now, and I don't think there will be. If you really want to understand the exception, you have to really understand what's going on with the JavaScript. And closely tied to this, the debugging story for ClojureScript is fairly weak or nonexistent, depending on how you interpret that.
But the best combination that I've found is the browser REPL, to hold some stuff together and investigate, plus the debugging tools in the browser, so Firebug in Firefox or the Developer Tools in Chrome. Again, though, in order to really debug your ClojureScript application, you need to have knowledge of JavaScript and the tooling around it. You need to be aware of that. The interactive development that you have come to love in Clojure does not translate 100% over to the ClojureScript environment. The browser REPL is pretty cool, but it's not going to be the experience that you are probably expecting. This is not really a problem, in my opinion. When I develop a ClojureScript application, I would say about 70% of that application is actually written in Clojure using my Clojure environment. Only when I'm doing something browser-specific or Node-specific do I switch my environment over to be ClojureScript-specific. And I think that works out great for me. You'll also find some oddities. They're not necessarily problems, but there are oddities in developing ClojureScript applications. For instance, the Google Closure Library is written in two very distinct styles. It's not really a problem once you realize that there are two styles to the library, but there are two styles, and so you might get caught up in figuring out: am I dealing with a chunk of the library in style one or in style two? So those are the risks that I really think exist in adopting ClojureScript, and you need to evaluate these trade-offs against the benefits. So you've got to go through some sort of validation phase, right? You just have to ask yourself a couple of questions, and the questions are pretty simple. Like, whew, I don't know why that got dark, but is this good for the company? No, don't ask that question. What are the quality attributes that you're really shooting for, right?
Is there some expectation for the latency of the system, how long it takes to get things back and forth, or the throughput? Or encryption: does something need to be encrypted, and if so, how strong and when? Does it need role-based authorization or not? And then what about adaptability? What are the expectations for how often you need to change or modify that system? Or the evolution of the system: how long does this system have to run for? Is it only going to be around for a year? Is it going to be around for 10 years? Those things matter, especially if you're going to be adopting a technology that's going to be growing as you're adopting it. And what about the system constraints, the actual functional requirements of your system? What types of data do you have to handle? What is the data specific to your problem domain that you have to model? What interface are you expected to build? Is there an API involved or not? What about deployment? What's the story around deployment? Are you limited to only certain types of technologies in your organization? And then just take a huge step back, or fall into a hammock of your choice if you own one, and ask yourself: what's your biggest problem? What's between you and shipping something today, right now? Really capture that. Write it down, and write down where you're wasting most of your time and most of your money, and where you're making most of your money. If money is your measure of success (for some people it might not be), where are you spending your resources? And where are the areas where you're uncertain about stuff, or it seems fuzzy, or you're a little scared of it, whatever? Write down where the risks are, because you don't want to double facepalm, and you don't want to let Picard down like that. And Riker, poor Riker.
And then just ask yourself why, right? Why would I use ClojureScript to solve this problem? What does it give me? Why not use technology X or something else? Find that trade-off, find that balance. If you have a hard time answering why for ClojureScript, flip it around and answer why not; use the power of inversion and see if that helps you find those answers. You need to ask yourself why three times in a row. This trick works great, so I'm sharing it with all of you. Ask yourself why three times, and the answer to that third why is the real answer that you're looking for, and it should loop back to a quality constraint. I'll give you a real example. Why ClojureScript? Well, I need namespaces. Okay, why do you need namespaces? Well, namespaces help with organizing, managing, and sharing code. It's also a lot easier for newcomers to come along, see how the project is connected, and work through the project. Okay, why is that important? Well, our biggest pain point is maintaining projects. We don't have a problem shipping projects; we can ship projects, but our projects have to last a long time. They live for over 10 years, and we're constantly rotating teams on and off projects, so we really need to optimize for ramping up on a project. Perfect, those are all quality constraints, right? Those are all quality attributes you want to shoot for. This is a real reason for adopting ClojureScript. If you don't get to that third why, it's not a real reason; start on something else. So do that, right? And that should sound pretty familiar; it's just a slightly different interpretation of hammock-driven development. You're just asking yourself these questions to really validate your understanding and your learning. Let's see how this gets applied in a real system. Let's take a hypothetical architecture for that case study. So here's a really simple box-and-line architecture.
The architects in the room, if there are any, are probably crying that this is not a standard diagram or whatever, but it gets the job done, right? So a request comes in, it hits this web server acting as a reverse proxy, which shoots out to a bunch of application instances. Those instances are fetching data from a DB or Datomic or whatever; it doesn't really matter. Then they're mashing it together with a template to build responses, and those responses go back. Job done, responses are hitting clients. Except that the traffic for this site is extremely spiky, the spikes are not predictable, and at the top of these spikes of traffic we can't handle the read load. It's probably due to this unnecessary dynamism: the data's not changing that much and the templates aren't changing that much, but we constantly have to recalculate all of these templates for every response. The biggest pain point, the biggest problem, is this: the designers that we have generate this beautiful HTML and this beautiful CSS, and then it gets shipped to us developers, and we have to spend all of this time churning through it to turn it into a template and lace all this template logic through it before we can actually use it in a production system. So it's a big pain. We waste most of our time on constant SEO updates from business operations. Business operations are always banging on our door with changes for SEO purposes. That's also where we're making most of our money; we make our money from organic search leads coming into the site. A year ago for this hypothetical site, mobile traffic made up 12% of all traffic on the site, and currently it makes up 25% of total traffic. So mobile is growing extremely fast. It's the fastest-growing segment, but we're not really doing anything about it. These are our problems. These are the things that we really need to solve. So what do you do?
What's the first thing that we do? Any web developer that sees this problem just says, oh, well, let's just cache. Let's just handle the read problem for right now. We'll take all the static resources and shove them into a CDN; we probably should have done that to start with anyway. We'll go ahead and cache some stuff. That's really helping, right? We can handle all of this read load now, but we still have that design-to-template dance. We haven't done anything to approach that problem. And there are still constant SEO updates. Mobile is growing faster; we're not doing anything about it. Even though there's a cache here, that unnecessary-dynamism problem is still there. We're just band-aiding it with a cache, just masking it; we're not really getting around it. And worse, we've introduced a new problem, and it's a pretty serious one: serious cognitive load. Every single time, a developer has to stop and think: is this defect happening because of the cache or not? Does this new feature need to invalidate something in the cache or not? So let's take a step back. Let's totally scrap this. This is nonsense; we're doing things that don't even make sense. Let's take the lessons from our community. What if we just assumed immutability, right? This is like Rich's number one. So what if we assumed immutability? What if we assumed the entire site was completely static? During the baking phase, we pull in all of this SEO content from a third-party data store. Since we're on Clojure now, let's just assume Clojure and ClojureScript. Why don't we use something like Enlive? Then we can take the raw HTML and the raw CSS and use them directly from the designers. So that goes directly into production. Since everything is static, we can shove it into a CDN. We can change the DNS record for www to point to the CDN. So now almost all of the site is being served directly out of the CDN.
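The baking step described above can be sketched with Enlive. This is a minimal, hypothetical version: the `index.html` resource, the `#seo-blurb` selector, and the page map shape are all invented for illustration; `deftemplate` and the selector/transform pairs are Enlive's real API:

```clojure
(ns baker.core
  (:require [net.cgrand.enlive-html :as html]))

;; Take the designers' raw HTML as-is from the classpath and only splice in
;; the data that varies, e.g. SEO copy pulled from a third-party store.
(html/deftemplate index-page "index.html"
  [{:keys [title seo-copy]}]
  [:title]         (html/content title)
  [:div#seo-blurb] (html/content seo-copy))

;; The bake: render every page once to a string, write it to disk, and push
;; the output directory to the CDN. No templates are evaluated per request.
(defn bake! [pages]
  (doseq [{:keys [path] :as page} pages]
    (spit path (apply str (index-page page)))))
```

Because Enlive works on plain HTML with selectors, the designers' files go into production untouched, which is exactly what kills the design-to-template dance.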
But we've introduced a new problem, and that's user experience. If you saw a site that was static and littered with SEO-specific content, and you couldn't really do much with it, and it didn't feel alive, you would know. It would be a terrible experience. So how do we fix that problem? How do we fix the parts that actually do have to be dynamic? Well, we can use ClojureScript. We'll use Enfocus in ClojureScript and reuse all of those pieces of HTML that are still living in the CDN, and then we can make parts of our site dynamic. So this is pretty cool. So far we're fixing a lot of these problems that we have. And during the baking phase, we can also optimize for mobile; we'll just go ahead and do that while we bake. We fix that list. But we can do a little bit better. Let's pull things apart, right? Here's lesson number two from the community: pull things apart. What if we just pull that search server right on out? We can get rid of that, and the results will go directly to the client, and we'll go ahead and reuse all of our dynamic ClojureScript pieces and incorporate all of that. So this is all Clojure data now, right? My server is only going to serve up Clojure data for the dynamic stuff. The search server is only going to serve up Clojure data to be incorporated. All of the problems go away because I'm choosing immutability for my site, and we fix all these problems. But we get a lot more, too. This new design can handle more read load. It can handle more write load. It was cheaper to operate in the hypothetical situation, and it was much quicker to modify, so we could incorporate changes a lot faster. The outcome is clear: question your habits. Question what you're actually doing. Investigate history. Assume immutability if that helps, and apply it; whatever that means for your problem, apply it to the context. Pull things apart. Question what can be done here and what can be done there.
Suddenly we're playing around with the relationship of the dynamic logic living directly in the client and only serving up Clojure data, and that's very powerful. So let's dive in on this and see how the code might be organized. I'm going to advise that for the majority of applications we stop choosing directories like clj and cljs, because they're not telling us anything. Let's be architecturally evident in the way that we organize our code, let's still apply the same good practices that we've previously applied, and let's be resource-oriented. Is that font super small for the majority of people? Yes. Sorry about that, but it's locked in, and you can hopefully read the slides afterward. But we should be architecturally evident. The more you blur the line between the client and the server, the lower that cognitive load becomes; and the more you can use your code organization to infer exactly how the system holds together, the more that cognitive load comes way down. If you look at just the top directory listing, if you can see it, it says api, client, config, controllers, routes, and server. If you just saw that directory listing on a brand-new project, you would have a pretty good idea of how that system probably holds itself together. And if you look at the full tree, you'll see that the client has flows, main, search, and a view directory. Flows are symmetric to routes: flows declaratively open up reactive data flows for the client side, just like routes map what's going to happen on the server side. So we're seeing symmetry between these two things. Business logic on the server side is obviously tucked away into controllers, and business logic becomes first class on the client, because that's really what you're doing there: raw business logic and manipulating data on the client. So I think this is one approach to code organization.
Again, just apply the same techniques that we're used to, right? Be resource-oriented, and that will get you pretty far. Another thing I really like when I'm developing ClojureScript is the use of protocols, and this speaks to the power of ClojureScript's implementation; it was such a well-done design choice. So we're all familiar with protocols. For those viewing at home, there's a wiki page; read it first, pause the video. And here's why protocols rock. If you hit a bug in ClojureScript, I said this earlier, it's probably because there's no protocol support for what you're doing. Maybe something doesn't implement ISeq or some printing function or whatever. So you can go ahead and essentially monkey-patch, or fix, that bug in your own code if you need to. You can always put that workaround in, because the control is inverted: you get to decide who participates in a protocol, and how. And this has great impact on the rest of the system. What if you want to make local storage look like a transient map? You can do it. You just have to extend those protocols to local storage. And if you extend the protocols for printing and reading to local storage, you can start shipping local storage around, back and forth between the server and the client, or between multiple clients, whatever you want to do. Or what about cookies? Cookies should probably look like a transient map as well. So let's go ahead and make cookies look like a transient map. In fact, this is what one-third of the Shoreleave libraries actually do. It's just all these protocols that we ran up against. I don't want to deal with the Google Closure Library the way it's written; I want to write Clojure code, use higher-order functions, and map across stuff. And I want to ship things around back and forth. Well, you can do that, because protocols.
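As a sketch of the local-storage idea, here is a ClojureScript fragment that extends `ILookup` to the browser's `js/Storage` type so `get` works on `js/localStorage`; the namespace and the `assoc-item!` helper are hypothetical names for illustration. Values round-trip through `pr-str`/`read-string`, so arbitrary Clojure data survives the trip:

```clojure
(ns app.storage
  (:require [cljs.reader :as reader]))

;; Make reads from localStorage look like map lookups. Since Storage is a
;; mutable host object, writes go through an explicit helper instead.
(extend-type js/Storage
  ILookup
  (-lookup
    ([store k]
       (-lookup store k nil))
    ([store k not-found]
       (if-let [s (.getItem store (pr-str k))]
         (reader/read-string s)
         not-found))))

(defn assoc-item!
  "Store any printable Clojure value under k."
  [store k v]
  (.setItem store (pr-str k) (pr-str v))
  store)

;; Usage:
;;   (assoc-item! js/localStorage :user {:name "Paul"})
;;   (get js/localStorage :user)
```

The same pattern, extending the printing and reading protocols, is what lets this data be shipped between server and client unchanged.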
And protocols are also really great at capturing the core abstractions that you want to model your system in. The lesson here is Shoreleave's pub-sub. Shoreleave specifies two protocols, one for the message bus and one for things that are publishable. If you fulfill either one of those protocols, you can define whatever kind of message bus you want for your system, and anything can participate in the pub-sub system. So you can publish from function to function, function to atom, atom to worker, worker to function, any combination that you want. Those are the ones I've provided for you, but you can mix and match whatever you want. And then you can start building these reactive data flows, just because you have two protocol specifications. What falls out of the use of protocols, for the quality attributes, is adaptability and modifiability. People are often going after those when developing their systems. But what falls out concretely in the code is very loosely coupled code, in that the control is inverted. For instance, in reducers you don't say, I know how to fold all of these collections. You go to the collection and you say, you fold yourself; if you fulfill that protocol, you go ahead and do it. So this has profound effects on building actual systems. Another tip I would suggest when building production ClojureScript applications: if it's too hard to do in the client, go ahead and fall back to the server. I definitely agree that Ajax is sort of a hack, sort of something you need to carefully consider. But there's real power, and again I apologize for the small font, there's real power in having the ability, at any moment that you hit a roadblock in the client, to just fall back to the server. The example highlighted here is that we needed to send emails to a bunch of people, and the email addresses are encoded inside of this page that we're looking at.
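To show the two-protocol pub-sub shape from scratch, here is a from-first-principles sketch. The protocol and record names are illustrative, not Shoreleave's exact API:

```clojure
;; Anything satisfying IMessageBus can carry messages; anything satisfying
;; IPublishable can be turned into a topic and wired onto a bus.
(defprotocol IMessageBus
  (subscribe! [bus topic f] "Register handler f for messages on topic.")
  (publish!   [bus topic msg] "Deliver msg to every handler on topic."))

(defprotocol IPublishable
  (topicify [t] "Turn this thing into a topic key."))

;; One possible bus: an atom holding a map of topic -> vector of handlers.
(defrecord AtomBus [subs]
  IMessageBus
  (subscribe! [_ topic f]
    (swap! subs update-in [topic] (fnil conj []) f))
  (publish! [_ topic msg]
    (doseq [f (get @subs topic)]
      (f msg))))

(def bus (->AtomBus (atom {})))
(subscribe! bus :search (fn [results]
                          (println "got" (count results) "hits")))
(publish! bus :search [:a :b :c])   ; prints: got 3 hits
```

Because the bus is just a protocol, a worker-backed or websocket-backed bus drops in with no change to the publishing code; that is the inversion of control falling out concretely.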
So how would you do this? Well, it's easy. I'm just going to grab all of those emails on the client side and ship them over, and because of macro magic this looks like a local function call. What's really happening underneath is a CSRF-protected call with role-based authorization to send emails. So that's Clojure data going to the server. The email gets sent out through Mailgun. The response comes back as Clojure data, and we treat it like Clojure data. I'm sending a map: Clojure data. I'm sending that whole thing over to the server: Clojure data. Everything comes back: Clojure data, and I can react to it; I can do whatever I need to do with it. So be lazy. If something's too hard, you either need to extend a protocol or you need to fall back to the server and get the job done. That's just my advice if you want to ship real systems. Also, this isn't JavaScript. I think too often the trap with ClojureScript is that people want to apply all of these technologies, or all of these lessons learned, from JavaScript to ClojureScript, and ClojureScript is more Clojure than it is JavaScript. If you treat it that way, your solutions will be better, and you'll feel better about yourself in the end. You need to treat it like Clojure. We already know the problems that exist in JavaScript; let's not repeat them. So I'll give you an example. If you're developing a Node application, you're constantly dealing with callbacks, and one potential problem with callbacks is that you stop programming with values, and when exceptions happen, they're not happening in the context where you actually expect them or want to handle them. So, promises, right? Stack promises at the front of your Node application, figure out all the things you depend on, and then program with values through the rest of your program. That has worked out great on Node applications.
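The "looks like a local call" round trip might be sketched like this, modeled on Shoreleave's remote-callback style. Treat the names as approximate: the `:send-email` remote, the payload shape, and the response keys are all invented for illustration:

```clojure
(ns app.email
  (:require [shoreleave.remotes.http-rpc :as rpc]))

(defn send-emails!
  "Ship a plain Clojure map to the server-side :send-email remote and react
   to the Clojure data that comes back. Under the hood this is a CSRF-guarded
   Ajax call, but nothing here looks like JSON plumbing."
  [addresses]
  (rpc/remote-callback :send-email
                       [{:to addresses :template :welcome}]
                       (fn [{:keys [sent failed]}]
                         (js/console.log "sent" sent "failed" failed))))
```

The point is that the payload and the response are Clojure data end to end; the macro/RPC layer hides the transport, not the data.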
In the browser, I try to throw callbacks as far to the end of the calling chain as I can, so much so that I will use the threading macro and basically build up all my values, and at the very end there's a single callback that does something. That is a great way to get around this. Program with values; apply all of the lessons and the techniques you use for Clojure to ClojureScript. And when you're not doing that, question why you're really not doing that, and start to shape your program in a way that allows you to do it. So here's the quick tip list. These are all the things that I've already said, so we're not going to go through them again. But I do want to say one thing as I close: the more people who adopt ClojureScript, the better and better it's going to get, and the more problems we're going to solve with ClojureScript. It's really important for participation to happen. I really firmly believe in ClojureScript, and my goal in doing this presentation was to give a slide deck that somebody can take to a VP of engineering or to a decision maker and say, look, here are some hard numbers, here are some metrics, here are some techniques; maybe we should try ClojureScript. And I really encourage you to do that, whether it's a hobby project or something in production. So with that, I've got a couple of minutes for questions, comments, concerns, anything. Yes, one of you. Yeah. So the question was: there are two styles in the Google Closure Library; can I give a description of them? They revolve around how constructors are made and the naming semantics inside of the Google Closure Library. Again, those are not hard problems to get around, but you need to be aware that for some files, the constructor is named the same as the file name and is a top-level entity. So it gets weird when you start requiring it or you need to use it in the system.
The other style is that there's usually a maker function or a generation function, and there's no top-level entity that's going to get in the way of a require statement for the constructor. Yeah, the next question.

So the question was: when you start pushing more stuff into the client, you need to start versioning the APIs that are exposed to ship Clojure data from the server to the client, specifically because the data might change across versions. The question went on to ask: is there anything in Clojure or ClojureScript that makes that easier? I think Clojure as a data format naturally helps with this problem, but I also find, in that hypothetical situation, that it's easier to get around it by versioning your CDN correctly, so you serve up CDN-versioned assets and those CDN-versioned assets call the server stuff. Again, the baseline web system that you would write is so small in Clojure and ClojureScript that unless you were developing a gigantic system of, like, over ten namespaces, I would just leave all of the API calls in there and support all the versions, unless that has serious impact on your system. Question way in the back. Yes.

Yeah, so right, the question is: if you have a lot of legacy JavaScript, the story becomes very different for adopting ClojureScript. Yeah, it definitely is very, very different. I honestly do not have a good answer. My answer was that it seemed so daunting and backwards, and I wanted the semantics of Clojure, that I just tossed the JavaScript stuff away. But if you looked at how small that project was, tossing it away is not a big deal. It took me one week, with a team of three other people, one week, to get to feature complete. I had feature parity on both systems. So that's unless you have a huge JavaScript system that you want to replace with ClojureScript, which you may or may not want to do, right? You've invested a lot of JavaScript in it.
It's probably hairy and it's got a bunch of weird hacks in it, but I have no good answer for integrating ClojureScript into JavaScript. I would just not touch the JavaScript. It's like a disease. Any more questions? Yes.

How do I handle differences between browsers? This question got asked at the bar last night, and I felt like my answer was more colorful because of the alcohol, but how do I handle multiple browsers? Google Closure does a pretty good job at handling that problem, as long as you hit the right libraries that reach far enough back to whatever browser you need to support. For certain applications, this should be one of the concerns on the slides. For certain applications, we just said no IE. We'll use Chrome Frame, no IE, and that was our way around it, and that was acceptable because we knew our customers very, very well and we knew that was an acceptable thing. Our market share in IE is very small, and putting Chrome Frame into IE was an easy solution for us, but it really depends. This is one of those things you have to balance yourself when developing ClojureScript applications, but the Google Closure Library helps out a lot. Yes, that is true. Yes. So yes, the core library is completely cross-browser, but if you're developing a serious application, you're going to want to bang on HTML5 things, and that's when you get into some hairy issues.

The question was: what is the interop story with other JavaScript libraries? It's pretty trivial to do. In some cases it's writing an externs file for the Closure compiler, and then just creating the right abstraction that you need inside of your code to call the actual JavaScript code. It just turns into regular JavaScript calls, like you would do for any interop.
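As a minimal sketch of that interop pattern, assume a hypothetical global JavaScript library `Analytics` with a `track` function; the externs comment and the wrapper below are illustrative assumptions, not code from the talk:

```clojure
;; externs.js -- tells the Closure compiler's advanced optimizations
;; not to rename these names (assumed shape of the JS library):
;;   var Analytics = {};
;;   Analytics.track = function(eventName, props) {};

(ns app.analytics)

;; Thin ClojureScript abstraction over the raw JavaScript call.
(defn track!
  [event-name props]
  (.track js/Analytics (name event-name) (clj->js props)))

;; Usage: under the hood this is just a regular JavaScript call.
(track! :signup {:plan "free"})
```

The externs file only matters under advanced compilation; with it in place, the rest of your code talks to a small Clojure-flavored wrapper instead of sprinkling `js/` calls everywhere.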
I tend not to do that a lot, because Google Closure, the library, gives me almost everything that I need, and whatever it doesn't give me is actually just me writing protocols to do it in the Clojure-y way. Any more questions? I have maybe one more that I can take, and if not, that's totally fine. Thank you very much. I'll talk to you later. Thank you very much.