Hello, welcome to Meta Refresh. I know it's been a long day, but it's been a great one for me, and I want to thank HasGeek for putting this together — just like every time, they worked hard to make sure it happened today. This is a talk, by the way, not the workshop. We'll be talking about the front-end build process. Historically, front-end developers essentially just wrote their code and handed it to the back-end guys to deploy into production, and it was the back-end guys' job to figure out how to include it in their HTML pages. Front-end developers didn't really have much of a commitment to that part, or at least it was really reduced. That goes back to when JavaScript was treated as a toy language. Of course, things have become a lot more serious now. Giant applications — desktop-calibre applications — are being built with HTML5, CSS and JavaScript, and a whole bunch of things are happening in HTML5 and CSS3. It has now become our responsibility as front-end developers to be professional about it and figure out all the details ourselves. How are you going to serve your code to the browser at optimal speed? What caching is required? How are you going to make packages? How are you going to optimize for different page loads — some pages might need a little more than others? That is what this talk is about: the front-end build process. It's called the last mile because you have to understand that the code you write is not in its final form on its way to becoming part of the product. You have to package it, document it, test it, and get it to a place where all of that is seamless. It shouldn't be a daily chore; it should just run in the background, no matter which file you change. Anyway, that's the idea.
The point is that we have to own the last mile in deploying our front-end assets. My friends call me Pai, and I've worked at a bunch of places — I'm not particularly proud of having worked at so many places; I didn't stay very long. The point is that I've worked with startups and contract development in India, and I've seen good code and I've seen bad code. I currently work at a place with a combination of great startups — don't ask me which one is mine. There are a few things that are common to the front-end build process among all of them, at least among the people who care, the ones with a good product out there. So let's talk about you. The assumption in this talk is that your current front-end build process sucks. If it doesn't — congratulations, you've put in much more effort than about 90% of the front-end programmers out there. But anyway: your boss is probably old school, from the early 90s; he believes the entire stack is Java and HTML is just a skin on top of it. Or possibly you're at a small startup whose founder keeps talking about all the great things his company is going to do for the world but doesn't really care about the product. Either way, the point is that there are a lot of places where the quality of your code can suffer, and the quality of your deploy too. There are a few common symptoms. For example, you're not doing any build at all: you're serving straight out of the source directory. You'll see in a bit why that's kind of wrong — the point is that you're serving raw source, comments and all, with no minification and no packaging. How many people here write tests for their JavaScript? That's more than the last time I asked — three people this time. When it boils down to it, JavaScript is now a serious language, and we'd better write tests for it.
There are no two ways about it — I'm not going to soften it any more: if your code doesn't have tests or coverage, then I don't believe that it works, and no amount of clicking on the dropdown by hand is going to fix that problem. You probably have no logs either. As the last talk mentioned, there's no real user monitoring: not a lot of people do it, and not a lot of people know how long it takes for the average person in, say, Australia, to see your page load. This is all very useful data that you should be collecting, because you can use it to make the product better — and if that's not a good enough reason, I don't know what is. Let's start at the base. What is a build? We can learn a lot from the server side here; server-side programmers have been doing builds for decades. A build basically consists of a few steps — these are the ones I believe are important. One is compilation. Compilation means, say, you write C++ and it gets translated to machine code. In our world, you write CoffeeScript and it becomes JavaScript; you write JavaScript and it becomes compressed, minified JavaScript. You might want to run instrumentation on your code: for example, to measure coverage in JavaScript, you need a build task that runs all your JavaScript through a pre-processor which instruments the code for you. Then there's optimization. This isn't just about gzipping your files; it's also about thinking about how the code gets deployed on your HTML page — making sure all your CSS is at the top, making sure all your scripts are at the bottom, thinking about what happens if somebody clicks a button before your JavaScript loads. You have to figure out all those details, and ideally those, again, should be automated as much as you can. And then there's deploy. It could be as simple as moving files from your development machine to the server — possibly over FTP. You could be using Git to deploy. You could be uploading to S3, or —
— somewhere else in the cloud. Then you find out what the target URL for that is and use it in your references instead. Along the way you might also want to run test cases. All of these tasks together comprise a build. When you say "I'm firing a build" or "my continuous integration server is running a build", it means you're running one set of tasks which converts your source — what you've been working on — into something that's ready to deploy on some target. Might be development, might be production. Somebody came up to me and said, "In the last two weeks I've written about three different custom build systems, and it sucks." I used to tell people: maybe you should try writing your own build system, because you'll learn more about it — which is true, you learn a lot. But there's a whole lot of work that has gone into existing packages, by a lot of people smarter than me at least, and you can use those things. The idea is not to build your own build system, but to make a build file — a build process that is fine-tuned for your product, or your company, or your website. You take these tools, choose the ones you like, the ones you're comfortable with, mess around with them, and make a build file which will get you most of the way there. There are a whole bunch of turnkey solutions — tools which try to do a whole bunch of things for you. My absolute favourite is Jammit. It's a Ruby-based asset packaging system: it compresses your JavaScript and CSS, and it embeds images as data URIs in your CSS. It's very, very sexy. The Rails asset pipeline also does a whole bunch of the same things. If there are any Python developers here: there's Django Compress — I've used it in a previous project. I think the Django-recommended one now is django-staticfiles, though it's a little too rudimentary for me.
Basically, it just serves up the same files, but with caching and so on put on top. On Node.js there are a whole bunch of options; you can go to the Node wiki and check out the list of modules. One that stands out is Builder — it's really nice; it does JavaScript, CSS, packages, things like that. And Sangam — shameless plug, I actually wrote that one. It's not too bad. I don't use it in production, but I learnt a lot writing it. Learn from other people. For me, the most enlightening moments in learning about build systems came from going through server-side build processes — understanding how they break tasks up, so they don't describe the entire build in one place: take all the files, compile them, instrument them, and so on and so forth. So talk to your server-side guys. Why? One: they have a wealth of knowledge in this domain that you can learn from. Two: eventually they're the ones who'll be serving your output, so it's good to agree up front on what that output will look like — they'll need the aliases for the files and the things you want replaced. Also: worship Steve Souders. SteveSouders.com. In my talk description I said I'd show you numbers about why front-end performance is important, but really my entire slide could just say SteveSouders.com. Go read his articles and his books — loads of good stuff; he's essentially the authority on this. And start profiling your own stuff. Open up your site, see where all your scripts are loading, see if you can do anything about it. You practise, and you become better at recognizing the patterns when it comes to your own projects. There are a bunch of tools for this. YSlow is the popular one, and there's PageSpeed from Google.
There's Speed Tracer, which is not so much for the loading of assets but for checking how the page performs while it's running — you can see where the CPU time is going. It's pretty cool. We actually spotted a couple of issues in our own code with it, moved the work to Web Workers, and that solved our problems. So Speed Tracer is pretty good; it runs on Chrome. Moving on. This is one of my slides from my last talk: own the browser. The browser is yours. You're the front-end developer; you should not expect the back-end guys to do anything for you there — in fact, it's the wrong answer to expect them to. Just like I said in my last talk, your code is your responsibility. If somebody finds that it's broken, and it's because you didn't write a unit test for it because you were lazy, or because you assumed it would work anyway — that's on you. You own the browser. It is your direct responsibility, all the way from the first byte of HTML that the back-end guys hand over. Those guys have tuned their stack to return a response in 0.019 seconds; we have to match that kind of efficiency and do our very best to get the assets to the browser as efficiently and as fast as possible. My point is: be responsible. Now, a couple of basics. Most build files consist of a few common operations. File read and write, for example: you read sources and write a combination of them to another file. Directory traversal: search for all the JS files under this directory, find everything matching this pattern, and do something with those files. And then the idea of environments. This is quite a simple idea — I hardly even have to explain it.
The idea is that your local machine, where you develop — where you're writing code in your text editor — is nothing like the production environment, but you're obviously running the same code on both. So you have to understand that your code might be deployed to a number of targets. For example: development. Staging, which might be internal to the company — you might not have uploaded the assets to the CDN yet; they might simply be hosted locally. And production, where you make sure you have monitoring in place, make sure your JS files are as small as possible, and so on and so forth. The idea is that you should be able to specify environments as configurations. For example, compression: in my development build I won't compress, but for production I will. Next, some tools for build systems. These basically give you the ability to define tasks. You can say: create a new task called lint, and in lint you write your own code which pulls the source files, runs them through a linter, and returns the results back up to the build system. Rake is popular with the Ruby guys. Jake is becoming popular with the Node guys. There's obviously make, there's Ant, and there are equivalents for the Python guys. For my demo I used none of these — I wrote a plain Node script, just to show you how you can do it yourself. Pick your platform of choice and write your build in it. Compressors and validators: for JavaScript I usually use UglifyJS — for what it's worth, you can just write down the names, they're easy to Google. There's also YUI Compressor and Closure Compiler. Closure Compiler is cool: it actually analyzes your code and can check it for you. For CSS there's CSSMin, which is essentially a fork of the YUI Compressor; it minimizes your CSS.
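The "environments as configurations" idea above can be sketched as a plain object keyed by target. The keys and values here are assumptions for illustration — not the talk's actual build file:

```javascript
// Sketch of environments as configuration: one settings object per
// deploy target, selected at build time. All names here are hypothetical.
var environments = {
  development: { compress: false, hashNames: false, cdnPrefix: "" },
  staging:     { compress: true,  hashNames: false, cdnPrefix: "" },
  production:  { compress: true,  hashNames: true,  cdnPrefix: "https://cdn.example.com" }
};

function getConfig(env) {
  // Fall back to development so a plain `node build.js` still works.
  return environments[env] || environments.development;
}
```

The rest of the build then just consults `getConfig(env).compress` instead of hard-coding per-target behaviour in every task.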
For images: suppose you want to make sure that every PNG you serve has been stripped of meta-information that's useless to the browser — you can use pngcrush. ImageMagick has a huge utility library of things you can do with images. Am I going too fast? You want me to shut up? Have I said anything wrong? Not yet? Fair enough — feel free to stand up and say "you lie". Anyway, for validation I use JSLint, by Douglas Crockford. CSSLint is a new one. It is really strict, and it will make you feel really bad — if JSLint made you feel bad, CSSLint is worse. When it comes to HTML validation there's a problem, especially with dynamic applications, because you have a whole bunch of template tags and they're not going to be standard HTML. We tried a couple of strategies, but HTML validation didn't really matter that much for us in the end, because we were writing with template languages. Worst case, what you can do is run the working server, serve up the rendered application, and run the validator against that. There's also a new sprite-generation tool I tried out just yesterday — it's crazy: you just mention your images and it generates the sprite sheet and the stylesheet for you. If you're writing your CSS with a preprocessor like Sass or Less, I'm sure there will be a port for those eventually, but it's great — you mention your icons and it generates a nice file with all that jazz. Really killer. Retina.js is for serving alternate assets for the iPhone's Retina display: it swaps in the higher-resolution image if the device pixel ratio is 2 instead of 1, and falls back otherwise. Then the preprocessors: Stylus, Less, Sass. Did any of you go to the Sass workshop? These are great — they make your CSS really maintainable.
CoffeeScript is the new dialect that compiles to JavaScript. Check it out, it's very cool. There is one specific problem with CoffeeScript that nobody really talks about: if you start writing a project in CoffeeScript, you have to keep writing it in CoffeeScript, and that becomes kind of a pain — it means your entire team has to learn the language, even if it's just you who likes it. But it's beautiful, I'll give you that. If you can get your team to convert to it, it's gorgeous. Ender.js is awesome. npm does package management for Node; Ender.js tries to do that for browsers. You say ender add this, ender add that, and it pulls the packages from a database and builds you a single script. You just include ender.js and you're done. It's pretty cool — a lo-fi method of dependency management, but it works. For testing — again, I'm not going to go into too much detail or wag my finger and say you'd better write unit tests for your code. Do I look sufficiently stern? But write unit tests for your code. Get good coverage — the kind of thing that keeps your boss happy. You can use Mocha. Mocha is great; it's by TJ Holowaychuk, a very prolific 22-year-old. YUI Test is awesome, and it has a great little tools ecosystem around it — I'll talk about that in a bit. JsUnit is one of the older ones. Then there's Jasmine from Pivotal Labs; a lot of people use that across the world now. That's pretty cool. Then there are platforms to actually run your unit tests. You don't want to be the guy who says "okay, I'll run the unit tests" and opens every HTML test page by hand — it has to be automated. Some tools for that are JsTestDriver and Selenium: these actually pop up a browser for you, run the tests, collect the results and report back. PhantomJS with CasperJS is my current favourite way of running unit tests.
The command I have basically opens up a headless browser, which means there's no window to look at — it's, say, 1024px wide, but virtual. It runs all the unit tests for you and collects the results right there. No browser windows, nothing — which means it's great to deploy for CI. PhantomJS and CasperJS: very cool. Tools for coverage: I know about node-cover and JSCoverage; I haven't really experimented with too many others. Oh, and of course there's YUI Test Coverage, which comes with YUI Test. It basically takes your source files and rewrites them with instrumentation — that's how you do coverage in JavaScript — so that when you actually run your code, it keeps track of which lines ran and how often. One basic problem with coverage is that it will also flag code that you can never hit. If you're saying: if age is less than 0, run this code — but age is obviously never less than 0 — that branch will always show as uncovered. The idea is that you instrument your code and run your unit tests over it; coverage then tells you how much of your code was actually exercised by the tests. That's coverage — and note that 75% coverage does not mean 75% reliability that your code will work; it's not that kind of number. Don't overdo it, that's my point. Don't start writing unit tests just to get 100% coverage — that's the easiest thing to game. Go through all your standard cases. Test for functionality: make sure that if I set the map's center, the map actually goes there. Stuff like that. That way, every bug report gets a unit test. This is how I write code now: when a bug comes in saying this button doesn't do what's expected, I first write the unit test which fails because of it —
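What a coverage tool does under the hood can be shown by hand. This sketch is an illustration, not a real instrumenter: the counters that tools like JSCoverage inject automatically are written out manually here, including the never-taken `age < 0` branch from the talk's example:

```javascript
// Hand-instrumented illustration of line coverage: every statement bumps
// a counter, and inspecting the counters after the tests shows which
// lines never ran. Real tools rewrite the source to do this for you.
var hits = {};

function cover(line) {
  hits[line] = (hits[line] || 0) + 1;
}

function classify(age) {
  cover(1); if (age < 0)  { cover(2); return "invalid"; } // dead if callers never pass negatives
  cover(3); if (age < 18) { cover(4); return "minor"; }
  cover(5); return "adult";
}
```

After running the tests, `hits[2]` staying undefined is exactly the "uncovered branch" signal a coverage report would show.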
— if you click on this button, then it should do so-and-so. Then I actually write the code, and the test passes. That's the basic idea: once there's a bug, if you write a unit test for it, then if it ever occurs again your unit test will catch it. Next, compilations — things you might want to do in your build process. You might want to generate sprites, using a sprite tool or pulling them from wherever you're maintaining the images. This is pretty cool — Jammit actually does this by default; you should check it out. Your images can be embedded directly in your HTML or CSS or JavaScript: a data URI basically takes the binary stream, encodes it as base64, and inlines it, and your browser renders it. Most browsers — except for the one browser you know I'm talking about — support it. In IE you have to use MHTML, and there are a couple of bugs around that, but please go and investigate this; it's a beautiful thing. It's how Apple managed to show images inline in mail — remember there was a period when Apple mails showed images even though they were embedded in the message data itself? It's been done a whole bunch of times, and it's really cool. I love this. You might also want to look at keeping your code nice and modularized — using, I don't know, RequireJS, or even just separate files — so that you can make packages. You can say: my main page package wants jQuery, I want Underscore, I want Backbone, and I want that cool animation library. You specify the files, and that's your package. Asynchronous module definition, CommonJS, YUI — those guys implement module systems where instead of saying require xyz.js, you say require module xyz, and somewhere else you mention that module xyz might consist of these three files, so those three files get pulled in. You basically modularize it — get to a place where it's not just .js files any more.
They're living, breathing objects. The multiple-subdomain hack was spoken about a couple of talks before this one. Browsers have a limit on the number of parallel connections that can be made to a single server at a time — around 8 in modern browsers, and the HTTP/1.1 spec suggested 2. The hack around that: if you serve from multiple subdomains, those are counted as different hosts, so you can have more connections in flight even though it's all coming from the same machine. The idea is that you should be able to serve assets off several subdomains or separate targets. And you're obviously already loading things like jQuery off the Google APIs CDN. You can also inline your CSS or JavaScript. For example, if it's a home page and nothing is changing, you can inline the CSS and the JavaScript. This is not to say you should hand-write your CSS or JavaScript in the HTML — rather, wherever you'd have written a script tag pointing at a file, the build pulls in the script's content itself. Put that behind a caching layer and the page is basically one request. You'll still have your Facebook buttons and all that, but that's fine. And obviously: use CDNs, and set your caching high. Do you need me to talk about caching in detail? HTTP headers give you the ability to control how data like images and files is cached: when you serve the data, you specify, in the headers you set, how it is to be cached. For example: this image will never change — store it forever, store it for 30 years. You can see this by experimenting. Go to facebook.com, load the page, and check the network panel: you'll see a whole bunch of files, some really big — 400 KB, I suppose — but none of them are re-downloaded. It says "loaded from cache".
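The "store it for 30 years" advice above boils down to a couple of standard HTTP headers. This sketch shows one plausible shape for a helper that a static server might use; the one-year `max-age` is a common choice, an assumption rather than something from the talk:

```javascript
// Sketch of far-future caching headers for static assets. Safe only
// when the file name changes whenever the content changes (hashed
// names) -- otherwise clients keep stale copies forever.
function cacheHeaders(longLived) {
  if (longLived) {
    return {
      "Cache-Control": "public, max-age=31536000", // one year, in seconds
      "Expires": new Date(Date.now() + 31536e6).toUTCString()
    };
  }
  // HTML pages referencing the hashed assets should revalidate instead.
  return { "Cache-Control": "no-cache" };
}
```

A server would merge these into the response for each asset: hashed bundles get the long-lived headers, the HTML entry point gets `no-cache`.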
That means the browser saved them from a previous visit and is loading them locally. That's caching. So when you upload to your static server — may it be local, somewhere in the cloud, or in your own data center — make sure your static assets are tuned to be cached for as long as possible. That leads to a couple of problems, which I'll get to in a bit. Client-side templating. You guys know what templating on the JavaScript side is — things like Handlebars, Embedded JS and underscore.template. If you've done server-side development, you obviously know what this is for: you write a template, feed it data, and it generates HTML. The point is that we can use this kind of templating on the front end as well. For example, say you have to render a Twitter feed — you still have to construct the HTML. The old way of doing it would probably have been string concatenation: open your tag, plus this, plus that, construct the HTML, and so on and so forth. But now we can use proper template engines directly in the browser. So you say: get the Twitter feed; this is the template; dump the data into it, and out comes the HTML required to show the view. That's templating. Client-side templates are awesome — except there are way too many people now including them directly in the HTML, in script tags, and pulling the template's markup out of the page at runtime. I'll show you a much more maintainable way of doing this in a bit. Logging. Again — as a couple of talks today have said — the only way you're going to make a product better is if you listen to your customers, or at least try to find out what problems they're having. Instead of just waiting for complaints, you can actually start tracking what the user does on the page itself.
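The Twitter-feed example above — template plus data in, HTML out — looks like this with a tiny hand-rolled template function. This is a sketch in the spirit of underscore.template, far less robust than a real engine; it only supports `<%= name %>`-style lookups on the data object:

```javascript
// Minimal micro-template: compile a string with <%= expr %> holes into
// a render function. Only dotted identifiers are supported, so every
// hole becomes a lookup on the `data` argument.
function compile(source) {
  var escaped = source
    .replace(/\\/g, "\\\\")   // escape backslashes and quotes so the
    .replace(/'/g, "\\'")     // template can live inside a string literal
    .replace(/\n/g, "\\n")
    .replace(/<%=\s*([\w.]+)\s*%>/g, "', data.$1, '");
  return new Function("data", "return ['" + escaped + "'].join('');");
}
```

Usage: `compile("Hello <%= name %>!")({ name: "Bob" })` renders the greeting — the template compiles once and can be re-rendered with fresh data each time the feed updates.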
For example, there's heatmap.js: it renders a heatmap of where the user moved around the page and where they spent the most time. As front-end developers, we have to understand that this is our job — again. If we don't do it, no one else is going to; this is truly a front-end developer's job. If you don't start putting little beacons on the page — something that can log back to the server when, say, something goes wrong in the program — you'll never get to a place where you know how long it takes for somebody in Australia to load your page. And most of the time there's money attached: fix that problem and you keep the customer; leave it and people start going away. So start logging all the important stuff from the client and save it to a server-side database. Continuous integration. Does anyone do CI on their code? You use version control and check-ins? The idea of continuous integration is that every time you make a commit, a build runs to make sure that everything is okay. That's the basic theory. It could happen on every commit, or on a schedule. The general idea is that if there are four people on the team, every time one person commits, there's a machine sitting on the side which runs through all your unit tests, makes sure nothing is broken, and lets everyone know — in your name, and your manager gets the information too. There is absolutely no reason that front-end developers can't make use of this, now that we can run unit tests directly from the command line without having to open browsers by hand. Can someone tell me how much time I have left? So — the demo. I wrote a small build system. What did I write a build system for? A website.
I noticed a couple of things that I thought they might be doing wrong, and implemented my custom build system on top. A few of the tools that I used — I'm not sure if the camera is going to catch this. For file traversal I used glob. It lets you say something like xyz/**/*.js and it returns the list of matching files. That's all it does, and it's quite nice for that. Async — because I was writing Node and everything is callbacks; I used async to manage the control flow, and it helped a lot. I'll show you the code as well. I used wrench for directory traversal and creation: for example, if you want to make the directory a/b/c, you don't want to make a, then a/b, then a/b/c by hand — wrench is a bunch of utilities that makes that easy. For the command line I used commander. I could have used Jake, but I decided I wanted to stay closer to the metal. Commander helps you create a menu of options, so you can write something like ./build.js --env production. I'm using JSLint and CSSLint for linting. For compressing I'm using UglifyJS and a Node CSS compressor, which is basically CSSMin. I'm doing some small micro-templating — I'll show you the code for that later: I write plain HTML template files, and those get compiled into JavaScript. It's quite cool. For testing I'm using YUI Test, Grover and PhantomJS — except that, because the internet is not cooperating with me, I might not be able to give you a proper demo of that, but I'll try. I also used child_process.exec. This is for tools that aren't Node packages you can require — for example, jslint is an executable — so I use the very regular child_process.exec: it lets you execute commands and gives you the exit code and output. So — this is the code. Excuse me? White background?
White background, sure. Is that better? Guys, stop being weird children and let me continue. Anyway — so this is the Meta Refresh site. It's a standard Flask app, I think, and this is where they keep their models, templates, views and static files. static is where the static assets actually live: css, img, js. I made a folder for the client-side templates — I'll show you that code — but in any case, what I want to show you is this little frontend folder, which contains all my build-system stuff. Let me see if I can make this bigger. Is that a little better? So I have this script right here; let's see what it does. It does a bunch of things. The first two options are standard. You can set an environment: for this build system, I'm assuming — having talked it over with the HasGeek guys — that we have development, staging and production environments, with different configs, which I'll show you. So I can set that environment whenever I run the build. clean wipes the entire build folder — very useful when you're doing a fresh build and want to start afresh, or when you don't want to commit build output to the repository. s runs a little development server for me: the idea is that this isn't just for packaging assets; I should be able to work on my code without having to run the full application. I just want to check that my asset works, so I can run a small local server for it. build does a regular build. assets takes the specified assets — for example, you can mention images and so on — and pushes them to the target directory. lint runs JSLint and CSSLint. watch does a fresh build every time a source file changes — you don't have to stop and restart anything; it notices the change and runs a fresh build of all your tasks.
Let's take this for a spin. First, clean. My target directory is here — let me walk you through this quickly. Like I said, the static directory is what I'm treating as my source: all the stuff that I'm going to package lives here. I could have put it somewhere else, but my build directory is going to be alongside; the build generates the directory and puts all the output there. Let's try a regular build. I'm logging all over the place, so you can see what it does. It sets the environment to development — that's the default; you can set it to production by passing in a parameter. It does a clean, recreating the directory if it's already there. It takes all my template HTML files, converts them to JavaScript templates, and generates a file, templates.js. Then it takes my packages — I'm using this concept of packages specifically for this. I have two types of packages, JS and CSS; I use that to know whether to run UglifyJS or CSSMin. I've specified that I want all the startup JS files combined into one file at this root folder. For CSS, it takes these four files, compresses them, and makes one file called styles.css. I've also specified my assets here: all the JPG files, all the PNG files — and then I realized I hadn't included the Leaflet map files, so I added those too; that's for the little map that shows the venue. Those files also get pushed to the target. Have a look at the build directory and you can see: yes, script.js has been generated, concatenated and all that jazz, and the same for the CSS. More interesting, though, is what happens if I set the environment to production and do another build. I'm pushing my production output into a separate directory, build-prod. First of all, notice that the scripts that are generated are now hashified — you can see it in the file name there. This is essentially to prevent cache poisoning.
The idea is this: suppose somebody has already picked up scripts.js from you, cached with a long expiry, and you want to push a new one. It's a good idea to have a unique file name every time you do a fresh build, and so on and so forth. Which is fine, but then how do you reference it? Let me show you how I actually integrated it. You have to give hooks to the back-end guys, or to yourself, frankly, if you're writing the HTML. I'm using a little templating here, and I'm just saying: pick out css.styles. I'm passing in the css object, which holds all those packages, and I can also choose to load an individual asset here, as you see. Go down and you'll see the JavaScript loaded in the same way, as js.scripts; scripts was the name of my package. And suppose I actually run a little server to render that page... obviously I have a problem. That's the rule: these things always blow up right in the middle of a demo. Anyway, the idea is that it renders references to those generated files, even with the funky hashed file names. The point to note here is that nobody has to remember a specific file name. It doesn't have to be scripts.js, or scripts.js with some question-mark hash appended at the end. All you have to do is maintain an output file in which you record what you've created and what it's called: these are all the assets I've deployed, these are all the packages I've deployed. Now I'll go to the templating bit. This templates.js file has been generated automatically; it's been rendered off of this source file. This is a very simple, stupid templating example: it basically outputs the result of adding two values. You would obviously want something a little more complicated: for example, take ten Flickr entries or Twitter entries and output the data in some specific format. The idea is that the build pushes each template into a JST namespace, compiled with underscore's template function or whatever you prefer.
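The "output file" of deployed packages can be a small JSON manifest that both the build script writes and the back-end reads, so no template ever hard-codes a hashed file name. This sketch assumes a hypothetical assets.json layout keyed by package type and name; the real structure from the talk isn't specified.

```python
# A build manifest: logical package names -> the hashed files this
# build actually produced. The layout here is an assumption.
import json

def write_manifest(path, packages):
    """Record what the build produced, e.g. after hashing file names."""
    with open(path, "w") as f:
        json.dump(packages, f, indent=2)

def read_manifest(path):
    with open(path) as f:
        return json.load(f)

def asset_url(manifest, kind, name):
    """Look up the current file for a package, e.g. ('js', 'scripts')."""
    return manifest[kind][name]
```

A server-side template then asks for `asset_url(manifest, "js", "scripts")` instead of embedding `scripts.ab12cd34.js` anywhere by hand.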
There's a function I've defined here which basically takes that template string and gives you back a compiled template. If you've used underscore.template, you know what I'm talking about; if not, feel free to research the hell out of this. Templates are cool, especially if you have a neat way of getting them into your workflow. So now all I have to say is JST.xyz and I have that template available; I just pass my parameters to it and get the output automatically, still without touching the HTML. You can run a server, and you can also run a little watch. Ideally watch rebuilds every time a file changes, but just for demo purposes I'm doing it every three seconds. The point is that you set watch, leave it running in the background, and continue working on whatever you want, and your built assets keep getting refreshed. Perhaps your back-end guys are reading your assets.json and using that to render the references. The idea is that you're not going to be afraid every time you make a change. This is a lot better than, for example, recompiling a whole bunch of server-side files every time. You can also lint the files. I've licensed it incredibly liberally, so take it and do what you like with it. Anyway, this is the idea: you have a build system with a whole bunch of tasks, and you could probably add a new task where the output gets uploaded somewhere, and so on and so forth. Quickly, just for reference, this is what the script looks like: I have a whole bunch of commands defined. And now, some questions. [Audience question: why the unique file names?] The idea is that if you give a unique file name, for example scripts.somehash.js, then to the cache it's a brand-new file at a brand-new address.
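The watch task can be as simple as polling file modification times, which is what the three-second demo loop suggests. This is a generic sketch assuming a `rebuild()` callback; real watchers usually use filesystem events rather than polling, and the `max_polls` knob exists only so the loop can terminate in a test.

```python
# Polling watcher: rebuild whenever any watched file's mtime changes.
import os
import time

def snapshot(paths):
    """Map each existing file to its last-modified time."""
    return {p: os.path.getmtime(p) for p in paths if os.path.exists(p)}

def watch(paths, rebuild, poll=3.0, max_polls=None):
    """Poll every `poll` seconds; call rebuild() on any change."""
    seen = snapshot(paths)
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(poll)
        current = snapshot(paths)
        if current != seen:
            seen = current
            rebuild()
        polls += 1
```

You would start this in the background with your source directory's files and keep editing; each save triggers a fresh build.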
In general, the point is that you won't have to worry about two builds clobbering the same file name, or anything like that: each build's output lives at its own address and can be cached indefinitely. [Inaudible audience exchange.] Okay, awesome. So, yeah. Thanks a lot, guys. Be sure to thank the HasGeek folks on your way out. Hope you had a great time.