So I'm going to talk about part of our latest project, which I think people might find interesting. The first part of this is about deployment, the real world of deployment and how we do it, from the developer checking code in to the user seeing it on their screen; and then secondly some work we've been doing around security: what are users allowed to do within your application. So we'll start with deployment. We work with a Swedish company called Instore Media, who write a content management system for digital signage. So screens on escalators, Burger King, all the screens in an M&S, and we're working with them to roll out screens into Mothercare currently. We've got about 60 screens in three stores that we've rolled out over the last couple of months. Their server has its own user interface. Let's call it special. And it's great for enterprise applications: Marks & Spencer will eventually have about 50,000 screens across 100 countries, and they want to be able to target a particular bra advert at all the shops that have that bra in stock today, on the correct shelf, with a screen next to that shelf. That gets a bit complicated when you're trying to schedule that across 50,000 screens. We've built a new user interface that's aimed at the guy that's got two screens and doesn't want all that complexity, just wants something simple, and the reseller that wants to just ship a screen, hang it on the wall, plug in the box, press a button, log them in, walk away and make the money. So we're using all those enterprise features but wrapping them up into a user interface that you can teach someone in four minutes. And we're deploying that in the cloud using Amazon.
So this is what we have in our back end. Almost all of it is virtually free of charge; we pay a dollar a month for most of it. At the top we have a web user coming in to the signpost, which represents the DNS servers. We're using the Route 53 service from Amazon, which, if you ever want to run your own DNS server, costs a minute amount of money: of the order of a dollar per 10 million queries. And it's really, really easy to use, and I'll show you some of that in a minute. That's coming in SSL encrypted. We're then using CloudFront as our content delivery network. And the wonderful thing about CloudFront is you can have multiple virtual directories behind it. So you can present it as one domain name, and then you can have virtual folders that actually go off to different servers at the back. So no more rewriting your CDN URLs to go to a different domain and having to work out the cross-domain security issues. As far as the browser is concerned, it's all in one domain. We've got an S3 bucket which has our UI in it, all our static JavaScript, images and CSS. But all of our data is held on their AMS server (AMS is the name of their software), which is sitting on a Windows box running IIS on Windows Server 2008. This is in a firewall zone in Virginia in a data center. We've got a firewall coming in that only allows HTTPS, and only allows it into the load balancer; even the CDN can't talk directly to the server. The load balancer is able to load balance across multiple servers in multiple data centers. The way Amazon set it up, you have at least three data centers per region. They are 20, 30 miles apart, but they're a millisecond apart on the network, terabits of data between them. And the load balancer, which is itself load balanced across those data centers, can look at the traffic and say, okay, a whole data center's gone down, I'll migrate all the traffic over to the second one.
We can set up scripts that say, okay, when that happens, you need to start a server or you need to warm up an already started server. We can look at the amount of traffic on each box, the CPU load on each box, and decide to start a new extra box to load balance everything. And then when the load goes away, automatically step it back down. Amazon call that auto scaling. But the other advantage for us in the load balancer is it does the SSL termination. So we don't need to load the IIS box with SSL, which is great. I can just run a bog-standard IIS: basically install it, install the app, walk away. And in the UI it's a case of just choosing your certificate, uploading it (there's a drop-down to select your certificate), and you're up and running. We've then got, over on the network, our media players, and there might be hundreds or thousands in the multi-tenant environment that we're building, hopefully tens or hundreds of thousands of these. Now these are coming into the server every few seconds saying, have you got anything new for me? And when someone publishes a new 500 megabyte video, suddenly 200 boxes download it. At the moment that's coming directly out of the AMS. What we're looking to do as a next step is actually have it come via the CDN so that we can cache it for five or ten minutes and deal with that. The issue there is that at the moment the AMS is deciding whether this player is allowed to see that file or not. As soon as you put in the cache, suddenly everyone can see it; if they know the URL, they can get to it. Again, Amazon has some nice features around signed URLs. In the CDN you can say, I'm going to generate a URL and it's active for the next minute and a half, and after that no one's allowed to use it. So you can have your file at the back end, but have this sort of layer of security on the front that makes it rather nice. So the AMS itself would normally, in a data center environment, have a disk where it stored its media.
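The short-lived signed URLs mentioned a moment ago can be generated from a script with the AWS CLI. This is a minimal sketch, not the actual setup from the talk: the CloudFront domain, key-pair ID and key file path are all placeholders, and the command is only echoed here as a dry run.

```shell
# Sketch: generate a CloudFront signed URL valid for the next 90 seconds.
# Uses GNU date; the domain, key-pair id and key file are placeholders.
expires=$(date -u -d '+90 seconds' '+%Y-%m-%dT%H:%M:%SZ')

cmd="aws cloudfront sign \
  --url https://d1234example.cloudfront.net/media/video.mp4 \
  --key-pair-id APKAEXAMPLE \
  --private-key file://cf-signing-key.pem \
  --date-less-than $expires"

# Echoed as a dry run; drop the echo to actually produce the signed URL.
echo "$cmd"
```

Anyone who learns the plain URL after the expiry time gets an access-denied response from CloudFront, which is what lets you put cached media behind the CDN without losing the per-player access check entirely.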
I don't want to put the disk on there, because if that machine goes down I'll lose my data. So what we have is another S3 bucket with a file store in it, and an EBS volume, which is an Elastic Block Store, a scalable store, and it stores the differences between what's on the disk and what's in the S3 bucket. Every half an hour we write the differences into a new bucket and delete the old one. So we've always got at least the last half an hour of content floating, but the rest of it's in here, and that is mirrored right across three data centers on SSDs. So the speed is phenomenal, and the security and robustness of that data is really, really good. We've then got at the back end a Microsoft SQL database, but we've got it in a separate virtual LAN, again with a firewall between. And the wonderful thing with Amazon is you have these security groups, and you can just say to a machine, you're in security group X, and you then have a routing table. You can say, right, security group X has this inbound rule and that outbound rule. And you can say either you can talk to this IP range or you can talk to this security group. So we have a security group for our database and a security group for our web server, and we say that only the web server can talk to the database. I can't even talk to the database from my office. So it's really, really secure. No one's able to get at that except the web server, and the web server itself can't be accessed from within here except from our static IP at our office. And even then we use names and passwords to log on to get to it. All of the security is done on here using Windows protocols. So we've got an Active Directory server; we've actually got a pair of active-active servers, so that if one of them goes down we don't lose any data. And these are actually held in data centers that are about 1,000 miles apart.
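The "only the web server can talk to the database" rule described above maps onto two security-group ingress rules. This is a hedged sketch, not the real configuration: the group IDs (`sg-db`, `sg-web`), the SQL Server port 1433 and the office IP are illustrative, and the `run` wrapper just echoes the commands as a dry run.

```shell
# Dry-run wrapper: prints each aws command instead of executing it.
# Replace 'run' with direct execution (and real group IDs) for real use.
run() { echo "$@"; }

# Database group: accept SQL Server traffic only from the web security group.
run aws ec2 authorize-security-group-ingress \
    --group-id sg-db \
    --protocol tcp --port 1433 \
    --source-group sg-web

# Web group: accept HTTPS only from the office's static IP.
run aws ec2 authorize-security-group-ingress \
    --group-id sg-web \
    --protocol tcp --port 443 \
    --cidr 203.0.113.10/32
```

The key design point is the `--source-group` form: the rule references another security group rather than an IP range, so servers can move or scale without the firewall rules changing.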
And we have the option to add a third one if we start to get enough traffic that there's any need for that. So what we've got here is a really, really scalable system. And about 90% of our costs are those two; the rest of it is essentially free of charge, a couple of dollars a month. The data storage is virtually free: you'd have to be storing terabytes to actually start noticing it. Most of it's about running machines. What we're looking to do in future is move this onto a Linux box running Apache with Mono, and run ASP.NET through Mono, which again will reduce the costs, reduce the startup time, and the Microsoft tax. So that's what we're building. All our Ember stuff ends up here in S3, and then all of our URLs that request data are just /ams/whatever, and CloudFront deals with routing those down to the AMS, while the JavaScript and CSS files get routed to the UI bucket. A really nice thing with this is that I can have multiple versions of my application in multiple buckets. So from a deployment point of view, this is the live version, but I have another CDN and another bucket for my latest dev, pointing at the same server. So I can test that latest dev with live data. I can clone the AMS onto a separate machine and start sending writes to it, and I know I've got a clone of the live database that I picked up at 9 o'clock this morning. And I can build that clone in about 10 minutes, and throw it away again when I've done my test. So it's really great for developers to take a read-only copy of the database, play around with it, check that their code actually works with it, check it with live customer data, without the danger of affecting customers. It also means that when I deploy version two, I can have a second CDN and a second bucket, and I can take one of my customers and move them onto version two with a DNS re-route.
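A DNS re-route like the one just described comes down to one Route 53 change batch that repoints a record at the new CDN. This is a sketch under assumed names: the hosted zone ID, domain names and the 20-second TTL are placeholders, and the final command is echoed as a dry run rather than executed.

```shell
# Sketch: flip public.example.tv from the v1 CDN alias to v2 via Route 53.
# Zone ID and names are placeholders; the aws command is only echoed.
cat > change.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "public.example.tv",
      "Type": "CNAME",
      "TTL": 20,
      "ResourceRecords": [{ "Value": "v2.example.tv" }]
    }
  }]
}
EOF

echo aws route53 change-resource-record-sets \
    --hosted-zone-id Z0000000000EXAMPLE \
    --change-batch file://change.json
```

With a TTL that short, rolling a beta group forward, or back again after a failure, propagates in seconds, which is what makes the five-second rollback story workable.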
So I can give a beta group of customers a different DNS name and route that suddenly onto a second CloudFront bucket. If it all fails, in about five seconds I can route them back again, and they won't see the difference: they've still got the same data at the back end, and I can, if necessary, roll back my UI. Once I've tested that for a few weeks, I can then roll everyone onto the new CDN with just a DNS change. So this design enables us to take a new version of the UI to a subset of users, especially for development testing. We can roll back fast if we've got any problems. We're keeping the JavaScript and CSS delivery, and any images, off the main server; we just keep that doing data transformation and security, keep it doing what it's supposed to be doing. We know that the JS and CSS storage is read-only to the world: no one can come in there and start editing our files and injecting stuff into them. The really great thing is it appears as a single domain, which just gets rid of all those cross-domain issues with servers and CDNs. And we can scale our servers automatically. We can firewall everything. And we set the whole thing up knowing nothing about it; we did it in about two weeks. And I'll show you in a bit how easy that is. So when we deploy our code, we have a multi-step process, and we're using Atlassian tools to do this. We've got Jira for issue management. We use a wiki called... it's gone in my head. Pardon? Confluence. That's the one. And then we're using Bamboo as our build server. And they're all linked together, with automatic hyperlinking between them. So I can take a build and say which issues are in this build, which issues are deployed on my server right now. And it will trigger that off a Bitbucket Mercurial repository that we're using. So in our script, we're detecting the repo change, which is done by Bamboo. It then runs a script, which I've got as a shell script in the repo. So we build; that will find out if there are any compilation errors.
We then test in CI mode, silently, and output the text to a file. That gives me a JUnit-format file with the results of my tests in it, which Bamboo can read to tell me if I've got any new test failures or newly fixed tests. We then build it for production and sync that to S3. We're looking at putting different caching timeouts on any fingerprinted files and on our index.html. And then we're creating a CloudFront distribution if we've got a new branch, and creating a DNS entry for it if we have a new branch. So that gives us something where a developer can create a branch, check some code in, and seven or eight minutes later they've got a fully deployed system that is exactly as it will be in the real world. All of the same firewalling, CDN, SSL, minification: everything's identical. It's just a branch, and it's all done automatically for them. So let me show you some of that. I'll start by showing you what we're replacing. So if I have a look at Mothercare 2... this is the Swedish version of the user interface, which I've just logged into incorrectly. Okay. So in here, and this is something we'll come to later on as well, the way their system is set up is we have a set of geographical regions, and regions can contain regions. It's lovely and fast over a mobile connection, because their main JavaScript file is 65 megabytes. Don't laugh. They might be listening to this lecture. So here we have our region tree. Each of these regions is something to be configured by a customer. It's fully nested, and each region can have content on it. It can have users attached to it that can only see things below that region. It can have players on it, and it can have rules about what plays beneath it. I can say only English content plays in England, for example. So in here, we'll see if this works: down at Marble Arch, there's a Mothercare with a nine-screen, 57-inch video wall. And what I'm going to try and do is show you what's playing on it.
So this is just sending a command over to the player asking for screenshots, and we'll be able to see it playing. So you can see the beautiful design ethic of this interface; I think someone took a wireframe drawing a little too exactly. I think this might not work over mobile, but I'll leave that for a second. So you can see here we've got these regions, we've got players in the regions, and obviously this can get very, very big with a large network: typically you'd have UK, and then London, and then Oxford Street, and then the 10 departments in Oxford Street, and then inside that the players for each department. That's obviously not going to work. So what we've done is replaced it with a new interface. So this is our version of this. I'll just zoom that up a bit. So we have again our region tree down the left-hand side, but a bit more obvious. If I can change the contrast on this, I'll stop it flickering. The flickering screen you might be able to solve by just not having the browser up for a bit, but I don't have anything to do with that. OK. Average brightness problem. See how wide I can go. Right. So at the top we have these tabs that allow access to different parts of the application. Now I'm going to have a question later on of why on earth that little dialogue comes up, because I want to stop it coming up; I don't know why it's coming up. So this is a different server, so I've got different data in it, but the same principle. So this is our content. These files are cached: we've got a 24-hour cache on them, and the file name has the version of the file in it. So if I update that file, the cache will get broken and it will get pulled through correctly. And then we've got a dashboard that shows us all the players that are available and what they're doing. These are office test machines. So this is an Ember app. And this is our first prototype. We spent about a year building it. It works, but it's horrible.
It's based on Ember 0.5, probably, originally, and it's been bodged in all sorts of horrible ways. So back in November we took the decision to completely rewrite it, and you'll see some of that later on. So that's what we're delivering. That JavaScript and CSS is served live from the bucket, and the data it's getting is from that IIS machine out in the cloud. Response times are in the region of 80 to 110 milliseconds, so it's lovely and fast, nice and responsive. So what I'm going to do is log in to Amazon and show you how some of this stuff works, or how you configure it anyway. I always click on the 'try this free' button in case they stop charging me this month. It's never worked yet. So Amazon Web Services is huge; that's just the list of services that are available. You've got video transcoding, you've got server management, you've got DNS, database stuff, remote desktop, all sorts of things. And every six months they add another five or six services. All of it's rented; you can control your payments very easily by just using it when you need it. And I can thoroughly recommend using it. So we have, on our EC2 side, the Elastic Compute services, which are basically virtual machines in the cloud. And I've got seven machines running today, some of which are my build servers and some of which are live systems. Now, do I have my build servers running? I need them in a minute. Yes, I do. So here's the server that we were talking to a second ago, the Mothercare server. And what I can tell from this is what it's doing. So I've got monitoring down here, and I can watch CPU usage, disk usage, network usage. So I can monitor this, set alarms against it, and track: is my application running, or are any clients about to phone me up and start complaining? That's really nice. I've then got my load balancers. To set one up you just literally create a load balancer, give it a name, tell it what protocols you want to support, and you're done. You tell it how to tell whether a server is healthy.
So if I just take this one: we're just doing port 80, and I can tell it how to know whether a server is working or not. I can give it a URL to go and get, and if it gets a timeout, it'll assume that server's dead and start another one. So within a couple of minutes you're up and running again, automatically, at two o'clock in the morning when you're asleep. The really cool thing for me is the SSL handling, which is always an absolute nightmare on an IIS box. But here I've just got a listener that's running on 443, and I'm using my SSL certificate that I've uploaded. And I can either upload a new one or just pick one of my multiple certificates that I've already uploaded. So when my certificate expired, seven or eight minutes later I had the new one uploaded and working. Next time I'll think about it before it expires; it caught me by surprise one morning. But the nice thing was, all of my health checks failed, because I'd set the health checks up to work over HTTPS. So as soon as it failed and timed out at midnight, bang, I got an email one minute later that it wasn't working. So that gives me my load-balanced Windows boxes at the back with SSL on the front. What I now want to do is bring in my CloudFront CDN. What CloudFront's doing is representing multiple virtual directories as a single site, and caching that for you as well. And I've got a lot of them. They cost nothing to have; they cost money for data. But you pay the same fee for transferring data out of S3 directly as you do for transferring it via CloudFront. So essentially, CloudFront is almost free. You pay per million queries as well, and it's cents per million queries. So let's take, in fact, the one we're about to look at: branch default. This is pointing at my deployed default branch from Mercurial. There's a bucket called branches and a folder within it called default. Now we spoke, was it last month we had a lecture about Ember deploy?
November. So with ember deploy and various other services, you'd put your fingerprinted assets in a CDN, all in one folder, because they have unique names, and then you'd put your index.html on a Redis server or something similar for really fast deploys. The problem with that is that that fingerprinted-assets folder is going to get full, and when it does, you won't have a clue which ones to get rid of, because you don't know which ones you're still using. The nice thing with this is I get a folder per branch. So when I close a branch, I delete the folder and they're all gone. And I put the index file and all the assets in the same folder; all I'm doing is deploying the dist folder into that bucket. So in CloudFront, that's seen as an origin. I've got two origins here: one that's the S3 bucket, and one that's my IIS load balancer. So I'm not talking direct to IIS, I'm just saying here's my load balancer. And because it's all interlinked within Amazon, if I edit this, I can pick the domain name and it will show me all the buckets that I could be using. I'm just going to tell it which path to go to. And I can, if I want, put security on this and say you have to be logged in to Amazon to see the contents of it. In my case it's public, so I don't need to do that. My other origin is my IIS server. Now this one is slightly different: we've got a DNS name pointed at the IIS server. That means that I can have multiple CDNs pointed at this server, but I can change the server out for another one by changing the DNS, while still having all of my branches pointing at the same server. So I've got that sort of indirection point in the middle. And that's actually implemented as an alias in the DNS server, which means it's free of charge. All it's doing is a lookup onto a lookup onto a lookup, and then picking the IP address. And in this case, I'm selecting Match Viewer. So if you come in on port 80, you stay on port 80.
If you come in on 443, you stay on 443. Whereas on the S3 bucket... sorry, it's not there, it's on the behavior. So that's defining my origins, where the data is coming from; but it's not saying which virtual directory to use, or how to recognise whether something should come from one or the other. That's what the behaviors are for. So I've got one that's a default behavior: all your URLs go here. I'm picking up my origin, the one I defined earlier. I'm then saying redirect HTTP to HTTPS, and that's being done at the CDN level. So it's going to return a 301 permanent redirect if you come in unencrypted, which means I can guarantee that my login credentials, even though they're using basic authentication on their IIS system, are still encrypted across the Wi-Fi. I can choose which HTTP methods to allow, and in this case it's read-only, so I'll just have GET, HEAD and OPTIONS. We can choose to forward the headers on to the server. Now, in the case of S3, it's not going to do anything based on those headers, so I'm not going to send them, and what that will do is cache the file regardless of what the headers are. Whereas I could choose to whitelist the headers or send all headers, and then it will keep a different version in the cache for each combination of headers, and obviously I want to keep the cache nice and small. I can change the TTL and override the TTL that's coming in from S3; in my case I'm actually writing a header into S3 when I upload, so I don't need to do that. And I can optionally forward cookies. Obviously S3 is not going to do anything with them, so I won't forward them, and then again we don't have multiple cache versions. So in ten seconds I've got myself a behavior, and that's going to send all URLs down to the S3 bucket if I just have that one.
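Writing the cache header into S3 at upload time, as just mentioned, can be done with the `--cache-control` option on `aws s3 cp`. This is a sketch under assumed names and numbers: the bucket, paths and max-age values are placeholders (long cache for fingerprinted assets whose names change every build, short cache for index.html, which keeps the same name), and the commands are echoed as a dry run.

```shell
# Dry-run wrapper: prints each aws command instead of executing it.
run() { echo "$@"; }

# Fingerprinted assets: the file name changes on every build, so the old
# copies can never be served stale; cache them for a long time.
run aws s3 cp dist/assets/ s3://my-ui-bucket/branches/default/assets/ \
    --recursive --cache-control 'public, max-age=31536000'

# index.html keeps the same name across builds, so keep its cache short
# so a new deploy is picked up quickly.
run aws s3 cp dist/index.html s3://my-ui-bucket/branches/default/index.html \
    --cache-control 'public, max-age=60'
```

Because the header is stored on the S3 object itself, CloudFront and the browser both honour it, and the behavior's TTL override is no longer needed.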
What I'll do is put before it another behavior which says, if the path looks like this, then send it off to my IIS origin, which is the one I defined earlier as being the DNS name for the load balancer. In this case, I'm still redirecting HTTP to HTTPS. So now, if you're going back to those servers, which are in different data centers to the CloudFront system, I know the internet traffic between the two is encrypted, and the termination point is within my VLAN, within the cloud. I'm going to allow all of the HTTP methods. I'm not going to forward headers, but I am going to forward cookies, because that's where my session tokens are. Speaking of which, you don't see it on here, but in their raw URLs you actually get the session ID in the URL, which is great for caching. Because every time you log in, you have to redownload this 65 meg JavaScript file. I'm saying nothing. So I've now got a CloudFront distribution. It takes about 10 minutes to start up. It's shared across multiple servers; they've just got a big cloud of servers that share the load across all their different sites. So you're not renting a server, you're renting part of the cloud. Now, the tricky bit with that is that I also want to have SSL terminated on CloudFront. Because as far as my users are concerned, they're not seeing my load balancer, they're seeing the CloudFront CDN. So if I'm going to have SSL on there, the termination point is there; I can't have a man in the middle. So I also need to put my certificate on here. Now, the way certificates traditionally work is they're based on the IP address of the termination point. But this is shared across multiple servers: one of your requests will go to the server that you first terminated on, the next request goes to a different server, and it'll have a different DNS name. And that breaks SSL. So until about six weeks ago, if you wanted to put SSL with your own certificate on a CloudFront distribution, i.e.
have your own DNS name, then you had to pay them $600 a month to have a dedicated server. That's a little bit pricey for my liking. What they've now done is implement SNI, which most browsers support, and which uses the DNS name that you've routed through to identify the certificate, rather than the DNS name of the server you're connected to. Some of the older browsers, and obviously most of IE7 and 8, I don't think support it, but most modern stuff does. And I'm trying to work out where that's set up. It must be... here we are. Does your B2B, are you able to restrict the browser usage away from those? No, people are going to log into it on their phone. They're going to get a phone call saying the wrong piece of content's up, we've sold out of that item, take it off the screen. They'll be on their phone somewhere else trying to do it. So all I have to do in here is say we've got a custom SSL certificate, pick that same certificate I picked earlier, and just say only clients that support SNI; otherwise it'll send a 404. In my case, I'm forcing redirects onto HTTPS. If I was allowing HTTP, it would route them back onto HTTP. And then I just set what the default root object is, and I'm done. Now, I can set that up and know what I'm doing in five minutes; what I'm currently doing is scripting it, so that when a new branch gets created, the CloudFront CDN gets created on top. I'll come to that in a minute. So at the moment I'm looking at how it's deployed; in the next piece I'll look at how we get it there, which you'll see in a second. So where was I? There we go. So this is the domain name that we end up with. Absolutely horrible. If I wasn't putting my own DNS on the front, no user's going to want to know that. So it's great if I'm delivering my assets via this and my index.html through Redis or something like that: it's brilliant, and it caches really efficiently. But it's not great if I want to do my whole site through it.
And that's where Route 53 comes in, in combination with SNI in the case of HTTPS. So Route 53 is the DNS server, which has disappeared from my view... there we go. And I have all my domains on here. So our public website is in an S3 bucket with a Route 53 DNS name on the front. Hello. There we go. So we've actually got four domains hosted on here, so we've got our INSM.tv domain. In here, every feature of DNS you can think of is implemented. None of this 'oh, it's not supported by my host'. Chuck all of your DNS, wherever you're hosting it today, into Route 53; it'll cost you cents a month, and it's just so much easier to manage. I was using Plusnet before, and they'd support some things but not others. I actually got to a point where I had to authorise a Google Apps account, and there were three different ways of doing it, none of which were supported by the Plusnet DNS system. Move it onto Route 53, and all of them are supported. So we've got all sorts of odd CNAMEs and TXT records here that are authorising various things. And somewhere in here is my branch default. So this is branchdefault.inism.tv; it's literally the first part of the domain name. I don't know if you can read that from the back, so let me zoom that up. Yep. It's an IPv4 address, but it could be any of those DNS record types, some of which I had never heard of until I came in here. We've got the alias target, and what I've done in CloudFront is said this can be fronted by branchdefault.inism.tv. So when I click in here, if I just get rid of that, it'll tell me what's available. So there are my elastic load balancers, so I could hook it directly onto a load balancer and take it to IIS or Apache or whatever's behind it. I've got my CloudFront distributions. Or I could alias it onto another one of my records. So I can have public.inism.tv, which routes onto v1.inism.tv, which routes onto a particular CDN.
And then I can move public to point at v2 or v3, and roll it back again. I can have beta-dot and route that to a different CDN. So it's really powerful for rerouting. And because you'd have to see tens of millions of queries to even be charged a dollar a month, you can just set your TTL to five seconds, 20 seconds, and within that time you're up and running. When you first use it, it takes about 12 hours or so for the world to realise that you've moved over to v3, but from then on, it's lightning fast. So there we are. There's my CloudFront distribution. And I save that, and I'm done. You can also do round robin: you can give it multiple addresses, have it evaluate the health of those addresses in the DNS server, and only round-robin to the ones that are currently up and running. It's just fantastic. We're probably using 5% of the functionality of this thing. So as you can see from this, we use it a lot. So in my case, what I want to be able to do is have all of my branches automatically deployed to my S3 bucket and to CloudFront and to the Route 53 DNS. We're most of the way through doing that, and you'll see the first parts of that today. And I might come back in six months and say I'm nearly finished. And I think I have on here a bucket explorer, so I can actually see what's in S3. Maybe not. I thought I did. There we go. So if we have a look at the S3 bucket, we'll see what we've got deployed up there. And then what I'm going to do is make a change to my user interface and check it into the repo, and we'll see it go live into my branch. And before I do that, I'm just going to check that the server's running, because it's quite slow to start up, which is something I want to cure. So while that's going, let's have a look at... so I've got a bucket here called Inasmui branches. Inasmui is our project, and here's our bucket. And we have a folder in here per branch.
And what the build script does is take the branch name, use that to generate the folder name, generate the folder if it's not there, and then sync to it. And there's your dist folder deployed. Try to... that's the flickering project. Oh, sorry. I should be able to see that, because I can see it over there. I can't zoom in. So there's your dist folder deployed, and that's the folder-per-branch mechanism. Currently, the CloudFront and Route 53 parts you've just seen were done manually, but the next step will be for the deployment script to create those when it sees a new branch, using the Node.js SDK for Amazon. So on my Bamboo system, I can now look at what's actually deployed, which issues are deployed. Did my bug fix make it? Let's try logging in. That would help. My card expired earlier on... and it looks like they've just disconnected us. That's nice. I gave them a new card about two hours ago. OK, that ruins my demo. That's a pain. That was where the rest of my demo was going to be. So that might mean that my automated builds are not going to run. Let's see what happens. It may just be that they've locked me out of the UI. So what we have in our application... yeah, it's based on brightness. Good. It's not zooming very well, is it? I'm not used to this. How do you change the font size? Where is it? Who's going to change font size in Sublime? Pardon? Ah, font. There we go. Right. So I'm giving away the title of my app there. Right, so in here we have, somewhere, build scripts. These are checked into the repo. In Bamboo, there's a plan that says run this script, then this script, then this script. And if any of them fail, don't run the next one. So if our tests fail, we don't deploy. Which means I'm guaranteed that I can't accidentally deploy to my public system something that doesn't work. As long as developers, obviously, have written every test under the sun and it's got 100% coverage, which always happens.
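The folder-per-branch sync described above boils down to deriving an S3 prefix from the branch name Bamboo hands the script. This is a sketch, not the real script: the Bamboo environment variable name and the bucket are assumptions, and the sync command is echoed as a dry run.

```shell
# Sketch: deploy the dist folder to a per-branch prefix in the UI bucket.
# bamboo_planRepository_branchName is an assumed Bamboo-provided variable;
# 'default' is the fallback when running outside Bamboo.
BRANCH="${bamboo_planRepository_branchName:-default}"
DEST="s3://my-ui-bucket/branches/${BRANCH}"

# --delete removes files that no longer exist in dist/, so closing a branch
# is just deleting its folder. Echoed as a dry run; drop the echo for real use.
echo aws s3 sync dist/ "$DEST" --delete
```

Because `aws s3 sync` creates the prefix implicitly, there is no separate "make the folder" step: the first sync for a new branch brings the folder into existence.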
So the first thing we do is just set up. What this used to do was go and install Ember CLI and Bower. It's running on a Linux box that is configured by Atlassian, so you don't have the ability to put global Node installations on it. So what I'm doing is setting up the path to include the bin folder of Bower or Ember CLI. That was about three quarters of the time taken to make this work, working out how to do that. Originally, I was just doing npm install bower, npm install ember-cli. And it was great, and my build script ran in about seven minutes, of which six and a half minutes was running npm install, even if it had already installed. Because it will leave the server running for two hours. We pay per hour. So at the end of the hour, if it's been idle for an hour, it will shut it down; otherwise, it leaves it running until it's been idle for an hour. So obviously, if you then rerun your build, it still takes six and a half minutes, because even though it just gets 304 responses, there are so many hundreds of packages within these things that it was doing about six a second and still taking six minutes. So what we now do is write a file when we've installed it, and we look for that file and don't bother running npm install if it's there. And that's taken 90% off the runtime. Obviously, the first time the machine starts up, it's a fresh virtual machine, it doesn't have that stuff installed, so it will take a long time the first time. But then for the next two hours, and if people are checking in through the day, it just never stops, and at the end of the day it shuts down. So for the rest of the day, it's really, really fast. So don't be the first person in the office in our company, basically. Yes, because these are the fixed dependencies. We know we're going to want Ember CLI. We know we're going to want Bower. So that's the basic setup: get the stuff we know we're going to want to install.
We then do an npm install, which picks up the package.json stuff. Now, I have looked at hashing the package.json and storing that somewhere. But if you do that, you're not going to pick up new versions of things you're dependent on. So if someone releases a new version, it's going to break your app, but you won't know about it. So the advantage of this is you know there's the fixed stuff, Bower and Ember CLI, where we want a particular version and don't really care if that changes once a day. But your dependencies in your application are much more important. So the bower install here isn't refreshed once it's been installed once. That's it for the day, because that's just a tool we're going to use to deploy everything else; I'm not basing my application on it. So I'm installing tools once, but anything that's included in my application, I'm installing every time in case it's changed. But this takes 10 seconds; this takes four or five minutes. Yes, what it will do is look at all its dependencies, go to the registry and ask has it changed, and get back a 304 response. And there are so many hundreds of dependencies that, although each takes milliseconds, it's still four or five minutes. It's doing six or seven requests a second, and it's still taking four minutes. Yes, the dependencies of Bower, the tool. There are so many deep, nested dependencies in there, and quite often the same dependency is used in multiple places in the tree. And obviously you get a 304 response the second time, but it still takes a tenth of a second each time. So we then, as a second step, do a test. So this is actually a new step, ember build. We were doing an ember test in CI mode, silent, so we don't get any of the "you're using Ember 1.5", building dot, dot, dot. I've set up the CI mode to output a JUnit XML format, and then pipe it to a file.
And then Bamboo will pick up that file and parse it and show me (it's a real shame I can't show you) that you've got one test failure since the last build, and it's this one, and here's the error message. And it'll send an email to my developer saying, you did the check-in, and you've caused a failure. It's your responsibility to fix it. And the developer can go to that page, and they can click "add as a JIRA bug". And it will create an issue, assign it to them, link it to the build, link it to the test failure. And suddenly everything's hyperlinked together, and we know exactly what's going on. When we fix that bug, not only will the test pass, but it will go and log against the bug: fixed in this version. Really, really nice. Now the problem with that is that if the compilation fails, we don't know about it, because it's silent. And that bit me on the bum yesterday. I spent ages trying to work out why test results suddenly weren't getting generated. It was just saying, file doesn't exist, can't parse it. And then I added the ember build at the top, which said, oh yeah, you've got a syntax problem, you can't compile. So we're actually effectively building twice here. We're building once with all the development stuff just to check it; we're then doing another silent build and test run. It runs in PhantomJS. We're doing about 250 tests at the moment, in 10 to 12 seconds. And once that test run is finished and has written that file, if it's succeeded, we then go on to do a build. And what this does (it's a little bit old now) is inject the build number from Bamboo into the environment file. So we're inserting it on the end of the 1.2 version number. Sorry, we've got it in the environment file as major, minor, branch name, and build number. So Bamboo is generating the build number. That's the number of times it's built on this branch, so it'll start again from zero for the next branch. And it's giving me the branch name.
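The version-stamping step can be sketched like this. The `bamboo_*` variable names follow Bamboo's convention of exposing plan variables to scripts; the "1.2" prefix and the fallback values are ours.

```javascript
// Sketch: build the version string injected into environment.js,
// as major.minor.buildNumber-branchName, from Bamboo's environment variables.
function buildVersion(env) {
  const branch = (env.bamboo_planRepository_branchName || 'local').replace(/ /g, '-');
  const build  = env.bamboo_buildNumber || '0';
  return '1.2.' + build + '-' + branch;
}

console.log(buildVersion({
  bamboo_planRepository_branchName: 'default',
  bamboo_buildNumber: '104'
})); // 1.2.104-default
```

Because the build number is per-branch, the same change on two branches gets two distinct, traceable version strings.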
And these are coming in as environment variables when Bamboo runs Node.js. And there are about 50 or 60 environment variables available. Where did this repo come from? Who's the user that triggered this by checking something in? What's the check-in ID? What's the branch name? Date and time, version numbers, all sorts of things. So we can just pick them up as name-value pairs in here and run through. This is a shell script, obviously. So I'm injecting them into the environment.js. And I'm going to replace this with writing them to a file and then using the content-for mechanism that's now available in the Brocfile to inject them that way, which is much neater. We'll put them into a meta tag and then have an initializer that reads them out of the meta tag and puts them into the session, or something like that. So once we've done that, we've got a dist folder sitting there after the Ember build. And the next step is to deploy the branch. So at this point, we've got some more requirements. I'm installing them here because if the build fails, I want to know early. I want to fail fast and build reliably, so I want as few steps as possible up to the end of testing, and then I'll go and install the things I need to deploy, rather than putting these in the setup script. So here I need the S3 client (there's a lovely Node S3 client), the Amazon SDK, which gives me all the access to Route 53 and CloudFront, and shortid so that I can generate the unique IDs for the CloudFront distributions, which CloudFront requires. And then I just run node on my build script, which is this one here. And again, this is using environment variables from the plan. I do not want to check my security credentials for Amazon into my repo. If I share that with a contractor or with a third party, I've just given away the keys to the castle. So we put those on the plan. We make them visible as variables, but only to administrative users. So a developer can see the plan, but they can't see the secret keys.
And they never end up in the repo; they're passed through as environment variables into the build. We pick up the bucket name, which is a variable on the plan, so you can define your own named variables. And you can override them in different branches. So in my public branch, I can override it and say, right, that's my deployment bucket, or I can make it my version one bucket and my version two bucket later. And then in the branches that developers are controlling, I'll have a different bucket, so I'm keeping them separated. And then I'm taking the Bamboo repository branch name. This is the Git or Mercurial repository branch name. I'm replacing spaces with dashes so that it will work in a DNS name, and then I've got a branch name that I can use. I then go and create an S3 client: just some default parameters that set up how many retries and how many threads it uses to upload, give it my security options, and then set up some maximum sockets to get nice fast parallel uploads. And then this entire piece of code here is the upload to S3. That's how simple it is. And 90% of that was copy-paste from the example. You give it a bucket name, which is the bucket I took above. I'm prefixing it with the folder name. Now, S3 is not actually a directory folder system; it's just a dictionary of key-value pairs. So what looks like a directory path is actually just a key. I'm giving it cache-control headers, saying it has a 15-second life. And I'm giving it an access control of public-read, so unauthenticated users can read, and no one else can do anything with it. So immediately I've now got a fully locked-down bucket. There is no way for even my developers to log into that and overwrite those files. I can only overwrite them by using this script and those credentials that even my developers don't know about. So I've got one for production, for live, and I've got one for all my branches, for dev stuff. The production one just has one folder, which is version one.
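For flavour, here's roughly the shape of that upload configuration, close to what the `s3` npm package's `client.uploadDir()` accepts. The bucket and folder names are examples, and this sketch only builds the options object rather than performing the upload.

```javascript
// Sketch: options for syncing the dist folder to a per-branch "folder"
// (really just a key prefix) with cache headers and a public-read ACL.
function uploadParams(bucket, branchFolder) {
  return {
    localDir: 'dist',
    s3Params: {
      Bucket: bucket,
      Prefix: branchFolder + '/',      // S3 "folders" are just key prefixes
      CacheControl: 'max-age=15',      // 15-second life while iterating
      ACL: 'public-read'               // world-readable, nothing else allowed
    }
  };
}

const params = uploadParams('insm-ui-branches', 'deploy-to-s3');
console.log(params.s3Params.Prefix); // deploy-to-s3/
```

In the real script this object would be handed to the S3 client, with the credentials coming from the plan's environment variables rather than from the repo.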
And what I might do is put version one, two, three, four as subfolders. I haven't really decided yet. But the branches folder has probably-broken stuff in it, so I want to make sure that's kept well away from the live site. That 15-second life is a temporary setting. I want a developer to make a check-in and within a minute have it live, and that includes running a full test before it goes live, so I don't want them waiting for the cache to time out in their browser. Now, because the assets are fingerprinted, my next step is going to be to sync the whole folder with a one-year timeout, and then just sync the index.html again with a 15-second timeout. That means they'll get the new index.html, with the new links to the new fingerprinted assets, really quickly, but everything else will be cached, so it'll be nice and fast when they're playing around and testing stuff. We then just upload, and on end, we go and create the CloudFront distribution. Now, you saw how simple the S3 part was: you just tell it where to put the files. The rest of the file is how to set up one CloudFront distribution. Those are all the options you've got. It is huge. And that's why I haven't done it yet. I spent a day basically mapping what I've done in the UI, which I know off by heart now, into what these settings are. And obviously there's a lot of stuff in here about streaming and all sorts of things that we're not using, so we can comment those out and get rid of them. But it's very nicely done in the documentation. When you download this example, it comes with all of the possible options sitting there for you, so you just choose which ones you need. You don't really need to go and read the documentation, almost at all. You can just go, oh, that sounds sensible, put those in and see what happens. So that's my deployment script. Now, in order to demo it, I want to show you our security system. So: security.
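That two-tier cache policy boils down to one decision per uploaded key. A minimal sketch (the thresholds match the ones mentioned; the function is ours):

```javascript
// Sketch: choose a Cache-Control header per file. Fingerprinted assets are
// immutable (a change produces a new URL), so they can be cached for a year;
// index.html carries the links to them, so it must stay fresh.
function cacheControlFor(key) {
  return key.endsWith('index.html')
    ? 'max-age=15'                // pick up new deployments within seconds
    : 'max-age=31536000';         // one year; the fingerprint busts the cache
}

console.log(cacheControlFor('v1/index.html'));             // max-age=15
console.log(cacheControlFor('v1/assets/app-d41d8cd9.js')); // max-age=31536000
```

This is why only index.html needs the second, short-lived sync pass: everything it links to is already safe to cache aggressively.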
We have things like Simple Auth that can take your username and password, log you in, give you a session object, manage logout, and do all of the security side of things. But all it gives you is a session. It doesn't tell you what that user's allowed to do in your system. Now, what we've got in the UI is, if I go back to that, all these regions. And I can put rights for users on regions: read, write, or deny on any named role name, or what they call an action name. I'm reformatting that to be a role, which means I can give, for example, a reseller admin access to their region, but not to their clients' content; they're not allowed to see that. They can go and look at their players and check they're still running, but they're not allowed to see the boardroom screen at Vodafone with next month's results coming up on it. That would be particularly dodgy. So in our case, you don't just log in and, boom, you have some rights. Those rights vary as you move around in the application: what you can do, what you can see, what buttons are available to you. I definitely don't want that situation where you press a button and then it says, oh no, you're not allowed to do that. That's really frustrating, especially if it's an admin action that you'll never have. That's different from a function you just haven't upgraded to, where we want to say, actually, you should buy the upgrade. There'll be a whole load of things that resellers can do and clients can't do. So we need something that's role-based, but we also need something that's role- and region-based, because your roles will be the same, but the rights will be different in each region. Actually, that's not quite true: the user will be the same, and the list of roles will be different in each region. So we've built something that I think is pretty specific to us, but it might give you ideas about how to do this yourself. So: we want to know what this user is allowed to do. What buttons are they allowed to see?
What do we render into our template? Which context menu items are valid? So, sorry: if I've got a context menu, I want to remove just one item from that menu if the user's not allowed to see it, but I then want to hide the whole menu if there's nothing available. Can we call an action on the controller? Can I even visit a route? Should I block them from visiting a route completely? And that will depend on where they are in the application. So our solution is to add a security service. And it has a may-do function on it, which takes a region, the name of a module, and the name of an activity. So the dashboard module will have view-player and reboot-player on it. Now, obviously, only administrators can reboot the players, but users can view the players. So we want to be able to do that on routes. We want to be able to stop people visiting those routes. So we've got a mixin, or we're building a mixin that's not there yet, which will allow or disallow the transition and work out where to route you instead. So obviously, if you've been sent a URL by somebody saying, please visit this and see if you think what I'm doing is sensible, and you don't have the rights, it'll bounce you back to the index page or something like that. In a template, I want to be able to show or hide content. So I want a wrapping component that will call into the may-do service and work out if it can show it. And in a controller, I need something that returns a boolean, so I can say, if this, then add this to an array, or don't add it to an array, or change the result. Now, we obviously don't want to query the server continuously to ask, are we allowed to do this? Are we allowed to do this? Are we allowed to do this? So we're using Ember Data to store a model that represents a right on a region. So we have a model which is a region-right. And in their system, they have content roles, system roles, and user roles. So user roles are things like, can I add or remove users?
System roles are things like rebooting players, and content roles are about viewing and editing the content that ends up on the screens. And I can separate them quite nicely. So within those, we have a role object, which is here, which has a name and an access level, which is either read or write. So we can have roles that have multiple levels on them. I can say you're a content editor, and if you have write access, then you can edit the content; if you have read access, you can view the content. And what we can do in their system is say that a particular user has a certain set of roles with read or write on this particular region, and that applies to that region and all the regions below it. But as the UI developer, I don't know that. I'm going to query a particular region, and if it's not authorised in that region, it'll bubble up the tree until it finds one and give me that. So we then have an adapter that will pull that down. And that's this one. And it's a little bit complicated, because the API only allows you to query one right at any one time. So we have to go through all the possible rights you might have and ask if we have each of those, which is a little bit clunky, and we're hoping that they'll release some new APIs that allow it in one call. The problem is, of course, that this needs to know about all the possible rights that might be available. And I don't really want to store that knowledge in the adapter. So I'm actually querying the security service to say what rights are available, what do you know about in terms of rights, and then we'll go and query each of those in turn and bring them back as objects. Now, in the controller that queries this service, I need a synchronous response. I can't have an asynchronous response. So we've then got a region route which goes and loads that access right when you visit it. So where are my routes? Here are my routes.
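The role object just described (a name plus a read/write access level) might be checked like this. This is our guess at the shape from the description, including the assumption that write access implies read access; none of it is the actual model code.

```javascript
// Sketch: does this region-right grant the needed access level for a role?
// Assumption: write implies read, per "write can edit, read can view".
function hasAccess(regionRight, roleName, needed) {
  const role = regionRight.roles.find(r => r.name === roleName);
  if (!role) return false;
  return needed === 'read' ? true : role.access === 'write';
}

const right = { regionId: 42, roles: [{ name: 'contentEditor', access: 'read' }] };
console.log(hasAccess(right, 'contentEditor', 'read'));  // true: can view content
console.log(hasAccess(right, 'contentEditor', 'write')); // false: cannot edit it
```

The bubbling up the region tree happens server-side, so the UI only ever sees the effective right for the region it asked about.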
So, in afterModel (which, by the way, fires after both URL entry and a transition, whereas model and beforeModel only fire if you come in through a URL, not through a transition, which caused me a few little issues while I was trying to work out why it wasn't loading my content), so afterModel is called after both: we know the region ID, and we return a promise to go and find the region-right for this region. And that will load up all the things that I'm allowed to do on this region. And therefore it's now available synchronously when I call store.find, and I'll be able to query it in my security system. So if I look at my components, we've got our if-may component. You give it a region ID, a module, and an optional activity. And if you only give it a module, it will say, is there anything I'm allowed to do in this module? That might be used, for example, to show the top-level menu on the application bar: if I'm allowed to do anything on the dashboard, show the dashboard tab. Whereas the dashboard itself will have a refresh button that asks, are you allowed to do a refresh? If so, show the button. So all this has to do is go to the security service: we get whether it's authenticated and whether it may do this module with this activity. And it's as simple as that. The controller can call the same method to say, can I do this right now? And this is updated if any of that changes. So all the functionality is in the security service itself, which is here. Now, I'll start at the bottom, which is where we define what's possible. So here I've just got a JSON object which says, if a user is a system administrator (that's the role that the back end knows about), these are the UI things that they're allowed to do. So if they have write access to administrator, then in the dashboard module they can view the player and they can reboot the player. If they are a system user, then they have to have write access to reboot the player, whereas admin can do both straight away.
Now, what I have in the application here, if I run this, which I'm already doing apparently... where's my localhost? Now, I have a manage tab. And in that manage tab, I can view the regions and stuff. And I've got my dashboard here, which is what we were just looking at. I'm just going to load some data. Why are you doing that? It should be faster than that. Now, this is interesting. Who can tell me what that is? Cannot read property parentNode of null? I see this about one in 20 times running something. Well, my body's there and my document root's there, because I can see content. Does anyone know what actually causes that under the hood? It seems like it might be a bug in Ember 1.5, because it wasn't happening with 1.4, and it's just started happening. Anyway, it's bitten me too many times; I'm getting annoyed with it. So here we are. There's our dashboard. And that is in the application template, which is here. So we've got this top-menu-item component we've created that wraps up some divs and some buttons and a link. And it gives it the caption, the icon, and the link target, which is, when you click into this region tree, where does that go to? In this case, it's taking us to the dashboard. But if I'm on a different page, I want the same nested region template to take me somewhere else, so we're actually pumping that into the region-tree-link-to component. Now, this isn't showing me the manage tab, because in here I've said, if you're going to show the manage tab, you have to have rights to the manage module. And this user is an administrator. So, as we saw in... let's just double-click on that. So in security, we said an administrator can see the dashboard module, but they can't see the manage module. So if I take manage, and I'll just add a dummy action that this user can do, so if I just add view-region and save that, Ember will rebuild.
By the way, if you're running on Windows, run your command prompt as administrator, and your compile time will go up by a factor of 10... or rather, go down by a factor of 10, because of the symlinks that Jo and co have added in. So it goes from 50 seconds down to five seconds. So now we get a manage tab. And all because I said that, yes, the administrator is allowed to do that. So that's really nice. I can now set up on the server what roles the user has, and I can define in my application what that means for the user, in a really easy-to-use way. Now, the way that's working is: the first thing the service does is transform that object, because what we're going to query is, can this user do dashboard view-player? If I had to iterate through all the roles to answer that, it would take a long time. So the first thing it does when it starts up is invert that object. It creates a new object that's keyed on module and activity, and then gives a list of the roles you would have to have; you can have any one of these roles and you'd be able to do this. It also caches: when it's actually looked up a right and returned true or false, it'll cache it on the session, keyed on region ID, colon, dashboard, colon, view-player: true or false. So the next time it queries, it just goes straight to the cache, bang, and it's got the result. And the entire code is here. That's it. It took us a day to write, and it's proving to be really, really powerful. It's very specific to us, because it uses the region tree to store that data, but if you took the region ID out, you'd have quite a nice generic security manager that would be role-based. So the challenge to somebody is to make an open-source one that doesn't have our muck inside it. So, I've made that change. If I go to the branch, I'll go to that DNS name that we had earlier, branch-default.insm.tv. This is going back to the same server that we're seeing locally.
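The invert-then-cache mechanism just described can be sketched as follows. The role map is a cut-down stand-in for the JSON object in the talk, and the function names and exact data shapes are ours.

```javascript
// Sketch of the security service's lookup table. We invert
// role -> permissions into "module:activity" -> roles once at startup,
// then cache each answer under "regionId:module:activity".
const roleMap = {
  systemAdministrator: { write: [['dashboard', 'viewPlayer'], ['dashboard', 'rebootPlayer']] },
  systemUser:          { write: [['dashboard', 'rebootPlayer']], read: [['dashboard', 'viewPlayer']] }
};

function invert(map) {
  const byActivity = {};
  for (const [role, levels] of Object.entries(map)) {
    for (const [level, pairs] of Object.entries(levels)) {
      for (const [module, activity] of pairs) {
        const key = module + ':' + activity;
        (byActivity[key] = byActivity[key] || []).push({ role, level });
      }
    }
  }
  return byActivity;
}

const lookup = invert(roleMap);
const cache = {};  // per-session in the real thing; keyed region:module:activity

// userRoles maps role name -> access levels the user holds on this region.
function mayDo(regionId, userRoles, module, activity) {
  const cacheKey = regionId + ':' + module + ':' + activity;
  if (cacheKey in cache) return cache[cacheKey];   // straight to the cache, bang
  const allowed = (lookup[module + ':' + activity] || [])
    .some(({ role, level }) => (userRoles[role] || []).includes(level));
  return (cache[cacheKey] = allowed);
}

console.log(mayDo(1, { systemAdministrator: ['write'] }, 'dashboard', 'rebootPlayer')); // true
console.log(mayDo(2, { systemUser: ['read'] }, 'dashboard', 'rebootPlayer'));           // false
```

The inversion runs once, so each if-may check is a single object lookup plus a small scan, and repeat checks are pure cache hits.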
So I'm using a little batch script that's running Ember with a proxy on it, which we can see somewhere here... somewhere there. Anyway, it's proxying. There we are: back onto my now.insm.tv server. So it showed me live data in my desktop application. Now, this is showing me the S3 bucket that we just saw, with the default folder in it, and our login again. And as we'll see, we'll only see the... this is driving me mad. Does anyone know how to stop the browser putting that pop-up up and route to my page instead? I'll take that as a no. Right. OK. That was our test for trying to get rid of that. So here we've got that version number. I don't know if we can zoom into that. Whoops. So there's our version number. 1.2 came from our environment.js. 104 is the plan build number on the Bamboo server. And this is the default branch. If I go to my branch where I was working on deploy-to-S3, and I'll log in again, because obviously it's a new domain name, now we see that it's also showing me the branch name, which is really useful for knowing what I'm actually looking at as I go through my different branches and view them; obviously, they all look the same. That's really nice for telling me what I'm currently showing. And that's just built into the environment.js by that regex that you saw in the build script. So, going back to branch default: that's the version that only shows us the dashboard. So what I now want to do is deploy the change I've just made, the change that enables the manage tab for administrators. So I'll save the file, go into SourceTree, stage my change, commit it. And I've commented out that, and I've added that. So let's commit that and push it up to the server. And what will happen is Bitbucket will fire a webhook into Bamboo. Bamboo will work out which plans are to be run. It'll download the repo, go and get my build scripts, and run them. And in a minute or so, we'll see the data. And if I could log in to Bamboo, we'd see that in progress.
You actually see the logs every five seconds as they're generated from Ember. So you actually see the progress of your build script as it runs, in your browser. It might have worked out by now that I actually do have rights to this. Now, I don't know if it's going to work, because I don't know whether they've turned off the build system as well as not letting me log in, but we'll see. It works! Right. So I can now go down to... here's my plan for this particular project. And the first thing I want to do is just check my elastic agents are running. So these are my server instances. And yes, we have an instance, and it's building our job. So all I've said is that you can choose on your job whether it's going to be built on Windows or Linux. You can put flags on your virtual images to say this one has Maven on it, or whatever else, and you can set requirements on your plan that it must run on a machine that has these features. It will choose one, start it, and get it running. So in here, these are my live logs coming off the build process. So there are my setups. It's already installed Bower from the previous run. It's currently sitting there doing a test run using PhantomJS. So that's going to write the XML file at the end. And that's finished. So now it's going to do a production build. You can see there that the list is all of the environment variables that it passes through. So you've got the build key, you've got the Grails version, you've got the job keys. And we've got the project name here, insm-ui. And somewhere here, the plan repository branch name is the one we're pulling out to get the default folder name and DNS name. So that's going to finish its build in a second, and then it will deploy. And once it's deployed, we wait 15 seconds, and we'll see the new code live. Now, the nice thing is, because it's doing a production build, I can use Ember with a proxy on my local machine to see the uncompressed, debuggable stuff, but against live data.
And then I can do the same thing on the server. And I'm seeing it with SSL, with load balancing, with compression, minification, all the latest libraries. And I know that that's exactly how it's going to be built in real life as a production version. And if there are any differences, I know there's a new version of a package, or there's something that's different, and I can go and check it out. I'm also using the latest version of Ember, or Ember CLI rather, rather than the version that the user on the desktop has as their global install. So when Ember CLI updates, they don't necessarily get to know about it, but the build server will get the latest version every day. So that's finished. And we have some errors. Oh, we have a warning. That's fine. So in my logs, I can get the full download of the logs. Here are my tests. So I've got 231 tests running. There's my list of all the PhantomJS tests. I don't know if there's any more information on these. There's a summary result. If I go back to a red one, this is one that failed, and it will tell me why. So that's one where the tests didn't run because it didn't compile. And obviously, I can go back to the full logs and see why that was. And I'll see, hopefully here... I want to find one with a test failure. I can actually hover over them to see where I've got test failures. Oh, here we go. There's a test that failed. So I can actually see in my tests: here's my failure, that's the test that failed, and here's why it failed, with the full log coming out of it. So I can actually go, "null is not an object", great, I can go and find out what that is. It happened within a minute, minute and a half of me making that check-in. The developer's got an email saying that there's a failure, with a hyperlink to the Bamboo failure. And if I now go back to my UI and refresh, I should see, hopefully, a manage tab. There we go. So there's my new version, live.
So there are a lot of steps in it, but it gives me something that's very, very powerful. It gives really quick feedback to the users, and people can just sit there checking stuff in. They're getting a full deployment without affecting real users. Bamboo will pick up new branches automatically: it queries Mercurial for branches and will run the plan with a flag on it saying this is a new branch. So I can pick that up and create the CloudFront and the Route 53 DNS automatically. I can then have another one that looks for closed branches and goes and deletes the files, deletes the DNS, deletes the CloudFront, so I'm not paying for it, and cleans everything up; obviously, I don't want to leave all those branches lying around forever. And it means we can do issue-level branching and bug-level branching with impunity, and we still get all the benefits of full CI, end to end. So it's a work in progress, but we're certainly liking it, and I can recommend it as a way of deploying. Any questions? The one thing we are looking to do: the guy that wrote Ember Deploy is back next week, and we're going to have a conversation about making this deployment mechanism one of the options of what Ember Deploy can do. At the moment, he's done it with adapters. So you have an adapter for S3 and an adapter for Redis, and it automatically sends all your assets to S3 and your index to Redis. But there's no choice. So I'm going to suggest that we rebuild it so that you still have the adapters, but you then say, well, here's my regex on path names: send those ones to S3, send these to Redis, then configure CloudFront like this, and then configure DNS like this. And sort of set it up declaratively, as just a JSON file. We're going to see if we can work together, maybe on the project night, to make that work. Anyone that wants to help is welcome to join us. Just a quick question on this.
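Purely as speculation on the declarative setup being proposed, such a config might look something like the sketch below. Every key, adapter name, and option here is invented for illustration; it is not Ember Deploy's actual API.

```javascript
// Speculative sketch: route build outputs by path pattern to named adapters.
const deployConfig = {
  targets: [
    { match: /^index\.html$/, adapter: 'redis' },   // index goes to Redis
    { match: /^assets\//,     adapter: 's3',        // fingerprinted assets to S3
      cacheControl: 'max-age=31536000' }
  ],
  cloudfront: { enabled: true },
  route53:    { zone: 'insm.tv', ttl: 20 }
};

// Pick the first matching target for a given file path.
function adapterFor(path) {
  const target = deployConfig.targets.find(t => t.match.test(path));
  return target ? target.adapter : null;
}

console.log(adapterFor('index.html'));        // redis
console.log(adapterFor('assets/app-abc.js')); // s3
```

The appeal of this shape is that adding a new destination is a config change, not a new hard-wired pipeline.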
You touched on monitoring earlier on; do you use the advanced monitoring, or does the basic monitoring give you most of what you need? The only difference with the advanced monitoring is that it seems to run more often. I'm actually finding the basic absolutely fine. But they've also just released a new service where you can run Node.js scripts and you pay per millisecond. Yes, it looks really nice. And you can run things on timers, so you can set up your own monitoring. Because if your script runs for 20 milliseconds, you pay a tenth of a cent. Run that once a minute; OK, you pay a dollar a day to have it there, and you've got full control. I'm looking at that as something we might use for uptime monitoring: more detailed, diving into more URLs than just the one, and see where that gets us. So it sounds like you solved your problem very well. I was wondering if, in the process of doing that, you considered Docker as part of the infrastructure role. Didn't even know it existed. I looked around; basically, I looked at Rackspace and AWS. AWS is consistently less expensive. And you can pay $100 a year and get full access to their support team. Now, their support is amazing. Pay for the business support, the $100 thing. You send them a support inquiry and you say, I don't know how to get CloudFront working, urgent, my service is down. Four minutes later, they Skype you. And they talk you through it at two in the morning. It's just phenomenal. You can even press a button and the phone starts ringing immediately; you pick it up and then you're talking to them. If you want to talk to a sales representative, you can do that too, but I've never bothered with that; I just go straight to the support inquiry. And they're brilliant. The tech support you don't get for free, but it's something like $100 a year, and by God, is it worth it.
But if you go into AWS with no previous experience, there is so much stuff there that you won't be able to work out which bit you need. And I really recommend you go to the support guys and say, right, I want to run a server with load balancing with SSL. Oh, you do it like this. And I went to them and said, mine doesn't work, why not? And he dived in and came back an hour later and said, I've fixed it for you, at two in the morning on a Saturday. I'm like, thanks so much.