Thank you very much for coming along to this session. My name is Will Mawrish, I'm the Director of Cloud Services at Interoute VDC. I've been in the network and technology business now for about fifteen or so years, and if any of you have been to any of the talks I've done previously, I only finished the last one on this stage about thirty minutes ago, so apologies if there is any repetition between the slides.

So, Drupal. We love Drupal. So much so that we're actually built on it. We redeveloped the entire front end of our site and all the shopping cart technology, and basically if somebody visits our site and goes through any of the workflows on it, it's all now built on Drupal. So we're very much fans of the technology and of working with you on it.

So, extreme demand and scaling. We've got some tooling for this, and there's also some tooling that you can get from other partners. Hopefully some of you also saw the earlier deck, otherwise I'm feeling slightly hurt. So, managing extreme demand: I'm going to take you through some of our tooling and some of the tooling other people have got, and also, when you're looking at a site that needs to scale big, how do you do it? What are the things you need to look for? Is it easiest and best to outsource everything, or should you retain control yourselves? I'm going to give a quick overview of Interoute, who we are, then go over what the problem is, and then some of the solutions and technology options you've got for it.

So, first of all, who are Interoute? Forgive me, my slides are slightly out of order. Today, you've almost certainly all used us in one way or another. As a platform, we run a great big network right the way across Europe and we carry about 40% of all of Europe's internet traffic. So four in ten of all of the transactions you do on the internet have, somewhere or other, gone over our network. We've been doing hosting and managed services for large customers now for about ten years. This is something from Gartner: we're very well respected in cloud-enabled managed hosting. If you look up there for Rackspace and other names you may know, we're absolutely in the top right-hand corner. For the services that we do on the managed hosting side, we're very well respected, and certainly within Europe we're one of the largest. On an infrastructure-as-a-service basis, Gartner also tips us as one of the top 15 providers globally.

Our customers are quite large and varied. At the bottom we've got people like AT&T in Spain, Telefonica, British Telecom. All of the big major carriers use us and our network to provide their services onward to their customers. Within the enterprise and the online side, SIMSme is probably a good example. SIMSme is a WhatsApp alternative in Germany. It was actually built by Deutsche Post: they wanted to build a WhatsApp for Germany that was secure and didn't have any of the regulatory issues that come with going to a US hosting provider. So, to answer the question of why have different hosting providers, why not do everything on AWS and others: there are some quite significant use cases for why different providers suit different situations, and SIMSme is a great example of that. And there are other big enterprise customers that we work with.
So, UEFA. For any of you that have watched a football match in any UEFA competition, the Champions League and all the others, we provide the computing power that sits behind all of the touchscreen technology at half time, the graphics that go up on whichever TV channel you're watching, and also their website, the ticketing, and even all the hotel systems that sit behind UEFA for every one of their big international competitions. So the compute and network stuff that we do is very big.

So, what is Interoute VDC? This ties into the scaling. This is our big infrastructure-as-a-service platform. First of all, it sits on our big global network. In Europe we own pretty much one of the largest networks: it links all of the big major cities, it links all of the major telcos, and we own everything down to the fibre optic in the dirt in the ground. That network then goes truly global: it goes right the way over to Asia, right the way across the States, and it also links up over the Pacific. The infrastructure platform is classic infrastructure as a service, as you'd imagine: CPU, RAM and storage in zones all the way around the world, so you can host your data in any of our zones globally.

The thing that starts to make it different from a lot of the other infrastructure providers is that it's built onto this great big platform, built into our great big global network. That means you can start to look at how you build out your cloud infrastructure in different ways. You don't need to think of cloud as something in a place somewhere on the internet, where you build this stuff in Europe, this stuff in Asia and this stuff in the US, and then have to work out how to stitch it all together. Once you've built it onto our platform, we do all that stitching for you. The compute platform becomes a global compute platform that's all tied together. So all of your back-end interactions, your backup, your syncing, your replication, your flow of traffic across the world, all of that happens in one big, contiguous platform. And globally, as I say, we're right the way across Europe, we're over in Asia in Hong Kong, soon Singapore, and three big zones over in the States. We also interconnect with other cloud providers, which I'll touch on shortly.

So, back to the main topic: the problem. It's a great one to have. Where does demand come from? First of all, it comes from festive stuff. We all know about Black Friday and Christmas; demand obviously ramps up if you've got a big consumer site. Then adverts. These are obviously quite short bursts of traffic: an advert could go out at 8pm and you may only have a spike for 15 minutes. How do you scale and cope against that? And then beer, alcohol, that also drives web peaks. Within the gaming and gambling industry, the busiest times of the week tend to be between 11 o'clock in the evening and 1 o'clock in the morning on Friday and Saturday, mostly because people have left the pub or the bar, come home, and they're going to sit and do some gambling. There are some other industries as well, which are quite time sensitive, that you can probably guess. So, it's all about customers. There are loads of stats around this.
But the reality is, when you've got a user going to your website, for you as the developer and the people that deliver the platform the site is hosted on, it's a huge issue if the site isn't responsive and quick. At 0.1 seconds, you do something, the site responds, and you feel like you're having an interaction. At 1 second it's sluggish and you think something weird is going on, and if you click on something and it takes more than 2 or 3 seconds, the site just feels awful. When you've got that, you get huge drop-offs in users, directly because of the performance of your platform.

So, demand is far from steady. These are the standard peaks that we see. Where do you build your platform? Where do you actually scale to? Do you build for your peak? Do you take last week's peak as next week's peak? Do you look back in an iterative fashion and think, well, last month the peak hit this many thousand users, is that where we go? Or do you go 10% more, or where do you go? It's a difficult one. With cloud you do get extra ability to scale, but certainly for any of you running your own existing facilities in co-location or hosting areas, it's a difficult one to manage. How big do you go?

And really, this is what you're trying to do: you're trying to perfectly balance what the user demand is against how much capacity you've actually got out there. What you also want to do is lead the way, so that if you've got a user peak coming, you don't wait until the peak has happened and people are starting to have a poor user experience before you scale. You've got to be really snappy with the turnaround. So you've got to try to keep this perfect ratio, staying just ahead of the curve of your peak demand.

So, ways to win. You can go big, buy and build a massive platform absolutely everywhere, or you can go flexible. The main thing with cloud these days is that pretty much everybody is going flexible. It's the easier thing to do. You've now got the ability to spin servers up in minutes, and with things like Docker and containers you can start to scale far quicker than you were previously able to. One of the things in scaling, and really in any technology: if you are busy building a website, you've got your own stuff to worry about. If the core infrastructure that sits behind it still involves huge levels of complexity and a learning curve, it's difficult for you to scale it yourselves. It may not be your expertise, so you have to lean on others.

So, some examples on the next few slides. This is how other people do cloud, this is Amazon, Azure, Rackspace and various others. They think of cloud in a global sense as separate, different pieces of architecture: a zone in Europe with an availability zone, another one over in the States, and another over in Asia, all linked together over the public internet. It works. It's all right. If you know what you're doing with networking, if you're good at doing stuff with SDN, if you're good at building things like IPsec tunnels, you can actually start to build out really cool architectures. Amazon are clearly the world leader in this; they're doing quite well and they're certainly not doing things badly. But does it have to be the way to do it, and is it necessarily the best way to build scalable architecture?
We don't necessarily think so. In this example here, with the bottom tier being a database, is it really the best architecture to have things like your database in each particular zone, routed out via the internet? It's something you have to manage yourself and, again, it may not be your core expertise. So the way that we do things: three separate zones, globally, but what we've done with that big private network is link them all together. So your zone in London is directly and privately connected to your zone in Hong Kong, and directly and privately connected to the one in New York. All of these tie together.

Sorry, what was that again? If you've got AWS, between their availability zones it's not done on a private network, so you typically have to route over the top, between their separate zones, over the internet. This is done as a private MPLS network that links between the two. But a VPC? That's virtual; it's still over the public internet, though. This is a completely private global network that doesn't touch anything on the internet. So all of this stuff in the bottom green zone, none of it is on any infrastructure that ever touches the public internet. A server within these zones, if it's got two separate vNICs, the top vNIC can route out via the public internet and is out on the standard cloud as you have it. The second vNIC on the back end sits on a private 10 Gig WAN, which is on its own MPLS core, which runs globally. So the stuff at the bottom isn't on any public architecture, and things like DDoS storms wouldn't ever really affect it. Does that answer your question?

So, yes, bringing you back to this. There are technologies that can help you do pseudo versions of this, but the thing that makes this different is that from the compute in each zone, right the way across them, there's an SLA on it. If you then start to bring your users in at this private back end, and some of the people here were talking about government contracts, things like what we do for the European Space Agency, we tie in with these guys and their corporate wide area networks at the bottom end, meaning that we give an SLA from the server right the way down to the user. It's largely similar to things like AWS Direct Connect and Microsoft ExpressRoute, except we do this globally.

So the next thing: build on the right platform. One of the other things about global scaling is, why do you need to scale? If you're hosting everything, say, over in Dublin and all of your user base is in Spain, the latency between the two is quite significant, I think it's about 12 to 15 milliseconds or so. So one of the things that you're having to scale against is how TCP works for the delivery of any particular piece of content. You can use a CDN, but is a CDN necessarily always required if your site has a particular regional requirement? If you've got a site based in Spain, why host it all the way over in Dublin if you can host it closer? Latency in itself can be quite a hidden cost in how something scales.

Which brings us on to this slide. For us in Europe, and again it's a slight differentiator: if you're hosting stuff in Spain, great, we'll put it for you in the middle of Madrid. If you're hosting things that are politically sensitive and need to be held in Switzerland, great, we can do that in two of our sites in Switzerland. The same in Germany, and the same over in Asia and over in the States.
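To put a rough number on that hidden cost: a single TCP connection's throughput is bounded by roughly the window size divided by the round-trip time, so every extra millisecond of distance eats directly into what one connection can deliver. Here is a minimal back-of-the-envelope sketch in Python; the window size and round-trip times are illustrative assumptions, not measured Interoute figures:

```python
# Rough illustration: a single TCP stream is bounded by roughly window / RTT.
# The window size and round-trip times below are illustrative assumptions.

def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Approximate upper bound on single-stream TCP throughput in Mbps."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000

WINDOW = 64 * 1024  # a common default receive window of 64 KB

# Hosting far from the user (~15 ms RTT) versus hosting locally (~2 ms RTT)
for label, rtt in [("Dublin -> Spain, ~15 ms", 15.0), ("Madrid -> Spain, ~2 ms", 2.0)]:
    print(f"{label}: ~{tcp_throughput_mbps(WINDOW, rtt):.0f} Mbps per connection")
```

Bigger windows and multiple parallel connections claw some of that back, but every extra round trip, TLS handshakes, redirects, uncached assets, still costs a full RTT, which is why hosting close to the user (or using a CDN) matters before you start adding servers.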
But really, for any European colleagues or any European builders here, if you need something in Europe, as a platform it's quite different. And the final two slides on the product: this is one where we've compared it directly against Amazon, Rackspace, Azure and others. Between our platform and the others there are significant performance differences. This is Interoute, Amazon and Rackspace going from London to New York in a straight gunfight, just between two individual servers, all done by third-party analysts. On two servers on AWS, 500 megabits; on Interoute, basically twice that. So one of the things again on scaling: how close are you? Latency is an issue. How fast are you? What's the fundamental performance of the bandwidth through to the servers?

And then finally, on the platform itself, we do three types of cloud. The first is cloud in the traditional sense, shared machines on shared infrastructure for customers. The middle one is dedicated blades, and I've got a customer example at the end of this if I've got time, which is where we do blades as a service that you can ramp and scale into. So if you've got something with a very high CPU requirement, you can take individual blades from us, dozens of them, on an as-a-service basis. And then finally, if you've still got old architecture, or you've currently got your own hosting environment, you can take straight co-location from us, or we can tie into your existing co-location, so you can have a fully aggregated environment from public cloud, through dedicated clusters, to your own existing infrastructure.

And now, finally, on to the next section: the right technology. So what's right for you? X as a service: infrastructure, platform, software. The industry uses quite mixed names for the different types. So what is infrastructure as a service? The way that we see it, at its core it's an empty data centre where infrastructure is delivered and you take absolute control over everything that is built on top of it. In this instance there's network and there's compute. You build whatever VMs you want on top of it, you build whichever routing platform you want on top of it, and you tie it all together. There's complexity in it, because you own it and you own the design of how it all works together, but there's also control in it from your end. We also do a thing called IaaS-plus, as other vendors do: we can manage on top of the IaaS, so we can do things like managed OS. And the further you go up this stack, the more you're making things easy for yourselves as far as technologies you don't know are concerned.

Then platform as a service. If you look at Amazon, Amazon very much do more platform as a service. They do a lot of tooling, things like Direct Connect, a lot of these things as a service where you can tie directly into them. And then finally you can buy software as a service from various people, which can sit on top. In that instance it's basically the finished product: you don't get to define what the software does or how it does it, you just use it as a consumer. The thing to bear in mind when you're thinking about scaling and how you build out your own architecture is that the lower down the stack you go, the more complexity and stuff you need to understand yourselves, but the more control you've got.
The higher up the stack you go, the less control you've got, but also the less responsibility you've got. There were some big outages this weekend. Everybody has outages, but the reality was, how many CIOs or people in charge of big companies were saying, "There's a big outage, how are you going to fix it?" "I don't know, it's up to the provider." It's an interesting one to think about. If you're building out a big architecture, perhaps look for a multi-vendor approach, or certainly look to know how these things work, so that within your own break-fix plan and strategy you know ways to work around things if or when there are issues.

So, another thing that we've done: Docker. It's definitely changing the way that IT compute works. You all probably know this, and I don't want to teach you to suck eggs: the old way of doing compute was one server, one OS, one application. Virtualisation came along and you then had multiple OSs on one particular piece of kit, or across multiple pieces of kit. It's great, VMware have been doing this and there are many other hypervisor technologies, but the problem has now moved: it's moved away from having an ageing server that needed to be replaced on a cyclical basis, to a situation where the operating system needs to be changed and replaced on a cyclical basis. Within the enterprise, Windows Server 2003 caused massive headaches, and the reality was that people had huge estates built on things like VMware and were still having to care about an operating system that was going end of life. It shouldn't really work that way. In the last talk a moment ago, they're building everything on Docker, and the reason it's great is that it's really starting to abstract even further away from the hardware. If you're building out something that you want to go from one node to 100 nodes, don't think of going from one operating system and application stack to 100 operating systems. You can look at using something like Docker so that you can scale out far quicker and far more rapidly than having to build out multi-OS deployments to get that scaled architecture, and it also means that within your deployment you can be far more flexible, far quicker.

In our environment, the way that we do Docker, every Docker container sits on CoreOS, and each CoreOS platform sits on this big global network. So for us to build out a global cluster right the way around the world, it's as easy as going on the site: you pick the zones that you want, you pick how many CoreOS nodes you want in each of those zones, we tell you how much it's going to cost and we then build it out. In this instance, once you've clicked on this, these are the operations that happen: we go out and deploy CoreOS in each of the zones, we build a private network that links each of these separate zones together, and we then give you direct access into the CoreOS cluster so that you can start to roll out your own Docker applications on top. We're also working with a number of different Docker and application vendors so that we can get those guys into our App Store, and we're looking to work with a number of different platform services that can go on top of this as well.
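As a rough sketch of what that "pick your zones, pick your node counts" workflow could look like when driven through an API rather than the GUI: the base URL, endpoint, payload shape and field names below are hypothetical placeholders, not the actual Interoute VDC API.

```python
# Sketch only: the endpoint and payload are hypothetical, not a real provider API.
# It illustrates the workflow described above: pick zones, pick CoreOS node
# counts per zone, and ask for a private network linking the zones together.
import requests

API = "https://api.example-cloud.net/v1"   # placeholder base URL
TOKEN = "..."                               # your API credential

cluster_request = {
    "name": "global-docker-cluster",
    "os": "coreos",
    "zones": [
        {"zone": "london",    "nodes": 3},
        {"zone": "hong-kong", "nodes": 2},
        {"zone": "new-york",  "nodes": 2},
    ],
    "private_network": True,   # link the zones over the private WAN
}

resp = requests.post(f"{API}/clusters",
                     json=cluster_request,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
print("Cluster being built:", resp.json().get("id"))
```

Once the cluster comes back, you would point your own Docker tooling at the CoreOS nodes and roll applications out on top, exactly as described above.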
But the thing that makes this simple is that instead of having to think of a platform in a zone in a location, you can now start to think of a compute platform that goes globally, and it's just a couple of clicks to actually roll some stuff out. The other way that you can do all of this: with all of the technology that we've got, as with most clouds, everything can be run through the API. So when you build on our platform, you can go through the straight GUI if you want to do it the easy way, or you can put things into GitHub and actually start to automate, or use things like Chef or Puppet. In this instance you check your users, you look at the load balancing of the traffic, you build out the platform, and as your users go up the platform goes up; as your users go down, the platform goes down. Meaning that overall, once you've got the platform in each zone, you can scale through the API, and you can do that globally. So what you can start to do is take a three-zone platform, London, Hong Kong and New York, and run it on a minimal basis right the way around the world until those particular zones actually wake up and open for business.

Another thing that you can do, and another technology, is platform-as-a-service scaling. This is where we realise that Interoute is not going to be the cloud that everyone uses; we also realise that Amazon isn't going to be the cloud that everyone uses. So what technologies can you put on top of these clouds? Going back to the earlier example where you've got VDC, these are two of our zones, London and Hong Kong for instance, but then you've also got your other cloud architecture, so you may have some stuff in Equinix or Rackspace, or some stuff on Azure or elsewhere. You start to build out on each of these separate clouds to give you resiliency, and we can do some cool things: for instance, we will, for free, link with every other cloud provider in the world. Right the way around our network, if you want us to link into your existing providers, we'll do so and drop free gig ports everywhere. The reason we do this is so that our platform can start to be a ubiquitous cloud platform that ties it all together for you. The thing that then starts to be interesting is that you can route all of these together in an easy way. In this example, one of our customers, Liquid, do a cool DNS platform: it does global DNS round robin and it also does global scaling via APIs. So you can use Liquid on top of us, Azure, AWS and other clouds, tying directly into the APIs, so that you can actually do auto-scaling between providers, and when you tie it together on one network it becomes a really easy way to do this.

The other thing on scaling is how you actually scale. Commercially, if you look at a lot of cloud providers, the way they build out their clouds is that you have small, medium, large and extra-large servers, and to go from a large to an extra-large there's a significant cost. We've done this quite differently: we let you build and choose any number of servers in any iterative size that you want. If you want a machine with 128GB of RAM and one core, knock your socks off. Likewise, if you want 40 cores and 1GB of RAM, knock your socks off. The main thing here is that as you build out a platform, when you need to scale, you can scale it in iterative steps if you just need to give slightly more horsepower to individual machines.
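As an illustration of that "users go up, platform goes up" loop, here is a minimal polling sketch in Python. The endpoints, metric names and thresholds are hypothetical placeholders rather than any real provider's API, but the shape is the point: watch the load balancer, nudge node counts up or down in small steps per zone, and try to stay ahead of the curve rather than chase it.

```python
# Sketch of an API-driven scaling loop. All endpoints, metrics and thresholds
# below are hypothetical placeholders, not a real provider API.
import time
import requests

API = "https://api.example-cloud.net/v1"
HEADERS = {"Authorization": "Bearer ..."}

MIN_NODES, MAX_NODES = 2, 20
SCALE_UP_AT = 0.70     # add a node when average load exceeds 70%
SCALE_DOWN_AT = 0.30   # remove one when it drops below 30%

def current_load(zone: str) -> float:
    """Average utilisation across the zone's load balancer pool (0.0 - 1.0)."""
    r = requests.get(f"{API}/zones/{zone}/load", headers=HEADERS)
    r.raise_for_status()
    return r.json()["average_load"]

def set_node_count(zone: str, count: int) -> None:
    """Ask the provider to grow or shrink the zone to the requested node count."""
    requests.put(f"{API}/zones/{zone}/nodes",
                 json={"count": count}, headers=HEADERS).raise_for_status()

nodes = {"london": 2, "hong-kong": 2, "new-york": 2}
while True:
    for zone, count in nodes.items():
        load = current_load(zone)
        if load > SCALE_UP_AT and count < MAX_NODES:
            nodes[zone] = count + 1      # scale in small steps, not big jumps
        elif load < SCALE_DOWN_AT and count > MIN_NODES:
            nodes[zone] = count - 1
        set_node_count(zone, nodes[zone])
    time.sleep(60)                        # poll often enough to lead the demand curve
```

The same loop could sit behind Chef or Puppet runs, or be triggered from a CI pipeline out of GitHub, which is the automation route mentioned above.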
You don't need to do it by doubling or quadrupling your assets if you don't really have to. So, Photobox, one of our example case studies in Europe. You'll probably know them; you've probably got a mouse mat or a coffee cup from them. These guys do consumer photo printing on just about everything. Their requirement: they were coming up to Christmas, this was last year, and we're doing exactly the same for them this year. They wanted a two-month ramp-up platform, and they needed a huge amount of capacity. The way their application works, it takes the JPEG of the picture of the child, wraps it around the cup and renders it in the browser, so at the front end there's quite a lot of processing that needs to happen on their front-end servers. They looked to do this on AWS, they looked at other people, and they realised they couldn't do it on normal cloud technology, or they certainly tried it and didn't get as much performance. What they wanted was non-contended CPU. So what did we build? We gave them dedicated hardware as a service, still completely scalable: I think it was 47 separate blades, each with half a terabyte of RAM and 20 sockets in it, and this was turned up and scaled for these guys overnight. The platform gave them everything they needed, but still as a service and still only for a couple of months. They then scaled it right back for the rest of the year, and what we're doing for them this year is allowing them to scale into this dedicated hardware environment again. And the good thing for us is that each of these blades and each of these chassis is the same stuff we run the shared compute on, so for us it's just a logical change on the same platform for each individual customer.

Scaling through the API: all of this is booked resources that a customer reserves with us, and then they basically choose which node to go to. So, for instance, they have a particular zone that they can choose, and underneath that they get separate zones, which is the dedicated equipment that they pull to. Yes, they can do that. If it's pre-allocated and booked to them, we can still do it and charge on an hourly basis. This isn't a straight standard thing, we do work with customers on how to build it, but once we've set it up with them, yes, they can do that straight through the API.

So why did these guys come to us? It's not competition bashing, but they found that the performance with us was better than AWS, partly because they're actually based in France and they were previously doing all of their data crunching over in Dublin. Getting closer and building stuff in the VDC in France meant it was quicker for them. It was easier for them to do, it was simpler, all the networking that we tied from their co-location and their manufacturing plants through into this was dead easy, and then there was also our assist team: we actually went in, whiteboarded and built the solution with them, so they got a lot more confidence in how we were working with them.

So finally, how do we see auto-scaling? How do we see you scaling? First thing: understand why you're scaling. What's the actual problem and what do you need to scale for? Don't necessarily think that if you've got a slow server you need to get multiples of it. Why is that server slow?
Is it too far away from your user base? Is its own performance contended for some particular reason? Choose what you're going to upgrade, and then also look at how you're going to build things. If you go for platform as a service and do something like AWS, great, you can do that, but the simpler you make some things and the more you buy from a particular vendor, the more tied in you are when something goes wrong. So we are very much of the opinion that if you're going to build out a platform for scale, work out how you can either go for a multi-cloud option, or certainly build resiliency in so that you've got a way to work around it if there are ever issues with the provider. So finally, make it as simple as you can and make it scale. And please do come down to the stand; I would love to give you a demo and show you how it works and how it all ties together. Thank you very much.