I'll bring it all through Docker: how we've implemented Docker, what it is on our platform, and also a brief overview of what our platform is as well. We love Drupal, and everyone here says they love Drupal; at Interoute ourselves we were a big Drupal user. We went and redesigned all of our front-end websites to actually sit on Drupal, so we ourselves are very much a heavy user within the industry itself. Docker meets MPLS. What am I going to talk about? First of all, I'm going to cover who Interoute are. Speaking to a lot of the people that we've met here, Interoute isn't a brand name that a lot of people know, so I'm going to talk a little bit about who we are. Then I'm going to go on to what Docker is. For those of you that do know, I'm sure you know it through and through, but for those of you that don't, what does it mean? What does it mean for the industry? And then thirdly, the evolution of a digital platform: how we are actually tying Docker into our cloud, and how that can then help you and your users move your projects forward. And then finally, just an example of how easy it is to cluster something like MongoDB and build it right the way around the world. So first of all, who is Interoute? Today you've almost certainly all used us in one capacity or another. We run about 40% of all of Europe's internet traffic. So on 40% of all of your journeys somewhere over the internet in Europe, you're going to have touched our network. We're about a half-billion-dollar company, and we've got one and a half thousand people.
And from a hosting perspective, this is Gartner's view on cloud-enabled managed hosters. We're right the way up in the top right-hand corner. So whilst you may not have heard of us, we're certainly a respected provider within the industry. Our customers are quite wide and varied. On one end, at the bottom, we've got big ISPs and network providers. Customers like Telefonica rely on our network to provide their outbound access. AT&T and China Telecom have used us for all of their European connectivity. Then within the European space, and certainly within the enterprise and online space, there's SIMSme. SIMSme is a branch off of something from Deutsche Post. It's a little bit like WhatsApp, but for the German market: they wanted WhatsApp, but they wanted it to be secure, they wanted it to be controlled, and they wanted it to be housed within the EU. So they came to Interoute, and that's housed on our compute platform. Other customers include the European Space Agency. We provide them with all of their networking facilities down from the satellite downlinks, and also all the computing capacity that helps them deal with all their telemetry data from the spacecraft. Finally, there's UEFA. For anybody that likes football, any of you that watched any UEFA games: everything at half-time where you've got the pundits with their touchscreen technology, saying where the goalies and the players should have gone, all sits on Interoute's compute platform, as do their websites, as do the ticketing systems. And again, there are customers like Air Berlin. So we sit behind big, big companies that live and breathe on the internet. So the product that we've got here is Interoute VDC. It's our virtual data centre, and I'll briefly take you through what it is, because it gives a lot of context to how it ties into Docker. So what is it? VDC really starts off, first of all, with our great big global network. You'll see in Europe we've got a huge network.
It touches pretty much every big metro city, and it goes right the way from Gibraltar on the south coast of Spain, right the way up, and then over through to Moscow. It's also fully global, over through to Asia and over into the Americas. So, virtual data centre: it is infrastructure as a service. It's CPU, RAM and storage in locations all over the world. So far, so similar to every other cloud vendor; a bit like Amazon, a little bit like Azure. It then gets slightly different in the way that we've actually gone and built it. We've gone and built the cloud into the back of that big private network, and it gives you a huge amount of flexible options. Instead of having cloud in one location, it means you can take cloud and infrastructure as a service globally, all the way around the world. So once you build a server in our data centre in Madrid, you're then privately and totally connected to every other data centre on our network. So if you're building a big global platform, you can host it locally, right in the middle of Spain, and it's privately connected right the way over to Los Angeles and right the way over to Hong Kong. It means you get a different way of building out cloud services, and it means that the integration piece works a lot easier and a lot better. You no longer need to think of cloud as something somewhere on the internet. And the zones are right the way around the world. You'll notice from a European perspective as well, we've got a lot more zones than any other provider. So if you need to host something in the EU because of data protection laws or because of latency requirements, we're pretty much the only provider that can actually build that level of depth of infrastructure across Europe. The other thing that we've done, by building cloud into a network, is that the performance on this is massive.
This is where we were compared to every other cloud provider, or the large ones at least. We're in black, Amazon's in yellow, Azure's in blue. This is the latency difference between building cloud into a network rather than somewhere on the internet. So far, the differences are fairly similar. But when you start to look at the performance and the bandwidth and the throughput, there are massive differences. On the left-hand side, this is two servers on Interoute versus two servers on Amazon, and how quickly these things communicate over the internet. We're basically twice as fast as every other cloud provider. And when you're building out big cloud-scale infrastructure, big Drupal projects for your end users, performance is a key thing that you need to make sure happens, and that's really what we've built the network to be. And then finally, back again, we're all over Europe. So again, on that European piece, in and around Europe we surpass every other provider, and we certainly also do the global connections as well. So, on to the main piece: Docker. Docker meets networks. We all know sort of what Docker is, but how does this actually tie in? What happens when you take infrastructure as a service and you want to go global, and how do you build it all out? So, Docker. In the old way of doing computing, 10 or 15 years ago, an application sat on a server, and it performed an individual task. Then along came VMware and various other virtualisation technologies, and it meant that one piece of hardware could host multiple operating systems, with multiple applications sat on top. All good, all great. You then move forward to 2015, and for any of you that have got large enterprise environments, or have had large hosting environments that have been going for many years, you have probably suffered from the Windows Server 2003 issue.
The reality is, whilst VMware and virtualisation give you the ability to abstract the operating system away from the hardware, they still leave you with a fundamental problem: for each application you have an underlying operating system which itself needs patching and vulnerability protection, which is an issue. So what Docker does is abstract further away from the hardware. Docker, or CoreOS and the other technologies that sit underneath these containers, becomes the operating system at a compute layer across multiple servers and across multiple areas, so that now when you roll out your applications, whether it's an Apache stack, a LAMP stack or something like this, you only deal with the application. Everything else that sits underneath it is now largely taken away from your care and your responsibility. It means that you've got compute resources and applications that you can move around far more easily than you could when you had to move around an entire operating system. To spin up a LAMP stack on something like Ubuntu, you're talking 5, 10, 15, 20 minutes to actually build it out. If you're building out Apache on Docker in a container, this can be done in 10 or 15 seconds. If you then replace the application, you can do it with a much smaller and easier change, so the frequency of change and deployment can be much quicker. And whilst Docker abstracts further away from the hardware, when you then start to combine that with a network-connected cloud like ourselves, you start to get some really quite cool things. So what have we done to make cloud easier, and how does that tie through with Docker? The first thing is to look back at how most people do cloud. For anybody with anything in Amazon Web Services, you've probably got it hosted in Amsterdam or over in Dublin, in Ireland.
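To make that "seconds, not minutes" point concrete, here's a minimal sketch of my own (not from the talk) that builds the `docker run` command line for the official Apache image, `httpd`; the port numbers are just examples.

```python
def docker_run_cmd(image, host_port, container_port, detach=True):
    """Build the argv list you'd hand to subprocess.run() on a Docker host."""
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")  # run the container in the background
    cmd += ["-p", f"{host_port}:{container_port}", image]
    return cmd

# Pull and start Apache: typically up in seconds, versus the minutes
# a full VM with its own operating system would take to build out.
print(" ".join(docker_run_cmd("httpd", 8080, 80)))
# docker run -d -p 8080:80 httpd
```

On a host with Docker installed, running that command is all it takes; replacing the application is just running the same command with a different image name.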
The way that Amazon and Azure have gone and built their cloud is that their cloud is stuff somewhere on the internet. It may well be hosted over in Dublin, and within this stuff on the internet you've got availability zones in a region. But these things are individual components and individual platforms somewhere out on the internet; they're not holistically connected. To connect a zone in Europe to a zone in America to a zone in Asia, you need to start doing things like IPsec or software-defined networking, and that adds to the complexity of what you're trying to build. If you're building out big web-scale infrastructure, you want to concentrate on the web-scale stuff and let the infrastructure itself follow. And when you start to look into building a multi-cloud option, say if you've got some in Amazon, some in Azure, some in Rackspace, some in us, this stuff starts to get really complex. How do you actually tie all of these clouds together? On one hand, you've got Docker abstracting further away from the actual hardware, making application deployment really easy. But on the other, you've got all of these clouds operating as separate, distinct clouds, which means you're having to understand more about how networks and routing are actually built. So you're sort of taking away with one hand and giving with the other. So this is the way Azure and the others do cloud: three separate data centres, all connected over the public internet. You've got to use SDN, IPsec or other technologies to connect them all. If you're a network engineer, it's dead easy. If you're not a network engineer and you're concentrating on application development, these things can be quite difficult to get your head around. And from a security perspective, they can be quite tricky. You've got a database at the bottom tier of these applications, and you've got this routing over tunnels over the public internet.
If you screw that up, you can actually be leaving your core infrastructure and your core backend database open on the internet, which is not necessarily the best way of doing things. So what we did: we took the way that you do the operating system and the hypervisor, and we built it into the backend of a smart network. Within each of the sites on that smart network, the machine belongs to a VLAN. The VLAN has a subnet within it. That subnet then relates to something called a VRF, which in effect is a private routed network across all of the zones that we've got. This next slide makes it a little bit clearer. Taking the same three zones, so this could be Hong Kong, New York and London, for instance: they're all locally connected out to the public internet, so they've all got the ability to route out locally. But at the bottom end, they're all also connected to a great big private network. What this means is that for each of your servers, if they've got a backend database requirement, whether it's failover, replication, backup or storage, all of that happens at this green network layer at the back, safe and secure, and all of your local public internet traffic happens at the top end, where you need access out to the public internet. And finally, just one thing on the platform before we get back to Docker: the way we've built the infrastructure. We've got straight cloud, which is shared machines on a great big shared platform, so you get all the RAM and the CPUs at a small contention ratio. But we also do things like blades: blade as a service. It still comes with a hypervisor, it's still infrastructure as a service that we manage and look after, you still click and buy it through a GUI, but the blades are dedicated to yourselves.
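That VLAN/subnet/VRF arrangement can be modelled in a few lines. Here's a toy sketch of my own, using Python's standard `ipaddress` module; all zone names and addresses are illustrative, not real Interoute allocations.

```python
import ipaddress

# Each zone routes to the public internet locally, while every machine's
# private address sits inside one shared network that the VRF routes
# between zones. All addresses here are illustrative.
PRIVATE_VRF = ipaddress.ip_network("10.0.0.0/16")

zones = {
    "London":    {"public": "192.0.2.10",   "private": "10.0.1.10"},
    "New York":  {"public": "198.51.100.7", "private": "10.0.2.10"},
    "Hong Kong": {"public": "203.0.113.5",  "private": "10.0.3.10"},
}

def reachable_privately(src, dst):
    """True if both machines sit on the private VRF network, i.e. they
    can talk backend-to-backend with no IPsec or SDN tunnels needed."""
    return all(
        ipaddress.ip_address(zones[z]["private"]) in PRIVATE_VRF
        for z in (src, dst)
    )

print(reachable_privately("London", "Hong Kong"))  # True
```

The point of the model: the green backend traffic never has to leave the 10.0.0.0/16 network, while each zone's public address handles local internet access separately.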
So where you've got big, high-end, high-performance sites like ESA, UEFA and those sorts of guys, you can take blade chassis and still build them as cloud infrastructure. And finally, we tie into all normal co-location providers as well. So if you've got some old hosting yourself, on old equipment that you own, we can migrate that and bring it into our platform. So anyway, back to the point: using Docker. Now, if you take Docker, and you've got Docker in London and New York, you've got CoreOS, which is basically the OS that you can run Docker on, and you want to link these two things together, it's quite difficult on a typical, normal cloud, because you've got to build the routes between the separate zones. On Interoute, however, it's really easy. The CoreOS machines that you stick in each of the zones all live on their own VRF, a private network that links them together. This at the top is a nine-node Mongo cluster built on CoreOS. You'll see they've all got 10.0 addresses. The 10.0 subnet exists in each one of our separate VDC zones around the world. Each one of these CoreOS nodes implicitly, directly and privately routes from zone A to zone B to zone C. There's no need for any more knowledge or technology on your part other than: you've got a machine and you place it on a subnet, and that subnet is automatically aware of everything else around the world. So this is Mongo all over the world, all tied together. So now, how does it work? What does it look like? How do you actually connect this stuff? It's quite easy. We're building a newer, sexier, easier interface for this. If you've got a requirement where you want a big cluster of a CoreOS platform right the way around the world, you want to tie it all together, and on top of that you want to build your containers, you don't need to deal with any routing. You don't need to deal with any complex building of SDN or IPsec.
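For a nine-node Mongo cluster like the one described above, the replica-set configuration is just a document listing each node's private address. Here's a sketch; the 10.0.x addresses and the replica-set name `rs0` are illustrative, but the document shape matches what MongoDB's `rs.initiate()` expects.

```python
def replica_set_config(name, private_ips, port=27017):
    """Build the config document you'd pass to rs.initiate() in the
    mongo shell, once you have SSH access to one of the CoreOS nodes."""
    return {
        "_id": name,
        "members": [
            {"_id": i, "host": f"{ip}:{port}"}
            for i, ip in enumerate(private_ips)
        ],
    }

# Three nodes in each of three zones, all on the private 10.0 subnets,
# so replication traffic stays on the VRF and off the public internet.
nodes = [f"10.0.{zone}.{host}" for zone in (1, 2, 3) for host in (10, 11, 12)]
config = replica_set_config("rs0", nodes)
print(len(config["members"]))  # 9
```

Because every node's 10.0 address routes privately to every other zone, this one document is enough; there's no tunnel or gateway configuration hiding behind it.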
All you do is go on here, pick and choose the locations, and pick and choose however many CoreOS nodes you want in each of the locations. We'll then tell you how much it's going to cost. By the way, these are example costs; come and see us at the stand and we'll give you better pricing details. And that's all you've got to do. So in this instance, we've got five zones. You've said: I want a CoreOS block in five zones around the world. You click go. What we then do is go out and build CoreOS in every single one of those zones. Each of these CoreOS platforms in each zone lives on its own separate subnet. We then build out a global private network that links all of these right the way around the world. At this point, the CoreOS node in London can privately communicate with the CoreOS in New York and all of the others. And as soon as that's done, you get straight SSH access into the middle of the CoreOS platform. Once you've done that, you can start to deploy your Docker containers right the way around the world. All of this is done easily, and it takes four clicks to actually get there. So you've gone and built a great big, globally scaled-out platform really, really easily. Once you've done it, you can run some pretty cool things on top of this. You can take the latency between each of the zones, which, as it's on a private network, is far more steady than you'd get on the internet. And you can then start to do rule sets about which is the primary, which is the backup, and which users should belong to which local zones. And then finally, once you've actually got all of these built out, you can use our built-in firewalls and built-in load balancers to take users within the Hong Kong area and serve your customers from Asia, and take your users from the Americas through a New York or Los Angeles connection.
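That kind of rule set could look like the following sketch: pick a primary by lowest total latency to the other zones, and send each user to whichever zone answers quickest. The latency figures in milliseconds are invented for illustration.

```python
# Invented inter-zone round-trip latencies (ms) over the private network.
latency_ms = {
    ("London", "New York"): 70,
    ("London", "Hong Kong"): 180,
    ("New York", "Hong Kong"): 200,
}

def rtt(a, b):
    """Symmetric lookup into the latency table."""
    if a == b:
        return 0
    return latency_ms.get((a, b)) or latency_ms[(b, a)]

ZONES = ["London", "New York", "Hong Kong"]

def choose_primary(zones):
    """Primary = the zone with the lowest total latency to the others."""
    return min(zones, key=lambda z: sum(rtt(z, o) for o in zones))

def nearest_zone(user_latencies):
    """Route a user to the zone that answers quickest for them."""
    return min(user_latencies, key=user_latencies.get)

print(choose_primary(ZONES))  # London
print(nearest_zone({"London": 150, "New York": 90, "Hong Kong": 30}))  # Hong Kong
```

Because the private network keeps latencies steady, rules like these produce stable answers rather than flapping the way internet-measured latencies would.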
All of them pulling back in, and all of them, in this instance, using a MongoDB and accessing that data globally through all of the zones tied together. So in summary: containers are great. They're going to change the way that we do computing. If you look at VMware now, my personal thought on them as a provider in the virtualisation space is that these guys are going to have to do a slightly different dance in the next few years. I think VMware as a platform for hosting operating systems on tin is going to move slightly away from this. Whilst Docker is the pretty girl at the party at the moment that everyone's talking about, the reality is I think technology is going to move far more towards containerised deployment. And it's definitely going to change the way that we build out cloud infrastructure. Where you take something that simplifies all of the technology underneath it, like Docker or containers, there's no point simplifying the architecture at the top end without simplifying it at the bottom end. So you need to find providers, and we do it, some other people do it as well, that can actually help do this. If you're simplifying your architecture, make sure it's simple front to back. Once you use containers, once you've built them into a simple platform that scales, it's quicker and easier for you guys to go global. You no longer think of hosting something in one data centre or in one zone. And finally, the way we've gone and built out cloud, it's private by default. So don't think of cloud as something somewhere on the internet that needs to be publicly accessible, with the internet being the only way to link zones. We've gone and built a great big private global network for hosting sites like all of your customers have got, but with the segregation built in, so that backend databases and backend secure services stay private and stay away from the public internet.
Finally, come and see us downstairs. We're in the hall on the left-hand side as you come in. It'd be good to give you a demo of this stuff; it's really quick and really easy to do. The whole thing can be done either through a GUI or through a simple API, and you can script it all so that you can build all of this stuff automatically. It'd be great to show you. Any questions at all? I'll take silence, it's golden. Thank you very much for your time.