Thanks for coming to see us today. The idea of this talk is to tell you the story of how we built RunAbove. First, I will give you some context on what RunAbove is. We are part of the OVH group, a major hosting provider in Europe: we host 180,000 physical servers in 17 data centers, mostly in France, but also in North America. Nicolas and I are really happy to be with you today to share the story of RunAbove. RunAbove is a kind of internal startup within the OVH group; we are only a few dozen people behind this new product. It's the first commercial public cloud offer of the OVH group. OVH had built multiple public cloud beta products over the last few years, but with these different teams across Europe, in Poland and in France, and also in Canada, we decided to start fresh, and to start fresh using OpenStack. The idea is to build different types of services on top of this OpenStack implementation. Currently we offer two types of services: public cloud computing, so basically instances you get by the hour, and public cloud storage. But as you can see on the graph, the project will go further very soon; we aim to provide services such as bare metal over OpenStack, and we will come back to that afterwards. As for what we will tell you today, this is the plan of the presentation. We will speak about these different parts for about 30 minutes, and then we will have a few minutes to take your questions. I will now leave you with Nicolas, who will present the network part.

Hi everyone, I'm Nicolas. I will talk about the technical side of RunAbove, starting with the network. So, here is the map. OVH, the group, has a few data centers across the world, mainly in Europe and North America. We have connected these data centers with fiber; the total capacity of our network is 3 terabits per second.
And here you can see 19 points of presence around the world, mainly in Europe and North America; these are the points where we peer. For RunAbove, we decided that if we wanted to make a powerful cloud, we needed a powerful network. So, to do that, we designed our network from scratch. On the left side of this slide, you can see a diagram of a rack. Entering this rack is the OVH 3 Tbps network, and at the top of the rack we have a switch that can switch up to 1 terabit per second. Then all the compute nodes running Nova have a dedicated network interface for the customer, at 10 gigabits per second. On the right side of the slide, you can see a switch and our object storage, which is also connected to this network. But with a great network comes great responsibilities, as OVH already found out: customers are not always nice to each other. Sometimes there are DDoS attacks. We use OVH's internal system for that, called VAC. What it does, basically, is vacuum-clean incoming DDoS attacks. It scans the incoming packets at three different points in the world, and when it detects something bad, within a few seconds it removes the bad traffic from the traffic going to the customer. So the customer with servers behind it doesn't notice anything. And of course, we needed the other way around as well: to prevent customers from running attacks against the outside of the OVH network, but also against other customers of OVH or of RunAbove. This is currently done with DPDK: we scan the network, and when we find something bad, we just suspend the VM. As a story: when we came out of beta this summer, the first thing a customer ever did was to rent 20 VMs with 10 Gbps network interfaces and launch a DDoS attack against another customer. So we needed this feature from the very first days of RunAbove.
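To make the outbound-attack detection concrete, here is a minimal sketch of the decision step in Go. This is purely illustrative: the real system inspects packets with DPDK, and the `TrafficSample` type, the thresholds, and the field names here are all made up for the example.

```go
package main

import "fmt"

// TrafficSample is a hypothetical per-VM traffic measurement;
// the names and thresholds below are illustrative, not RunAbove's
// actual internal API.
type TrafficSample struct {
	VM        string
	PacketsPS int     // outbound packets per second
	SynRatio  float64 // fraction of packets that are TCP SYN
}

// looksLikeAttack applies a crude threshold check: a huge outbound
// packet rate made up almost entirely of SYNs looks like a flood.
func looksLikeAttack(s TrafficSample) bool {
	return s.PacketsPS > 1_000_000 && s.SynRatio > 0.9
}

func main() {
	samples := []TrafficSample{
		{VM: "vm-1", PacketsPS: 12_000, SynRatio: 0.10},
		{VM: "vm-2", PacketsPS: 4_500_000, SynRatio: 0.97},
	}
	for _, s := range samples {
		if looksLikeAttack(s) {
			// In production this is where the compute API would be
			// called to suspend the offending VM.
			fmt.Printf("suspending %s: outbound flood detected\n", s.VM)
		}
	}
}
```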
About the design of the hosts used to make this network powerful: we decided that we needed two different network adapters. The first one, on the right side, is the one used by the user. It's only for him, so he can do whatever he wants with it, except launching DDoS attacks, and he has the full 10 Gbps to himself. On the other side, we have another network adapter that we use for administration, monitoring and also for our Ceph clusters; I will talk about that later. One last point about the network: we are currently building something from scratch, which is very exciting. It's a router. We have a big partnership with Cisco, but they don't actually provide the hardware we need right now, so we decided to build a router from scratch. This router is currently a work in progress, but it will mainly integrate RunAbove within the OVH vRack. For those not familiar with it, vRack allows OVH customers to have private networks between all kinds of services within OVH: dedicated servers, cloud servers, RunAbove instances, whatever. Thanks to this router, we'll provide VPN access, load balancing, IP failover and firewalling.

So, the aim of the presentation is to show you how we built that and how you could apply it to your own OpenStack implementation. But there are also a few ways we differ. The first is that, as a public cloud provider, we offer the possibility to host your VMs in different geographical regions. We currently have two data centers available for RunAbove: our North American data center, BHS, which is in Canada just next to the US border, and Strasbourg, which is in the Central Europe zone. We have plans to also offer RunAbove VMs and storage in other OVH data centers, as well as in new data centers on the US West Coast and in South Asia. That is the plan for 2015.
It's important to say that we chose OpenStack also because it means you can add your own on-premise data center, and even data centers from our competitors, and control your VMs across those different services with the same API. So we are really happy if you run VMs on RunAbove in one of these two data centers plus one in your own data center, for example. This is perfect if you want to build a disaster recovery plan, or if you need to handle extra load. Of course, the idea behind having different data centers is that you can pass the power we give you on to your customers by being close to them. Another way we differ from some competitors is that, after discussing with our clients, we identified several use cases for a public cloud offer, and we decided to build different lines of VMs adapted to these specific use cases. As of today we have three different offers: Sandbox, Steadfast, and one VM per host. The Sandbox one is designed for staging or testing, and it can also be used for non-critical apps: for example, an app monitoring your website, or any app you would be happy to trust to something with non-guaranteed resources. Most of our customers go for the Steadfast offer; it's intended for production, which means it offers high availability and dedicated resources. And we were the first on the market to offer a public cloud where you get one VM on one physical host. That means you have no noisy neighbors: all the power of the host is yours alone. So, what exactly is behind these three offers? On both Sandbox and one VM per host you benefit from local SSDs, on the machine you're sharing or the machine you're alone on, so you get great IOPS. If you choose the Steadfast one, you get a little less IOPS but higher availability, because there we use distributed storage.
Nicolas will come back to that later to explain how we built that storage. Also, because you use each product differently, we include some free traffic depending on the offer you choose: the equivalent of the free traffic you would use in one month with that VM. We were also the first to offer a public cloud with different architectures. You create a single account on RunAbove, and then you can spawn one or many VMs using either a standard x86 setup, based on Intel Xeon CPUs, or instances by the hour based on POWER8 CPUs from IBM. Here is a quick video showing how easy it is to launch a VM. The idea behind this video is to show that, again, for each use case we wanted to offer the best experience to the customer. For example, here we will launch a POWER8 Sandbox VM. The idea behind Sandbox is that it's a VM you start and stop very often, just to test a new version of your software, for example, so it has to boot really quickly. You choose the data center you want to spawn it in, you choose the type of instance you want to start, and then you choose the OS. At the time the video was made, about two weeks ago, there was only one OS available on POWER8, but now we also offer Ubuntu, and we are working with SUSE and Debian to maybe offer those on POWER8 as well. As you will see, the instance is ready in less than 30 seconds. This is our simple panel; we also offer Horizon, but Nicolas will come back to that afterwards. Here you are: you've got the public IP of your instance, and you can just log into it using any terminal. And there you are. So now Nicolas will go into more detail about the hardware you actually get from us. After the network, to build a powerful cloud we needed powerful hardware.
So, we decided to go with the latest Intel Xeon CPUs. They run at 3.7 GHz; there is no magic behind that, just that, in comparison to our competitors, one vCore on RunAbove is usually faster than what they offer. So again, no magic, it's just faster. But the magic is in our partnership with IBM: we've got brand new POWER8 CPUs, and they have 22 cores running 176 threads. It's a brand new thing; you cannot find that on bare-metal servers or anywhere else. It's a preview on RunAbove, and it's really good for people who want to test applications that have to scale across a lot of processes and threads. We see a lot of people testing their apps on these machines for that reason. These two pictures show a bit of what OVH and RunAbove are about: hardware. It started 15 years ago, when Octave began building his own racks, then his own servers, then his own data centers and network from scratch. That allows us to put more servers in the same rack, and here we design racks with materials that we build ourselves. On the right side, you can see our own water cooling system. It was designed many years ago, but it's still running very well. It allows us to run our data centers with a power usage effectiveness (PUE) of 1.09, which is quite great: there is no air conditioning at all in those data centers, just water cooling. About those POWER8 CPUs we showed earlier: the first thing we did when we received the IBM servers was to put water cooling on them. About drives: for a powerful cloud, you need powerful drives. We decided to go with Ceph to provide distributed storage. What we do is put the Ceph journals on very fast SSDs and use a replication factor of three, so if one node fails, the cluster can rebuild itself. And as the data is distributed among a lot of nodes, if one fails, there is no big issue. This next number, we're quite proud of it.
It's how fast you can upload objects to Swift from a RunAbove instance. If you chunk your file into many pieces and upload them in parallel, you get this result, thanks to the network card we saw earlier. And in the cloud, we know that many users are not familiar with OpenStack, and in fact they don't really care about OpenStack; they just want to get things done, they want servers running. So we decided to let them use whatever technology they want to drive their servers. Of course, we provide the vanilla OpenStack API; it just works with all the software on the market designed to work with OpenStack. We provide the RunAbove API, which we use internally for billing, managing accounts, and things like that. We provide the simple dashboard we showed earlier. And we provide, of course, Horizon, whose only difference from the stock OpenStack one is that it's not blue, it's orange. On that subject, I will show you a quick setup to demonstrate how it's integrated with RunAbove. We'll take the case where we already have a client machine, and we launch a server next to it using private networks. Here you see we're in Horizon. We already have one server running Debian 7; it has both networks, external and internal. And here we are launching a server that will be a web server. We use the images available on RunAbove, so here Debian 7. We just add a keypair, add the networks, and create a post-install script that installs lighttpd, a very small web server, and in a few seconds we'll curl it to check that it's working. Here it's just installing lighttpd with APT, nothing very fancy. But what we are proud of is how fast it launches: after clicking on Launch, in about 30 seconds you get your instance running. This was done on Sandbox instances, so very, very small instances, very affordable, running on SSD disks. And you see that in a few seconds our instance is launched. So here it's booting.
I think many of you already know this, but for newcomers, even Horizon can be confusing. That's why we designed the simpler control panel we showed earlier: for some users, even Horizon is too much. But here we already have an IP, and we have an SSH session on the client. We'll just curl the internal IP of the server, and it works seamlessly: in less than 30 seconds we have the result of the request. What is that? Well, for those who know a bit of programming, it's the Gopher, the mascot of the Go language. Why do we show that? Because internally we use Go quite a lot. Why? Because it's fast, it's scalable, and it can be distributed very easily. As a real-life example: we currently have two regions. When we have requests to make against those two regions, we just run them in parallel, and that's very easy to do with Go. If next year we have 20 regions, it will work the same way. Now I will show you a tool I've personally worked on. It fixes a problem we had when we designed the beta version of RunAbove: how to get images to work on a public cloud. Because once you're done setting up OpenStack, Nova and Glance, you need to run instances; you need some distribution to run on top of that. What you can do is google "Ubuntu OpenStack image" and you will get something. Something, but not your own image: it doesn't have your software, it doesn't have your touch. At OVH, we already have 60 distributions available for dedicated servers, so we thought, well, maybe we can use those for RunAbove. So we designed this tool, which is just some Python, that takes the software already maintained at OVH and adapts it to the cloud. It creates a QCOW2 image, partitions it, puts a file system on top, installs the kernel, installs the cloud-init packages, whatever we need, uploads the result to Glance, and it works. It takes about two minutes to build an image, and it's very easy.
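The two-region fan-out mentioned above is exactly the kind of thing Go's goroutines make trivial. A minimal sketch, where `queryRegion` is a hypothetical stand-in for one API call against one region's endpoint, and the region names are just illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// queryRegion stands in for one API call against one region
// (e.g. listing instances). In real code this would hit that
// region's OpenStack endpoint over HTTP.
func queryRegion(region string) string {
	return "ok from " + region
}

// fanOut runs the same request against every region concurrently and
// collects the answers. Two regions or twenty: the code is identical.
func fanOut(regions []string) []string {
	results := make([]string, len(regions))
	var wg sync.WaitGroup
	for i, r := range regions {
		wg.Add(1)
		go func(i int, r string) {
			defer wg.Done()
			results[i] = queryRegion(r) // each goroutine writes its own slot
		}(i, r)
	}
	wg.Wait()
	return results
}

func main() {
	// Hypothetical region names for the example.
	for _, r := range fanOut([]string{"BHS-1", "SBG-1"}) {
		fmt.Println(r)
	}
}
```

Adding a region is just one more element in the slice; the concurrency does not change, which is the point being made in the talk.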
As an example: two weeks ago, Ubuntu 14.10 came out. It took us seven lines of Python to make Ubuntu 14.10 available on RunAbove. So that's basically everything we have behind the scenes, but I wanted to conclude by showing you some use cases: what our customers actually do with RunAbove and with our implementation of OpenStack. As you may know, Europe was a bit late to public cloud computing, so what we see is that many of our users use it alongside bare-metal servers for the moment, for example to handle traffic peaks on their websites. Here is a quick schema from a customer, a famous sports website in France called SoFoot; it's a French magazine about football. They had already been a customer of the OVH group for many years, and they use cheap bare-metal servers running HAProxy to distribute the traffic. They also use more expensive ones from the OVH brand for their web servers. During the World Cup, they saw their traffic multiplied by three, four or five on some days, at some hours. So they would just spawn RunAbove VMs and add them to the pool, so that their visitors would still get a very quick response from the web servers. This is a really typical use case for RunAbove. It's also important to tell you, since we are here to tell you the story of the product, that RunAbove is built and used by internal teams within the OVH group as well. I told you earlier that we were late to the public cloud, but we do offer what we call VPS, and we have been offering them for a few years now. These are really affordable VMs that you pay for by the month. We are in the process of moving the underlying layer of the VPS to RunAbove. It will be completely transparent to the VPS customers; they will just get more power for the same price. RunAbove is also used by an internal team called Spinoff.
The idea there is to rewrite the billing parts of OVH, for example, so that it's easier to use and customers get more features from it, and again, they will use RunAbove for that. So you may have many ideas of how to use RunAbove, and we have already seen some users with really imaginative ways of using the product. If you have some, do not hesitate to share them with us; we may even reward you with some free credits. And, well, to conclude: the story of RunAbove is not finished. We still have many things to offer you in the near future, and the idea is to always offer you ever more performance at an ever lower price. That's all we had to tell you today, but we are happy to take your questions; we've got around 10 or 15 minutes for them. Do not hesitate. We also have some vouchers for those of you who want to test RunAbove. If you've got questions about how we used OpenStack, or how you could use OpenStack internally or to offer an alternative public cloud, we would be happy to answer.

A question? Yeah. Yes, actually we are working on offering an archive service. It will still be based on Swift, but the idea is that you get even cheaper storage, with some delay to get your data back. That's what we are about to offer in a few weeks. We also have, in the Labs section of our service, which is basically what you would call a beta, Docker as a service. That's something we are testing and will provide soon; you can already test it for free on runabove.com. And the idea is to connect RunAbove to the other services of OVH and, as I said quickly earlier, to offer you bare metal within RunAbove, and also private cloud within RunAbove. Any other question? Yeah. So currently we use Neutron.
It's distributed over the different hosts we use, and we use Cisco hardware for the moment, but as we said, we are working on a new router built internally. Another question? Yeah. Well, the question is not whether they are cost-effective for us, but for the clients. As Nicolas said earlier, we are not replacing x86 with POWER8; we are offering both to the customers. We are currently running benchmarks on POWER8, and for some use cases, for example distributed large-scale computation or some big data usage, it's really effective for the customer, because on a single machine you can scale vertically really high. Some applications are not easily distributed over many hosts, so for those specific use cases it's really interesting for the clients. Is there another question? Okay, so do not hesitate to come and ask us questions one-on-one if you want, and do not hesitate to come grab a voucher; we've also got some free t-shirts for those of you who may be interested.