Good evening everybody, and thank you for coming. My name is Mariano Cunietti, I'm the CTO at Enter. Enter is an Italian ISP, a company based in Milano, and today we will talk about the superpowers of a cloud based on OpenStack.

First of all, let me introduce the large set of technologies that are required in order to build a successful public IaaS. We have been running a public cloud on OpenStack since 2013, but before that we had already experimented a lot with OpenStack, since the Cactus release, providing VPS services. This is just a small list of the technologies involved in our environment. They require so many skills that sometimes the younger the engineers are, the better the result you get from them. We are a very small and agile team: we started with six engineers, now we are ten, and the average age when we deployed this environment was 22 years, except for me, obviously.

Since we are an ISP, we come from strong network experience and skills, and we deployed our cloud because we already had a large network spreading across Europe. Our customers are mostly Italian companies, with their headquarters in Italy, and we provide them with MPLS VPNs all around the world, and especially across Europe.

In order to be in total control of the networking supply chain, we decided to move as close as possible to the customers, by activating points of presence at the main internet exchanges in Europe. We asked local access providers like British Telecom or Telefónica, or other smaller and more agile providers, to deliver L2 traffic into these PoPs. That's why we set up our PoPs in Milan, obviously, where our data center is; in Frankfurt, close to the DE-CIX, which is the main German internet exchange; in Amsterdam, close to the AMS-IX, which is the main internet exchange of the whole of Europe; in
the UK, close to the LINX; and in France, close to the France-IX. Once we had these five PoPs, we asked one of our co-location customers in Italy, Bergacom, to provide a 10 Gigabit Ethernet metro carrier link, a ring across all five PoPs. So we are running our cloud in silos which are deployed in PoPs connected at layer 2 by a 10 Gigabit Ethernet ring. This is not OPEX, this is CAPEX: we actually bought the link, so we really own the network. Having a dedicated network means having very low latency between all the nodes and the regions, and a very large bandwidth to play with, for everything you will see later. This is the map of our network: you can see the PoPs in Europe, and you can also see the connections we have in Italy and across Europe.

Super services: as Randy Bias once said, OpenStack is not a cloud platform; OpenStack is the core, the kernel, of any cloud service you may want to build. That's why we decided to design our offering based on OpenStack plus other services. Obviously, there are the main services running in OpenStack. For computing we run KVM. It's a very powerful and efficient hypervisor, with a very low footprint compared to VMware and XenServer, so it can use the hardware in a more efficient way. We do not over-provision, because the hypervisor, through KSM or ballooning, can optimize the usage of RAM in virtualization.

In computing we also have Glance, which we use to provide templates and snapshots across all the regions. How does it work? Every time a user takes a snapshot of an instance, the image file is copied to the object storage, which is a large cluster spanning all of Europe, and so copies of the image file are replicated to the other sites in Europe. If you just change the region you are working in, you will be presented with the same snapshots you have just taken, in the other regions.
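A toy model may help picture the snapshot flow just described: the object storage behaves as one logical cluster, so an image saved in one region becomes visible from every other region. This is plain illustrative Python, not the actual Swift replication code, and all names are made up.

```python
# Toy illustration of the snapshot behaviour described above: when an
# instance snapshot lands in the object storage, replication makes it
# visible from every region, so switching region shows the same images.
class EuropeanObjectStore:
    """One logical cluster spanning several regions; writes in any
    region are replicated, so reads succeed everywhere."""

    def __init__(self, regions):
        self.regions = regions
        self.snapshots = set()  # replicated to every region

    def save_snapshot(self, region, name):
        assert region in self.regions
        self.snapshots.add(name)  # replication fans the copy out

    def list_snapshots(self, region):
        assert region in self.regions
        return sorted(self.snapshots)

store = EuropeanObjectStore({"milan", "frankfurt", "amsterdam"})
store.save_snapshot("milan", "web01-2014-05-12.qcow2")
# A user who switches to the Frankfurt region sees the same snapshot:
assert store.list_snapshots("frankfurt") == ["web01-2014-05-12.qcow2"]
```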
So it's very easy and very immediate: you can snapshot a whole infrastructure and restore it in another region, for redundancy and distribution.

On the storage side, it took me at least two or three months to decide whether to go with Ceph or to go with Swift. Both had very good advantages, and we decided we wanted to keep both. We use Swift because we have a partnership with SwiftStack, and at the time we went online Ceph was not providing geographical distribution and was not going to deliver storage policies, which Swift does. Storage policies allow the user to decide whether the data inside a container must be replicated across the whole cluster, so all across Europe, or must be kept in one single region, for regulatory or security reasons. So our customers can decide, when they deploy a container, whether they want to spread the data across the whole European cluster or not.

On the block storage side we run Ceph, which allows us, by tweaking the caches on top of the storage nodes, to reach at least 12,000 IOPS per single node, which is a quite amazing result compared to enterprise storage systems. We use only commodity hardware, so these performances are achieved on top of Supermicro SuperServers, which are quite common in the OpenStack environment. Obviously, we also run ephemeral storage inside the compute nodes. And we decided to have all the storage split across different platforms, so in case of a fault of any component you still have redundancy of the data.

On networking, our experience with Cactus, Diablo and Essex showed us the limits of the VLAN plugin for designing overlay networks. So we decided to go into production with VXLAN, which at the time of the launch of Enter Cloud Suite, which is our product, was in alpha.
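For the per-container choice just described, Swift exposes storage policies through the `X-Storage-Policy` header, set once when the container is created. The header name is the real Swift API; the policy names below are hypothetical, since the actual ones are whatever the operator defines in `swift.conf`. A minimal sketch:

```python
# Sketch of choosing, per container, whether objects are replicated
# across the whole EU cluster or pinned to a single region, via Swift
# storage policies. "eu-replicated" and "single-region-*" are assumed
# policy names; X-Storage-Policy is the standard Swift header.
def container_headers(pin_to_region=None):
    """Build the headers for a container PUT: either the EU-wide
    replication policy or a single-region policy (assumed names)."""
    if pin_to_region:
        policy = f"single-region-{pin_to_region}"
    else:
        policy = "eu-replicated"
    return {"X-Storage-Policy": policy}

# A container for data that must stay in Milan for regulatory reasons:
assert container_headers("milan") == {"X-Storage-Policy": "single-region-milan"}
# A container whose objects are spread across the whole European cluster:
assert container_headers() == {"X-Storage-Policy": "eu-replicated"}
```

With python-swiftclient, such a headers dict would be passed to `put_container`; note that a container's policy cannot be changed after creation.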
So it was very, very experimental. We decided to do this, and with the help of Mellanox Technologies, who provided the hardware on the network side, from the switches to the NICs, we reached a very efficient level of offloading, so the CPU is not loaded with the UDP encapsulation that VXLAN requires. We run a pure OpenStack, pure Neutron L3 and Open vSwitch solution; we don't run any proprietary plugin. All of our installation, ranging from Nova to Swift to any other component, is made up of open source components.

DNS. DNS is the cheapest, most affordable and most available system to provide geographical load balancing across regions. Since we have regions in different places in Europe, and since our users are replicated through a Galera cluster across all the regions, every user can just switch from one region to another with no real re-authentication needed. This is possible also because we use Couchbase to replicate tokens across the regions. Moreover, having a distributed cluster, we can let the regions communicate with each other. DNS is fundamental because, if you want infrastructures which are distributed across regions, you need some flexible tool to balance across your installations. So we decided to take the best from Route 53, I mean the API support, the configurability, and the load balancing, HA and geo functionalities. But we also decided to get the most out of DynDNS, which was the only one providing a DNS based on an anycast network. Since we own the network, and since we come from networking, we are skilled in this, so we decided to build an anycast network. Our authoritative DNS servers, and their APIs, are exposed on a single IP, like 8.8.8.8 for Google.
So every user just calls the same IP, and his query is automatically routed by BGP to the closest DNS server. This also allows the users to use GeoIP DNS, which lets the user interact with the closest instances: you can have multiple resolutions based on the location of the user on the network. You can also configure weighted load balancing: you can have 70% of your traffic going to the region in Milan and 30% going to the region in Frankfurt. If Milan fails, the system automatically detects it and routes 100% of the traffic to Frankfurt. The DNS is running on an engine that we developed in house, based on Scala technology, and we are particularly proud of it because every DNS query takes at most three milliseconds to be answered. So it's a pretty good, pretty fast DNS service.

And the last thing you need in order to distribute your content: object storage is fine for a lot of purposes, but when it comes to distributing static content, software updates, etc., you definitely need to have some caching, local and close to the user, and you need to have low latencies in accessing the static data. Obviously we could not build a worldwide network to provide a CDN, so we partnered with Hibernia Networks, which is a US company that acquired Atrato, a very well-known company from Holland. They provide us with 200 PoPs all around Europe, with their proprietary anycast network, and they provide CDN services based on a proprietary solution.

Interfaces are very important. We think that most of the users here are very familiar with Horizon, but in some cases you find users that are newbies, or quite new to the cloud, so they may get stuck on building the network, building the router, configuring the allocation pools. So they ask for something very fast, where the difficult decisions are taken by someone else.
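The weighted balancing and failover behaviour described for the DNS engine can be sketched in a few lines. This is not Enter's Scala implementation, just an illustrative model: region names, IPs and the health-check flag are all invented.

```python
import random

# Sketch of weighted, health-aware DNS answer selection: 70% of answers
# point at Milan, 30% at Frankfurt; if a region fails its health check
# it is skipped, so traffic shifts to the surviving region automatically.
POOLS = {
    "mil": {"ip": "198.51.100.10", "weight": 70, "healthy": True},
    "fra": {"ip": "203.0.113.20", "weight": 30, "healthy": True},
}

def pick_answer(pools):
    """Return the A-record answer for one query, honouring the weights
    and skipping regions whose health check is failing."""
    alive = {k: v for k, v in pools.items() if v["healthy"]}
    if not alive:
        raise RuntimeError("no healthy region to answer from")
    names = list(alive)
    weights = [alive[n]["weight"] for n in names]
    chosen = random.choices(names, weights=weights, k=1)[0]
    return alive[chosen]["ip"]

# Simulate a Milan outage: every answer now goes to Frankfurt.
POOLS["mil"]["healthy"] = False
assert pick_answer(POOLS) == "203.0.113.20"
```

With both regions healthy, `random.choices` returns Milan roughly 70% of the time; once a health check marks Milan down, every answer shifts to Frankfurt, which is the automatic failover behaviour described above.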
That's why we decided to develop something close to the typical VPS service, for developers, so that they can be very fast in deploying instances and infrastructures on the cloud. This does not mean it's simplified: it's easy to use, but you find all the functionalities you have in Horizon. Moreover, we developed our own HTML5 interface for Horizon and for the object storage, for Swift: anybody who has used the Horizon interface for containers knows how difficult it is to cope with it, so we decided to rewrite it from scratch. Another interface we are running is Scalr. We have an active partnership with Scalr: we are listed as a cloud provider on scalr.com, and we also provide a local installation of Scalr to our users, so they can access self-healing and autoscaling functionalities inside our cloud.

Super management means that we play with developers. They need very simple test and dev environments; they don't want to cope with infrastructure, they don't want to plug too many things into their infrastructure. They just want servers that start and boot up immediately. But we also work with companies, and that's why we are exploring the enterprise world, because the managed cloud is something that the big ones like Amazon and Google are not covering. It's a niche between giants, but it's an interesting niche for companies like us to investigate.

Hardware is one of the most interesting challenges we are facing. As I told you, we started working in 2011. We dealt with HP, but the more we went on working with OpenStack,
the more we understood that we could carve out our own platform. Dell, HP and the vendors in general provide multi-purpose servers, but when you need specific performance, for example for Ceph or for Swift, it turns out that multi-purpose doesn't fit anymore. So you start investigating how to build your own configuration, and that's why we ended up working with Supermicro. The next step is not only assembling the hardware, but building it ourselves. In Milan we run a makerspace and a co-working space, so our hacking attitude is quite developed: we like to do it ourselves. That's why last week I was at the Open Compute Summit, where I had a keynote, and we are collaborating with the Open Compute Project. We have deployed three regions out of five; London and Paris were planned by the end of 2014, but we delayed them by three months because we want to deploy those regions with Open Compute hardware, and so we have to rethink the platform.

Obviously, more challenges are coming, and we think that the way the cloud is standardizing on top of the user experience is something we need to support. That's why we are working a lot on Docker: we think that moving workloads onto our cloud will be done mainly with Docker, both on public clouds and private clouds. So we are working closely with Docker to support it in our cloud.

That's it, thank you very much. If you want to visit us, we are at booth 34. If you have any question, feel free to ask.

Okay, the question is: did we find any issue in working with the token replication? The answer is yes, because if you don't replicate the tokens correctly, you end up re-authenticating every time, or failing to authenticate, both on Horizon and on the APIs. Moreover, there was a bug in Couchbase that was very difficult to find.
So Updating to the latest release we solved the the problem so but couch base definitely was the solution to our problem the latest release Any other question? Okay. Thank you very much. Thank you for coming