Hi everybody, and thank you for coming. My name is Mariano Cugnetti, and I'm the CTO at ENTER. I will take you on a quick journey through our ENTER Cloud Suite product, which is an OpenStack cloud in Italy. ENTER, my company, is based in Milano, Italy, and was established in 1996. We have always been an ISP, so data centers, data center services, and networks have always been our business. When we came to the point of creating our own cloud services, merging our traditional markets of data centers and networks, the choice was OpenStack, for its openness. We come from the open-source world. We like the approach, we like the project, and we have been working on OpenStack for the last three years, so we like the whole philosophy inside the community. We came up with an infrastructure that is now based on OpenStack Grizzly, and we have deployed the typical, standard OpenStack services. This is very important: standard. We are running compute on KVM. We are running ephemeral storage directly on the compute nodes, block storage with Ceph, and object storage on Swift. We also decided to avoid any vendor lock-in, and to stay open source on the network side as well. We started with VLANs, so our Grizzly environments run on VLANs as an overlay technology, but we have now switched to VXLAN for the Havana installation. So we provide a standard, non-homebrewed OpenStack solution, running Neutron, Nova, Keystone, Horizon, Swift, and Cinder, and we use Ceilometer for accounting and then billing through our own billing systems. All of this is already running in production. There is a lot of talk about moving workloads and interoperability, and it often ends up being about moving VMs. We don't believe that will be the solution to this problem.
This kind of approach is going to change in the coming months and years, thanks to new technologies like Docker. In the meantime, we think that interoperability between private and public clouds, or between different cloud platforms, will be provided by interfaces. That's why we decided to provide API access, obviously, so developers can build their own infrastructures or their own interfaces, and CLI access, but we also provide different interfaces. One of them is the best-known interface for OpenStack, which is Horizon; we run the Juno Horizon in production. We have also partnered with Scalr. Scalr is a very smart company that provides an abstraction layer, an application to handle roles inside virtual farms that may spread across different clouds in different regions, making it very easy for users, whether skilled or beginners, to cope with roles and infrastructure, large deployments and small ones alike. We are pretty happy with that, and we are developing more interfaces, which I will talk about later on. This is also in production. Since we come from the ISP world and we are a telecommunications operator, we also believe the network is very important. Without a very powerful, very reliable network, you cannot have any cloud working, so we decided to build a lot of different capabilities into our cloud. As I told you before, we use VXLAN in our Neutron environment as an overlay networking technology. We have enabled several as-a-service network features, especially VPN as a Service, to connect virtual private data centers to real on-premise data centers, whether physical or virtual.
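To give a feel for the API access mentioned above, here is a minimal sketch of the standard Keystone v2.0 password authentication payload, the first step any developer takes before talking to Nova, Neutron, or Swift. The endpoint URL, tenant, and credentials are placeholders, not our actual values:

```python
import json

# Keystone v2.0 password authentication (the API version exposed by
# Grizzly/Havana-era clouds). All values below are illustrative.
AUTH_URL = "https://api.example-cloud.example/v2.0/tokens"  # placeholder

def keystone_auth_payload(username, password, tenant):
    """Return the JSON body Keystone v2.0 expects for password auth."""
    return {
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
        }
    }

body = json.dumps(keystone_auth_payload("demo", "secret", "demo-tenant"))
# POST `body` to AUTH_URL with Content-Type: application/json; the response
# contains a scoped token plus the service catalog (the Nova, Neutron,
# Swift and Cinder endpoints to use for every subsequent call).
```

The same payload is what the official CLI clients build under the hood, so anything shown in Horizon can also be scripted against the raw APIs.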
We are planning to do that not only over IPsec but also over MPLS, and we have partnered with the main European Internet exchanges to provide access to our regions, which I will show you in a moment: we are directly connected to the Italian, Dutch, German, French, and UK Internet exchanges. On top of the network, we provide anycast IPs for some core services that require low latency and ease of access. This is in production already. You cannot talk about cloud without talking about distributed systems, and that's why it was obvious for us to move from a first installation in our data center in Milan to a distributed one. At the moment we are running in three regions, and by Q3 we will have five regions running, located in Milan, Italy; Frankfurt, Germany; Amsterdam, Netherlands; London, UK; and Paris, France. All of these regions are connected by a 10 Gigabit Ethernet ring, which is dedicated to us and which we bought directly from Belgacom International Carrier Services. This is already in production. Last year I remember Randy Bias talking about OpenStack not just as a cloud platform: OpenStack is what the kernel is for Linux, a core on which you can build a lot of services. And that's what we did. We started with a pure OpenStack installation, but then we moved to the second step, which was to deliver more services on top of it. When you have to cope with distributed systems, the most reliable, efficient, and affordable load-balancing mechanism you can use is DNS balancing. We decided it was more convenient and more efficient to develop our own DNS engine, so we are not running PowerDNS or BIND; we developed our own DNS, and we provide APIs so we can offer DNS as a service to customers. Obviously, we also provide a web interface to manage your zones.
You can create standard records, but you can also create smart records: load-balancing records. You can assign different IPs to the same record, and when an IP, an instance, or a load balancer goes down, that record is automatically removed from the pool. You have configurable health checks that verify the availability of your service, and when endpoints go down, the records are updated. You can have HA, high availability: you can define a primary endpoint, and when the primary goes down, traffic moves to a secondary. If you use auto-scaling on the secondary, you can keep its bill low until it is actually needed to grow your infrastructure. And then you can have GeoDNS. Most technology used for GeoDNS relies on GeoLite or GeoIP, which are tables, and the IP addressing market has moved so fast that any table may be slow to update and give unreliable responses. That's why we decided to rely on the most reliable technology, which is BGP, and we use BGP to route anycast IPs directly to our DNS. We provide one single IP for our DNS, which is routed to the DNS server closest to the user making the query, and the answers are based on the location of that DNS server. So you may get different answers depending on the location of the user. You can combine GeoDNS with load balancing or HA, so you can play with this kind of software, and this is already in production. Obviously, when you have an infrastructure that is stable and distributed, you can play with a lot of things, and here comes the very interesting part. We are partnering with a network provider that has around 200 PoPs around the world in order to deliver a CDN service. Since we provide instances with web servers and we provide object storage, we have a lot of content that we make available to a lot of customers. We provide DNS all over Europe, so users can be routed accordingly within Europe.
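The load-balancing and HA record logic described above can be sketched in a few lines. This is an illustrative model, not our actual DNS engine; the record structure, IPs, and failover rule are assumptions for the sake of the example:

```python
# Sketch of health-check-driven DNS answers: a record holds a pool of IPs;
# endpoints failing their health check are dropped from answers, and an
# optional secondary takes over only when the whole primary pool is down.
# Structure and names here are illustrative, not a production engine.

def answer(record, healthy):
    """Return the IPs to serve for `record`, given the set of healthy IPs."""
    primary = [ip for ip in record["pool"] if ip in healthy]
    if primary:
        return primary
    # HA failover: fall back to the secondary endpoint(s), which can sit
    # behind auto-scaling and stay small until they actually get traffic.
    return [ip for ip in record.get("secondary", []) if ip in healthy]

record = {"pool": ["198.51.100.10", "198.51.100.11"],
          "secondary": ["203.0.113.5"]}

print(answer(record, healthy={"198.51.100.10", "203.0.113.5"}))
# one primary is still up, so only it is answered: ['198.51.100.10']
print(answer(record, healthy={"203.0.113.5"}))
# whole primary pool is down, fail over to the secondary: ['203.0.113.5']
```

GeoDNS then layers on top of this: each anycast DNS node runs the same logic but seeds the pool with the IPs appropriate for its own location.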
But what if the users are worldwide? We live in Milan, so we have a lot of fashion companies, and they are distributed worldwide. What if they have large content that must be accessed very quickly from all over the world? That's where the CDN service comes in. You can put your files inside your instances, or you can put them into the object storage and make the container public, which is very easy because in Juno you can just click a button and have the container publicly accessible, and you're done. The CDN collects the data and all the content is made available, which is especially useful when you have large content to distribute. Fashion websites have high-resolution images that take a long time to load, so they get a lot of advantage from this kind of service. The other one is the email service. Once you have a distributed infrastructure, you need someone who finally provides an open-source solution to manage a distributed email system: one that takes advantage of object storage for long-term archiving of email and attachments, but also has a very fast caching layer inside the instances, taking advantage of the IOPS you can get on block storage, which in our case happens to be provisioned, so you can choose whatever IOPS tier you want to use. That way you can have a distributed email system that can survive any possible downtime in a region. You have everything replicated: once a region, a load balancer, or an instance goes down, you just get all the information back from the object storage, rebuild the cache or keep it distributed and aligned, and you're done. This is coming: DNS is already in production, and the CDN and email services are coming by the end of 2014. The next step: since we see a cut in Amazon, Rackspace, or Google prices almost monthly, you have to be smarter about running your infrastructure.
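Under the hood, the "click a button" that makes a Swift container public is just one HTTP call that sets the container's read ACL to `.r:*`. Here is a hedged sketch that builds that request; the storage URL and token are placeholders, and sending the request is left to whatever HTTP client you prefer:

```python
# Making a Swift container world-readable is a single POST that sets the
# container's read ACL header. Endpoint and token below are placeholders.

def make_public_request(storage_url, token, container):
    """Build the HTTP request that marks `container` world-readable."""
    return {
        "method": "POST",
        "url": "%s/%s" % (storage_url.rstrip("/"), container),
        "headers": {
            "X-Auth-Token": token,
            "X-Container-Read": ".r:*",  # Swift ACL: anyone may GET objects
        },
    }

req = make_public_request("https://swift.example/v1/AUTH_demo", "tok", "images")
# Sending this request (e.g. with urllib or python-swiftclient) makes every
# object in the container fetchable at url + "/<object-name>", which is
# exactly what a CDN needs as an origin to pull from.
```

The same header with `.r:*,.rlistings` would additionally allow anonymous listing of the container's contents.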
And that's where Open Compute comes in. We think that hardware is still a very relevant component in the pricing of cloud services, and Open Compute approaches the problem the right way, the same way OpenStack does: just break it into pieces and rebuild it. That's what we are doing: we are redesigning our hardware architecture to be less expensive, more efficient, and smarter than traditional servers. This is coming in March 2015. Fifteen minutes is a very short time for a demo, so I decided just to give you a quick overview of what we do. If you want a demo, just come to booth E8, which is over there, and we will be happy to explain how it works and show you. Thank you very much. Bye-bye.