Hello. It's really great to be here. This talk is titled Beyond Horizon, and we're going to show you a cool new way to manage your OpenStack installations along with the rest of your infrastructure.

So first, a few words about us. I'm Dimitris, and this is Chris. We work on Mist.io, which is both software as a service and an open source project. We run an office in Athens, Greece, where we do most of the product development, and another one in San Francisco.

Without further delay, let's look at the agenda. We're going to take a brief look at today's computing landscape, at the ways we do computing, the pain points we've identified, and our story in coming up with Mist.io. We'll take a look at the major functionality, look under the hood, and go for a bold live demo. Then we'll dive into some more advanced use cases: one has to do with the US midterm elections that took place yesterday, and the other is about SDN and NFV testing. Then we'll tell you about our next steps, and we'll make sure to save some time for questions.

So, today's computing landscape looks a bit like this. We have a growing bunch of public clouds with different advantages and different pricing policies. Then we have private clouds, where OpenStack is gaining more and more mindshare. The use of containers has been on the rise since Docker, and other technologies like CoreOS are building on this trend. Then we still have those old school bare-metal boxes that make a lot of noise and consume power; maybe they're part of a legacy application, or they're nodes in your OpenStack private cloud.

All those platforms have pretty much common needs in terms of management. You need to provision new machines. You need to deploy complex applications that consist of several different machines. You need to monitor the applications themselves, the systems, and also the environment.
You need to be notified when those metrics go out of the norm, and in some cases you want to automate actions when that happens. So all those different technologies have pretty much common management needs, but each one comes with its own set of tools.

All this power comes with a cost, and we each have to pick our set of golden handcuffs. You either go with a single provider and use their tools, but then you need to be sure that they will stay competitive in terms of features and pricing; or maybe that is not an option, and you have to combine different technologies in hybrid setups and use different tools, but then complexity grows and becomes its own constraint.

In the OpenStack world, the main management tool is Horizon. It provides a dashboard for managing your machines, provisioning new machines, provisioning networks, configuring security, and more. So why would we need something else? Well, Horizon doesn't help at all in terms of monitoring. It doesn't send alerts. It doesn't do automation. You just get a list of your running instances, and even that list is not up to date: you need to hit the refresh button, because it doesn't use any polling technology or WebSockets. It does provide a VNC console, but I would argue that this is not the best way to control your servers, and it doesn't always work. It is also limited to OpenStack deployments, and you need a different version of Horizon for each OpenStack installation. And I don't know if you've ever tried using Horizon through a touchscreen, or had to perform some management while on the go; you're probably out of luck, it doesn't work so well.

These were the problems that my co-founders and I were facing. We had been working together since 2009, running a software consultancy and developing systems for different clients across the world, and we had to manage those systems and respond to incidents.
We started scratching our own itch, building the tool that we needed: a unified dashboard for the different platforms, and we made sure it was mobile-friendly to let us work from wherever we were. We quickly realized that this was a rather common need. We had an initial open source implementation, so we applied to Mozilla's accelerator program, WebFWD. We got accepted, and in early 2013 we traveled to California. We followed through the program, learned a lot, built our network, raised our first money, and attracted some world-class advisors. We recruited a kick-ass team, and we did all that in order to develop Mist.io into a SaaS solution that provides a unified dashboard supporting the most popular public and private clouds.

It will monitor your machines at all times, measuring system metrics, application metrics, and any custom metric you would like. You can configure events that trigger either alerts or automated actions. For example, if some process is leaking memory, you can ask Mist.io to automatically restart it whenever the usage goes over some threshold. When you need to intervene manually, you will receive an alert, and you can tap on that alert to get an interactive SSH console, a command shell, to address your problems from anywhere you are. But since most of the time you won't be working from your smartphone, we've also built a RESTful API, a set of command line tools, and some Python bindings. We're working on bindings for more languages.

So, for example, let's say you want to provision a new server and assign it to a new network. This is how you would do it: just import the mist client, instantiate it with your credentials, and select the backend; this one is named Juno. Here is how I would create a private network, and this is how I would create the machine itself. I'll call it dev1, assign it the dev tag, and also assign it to both the public and the private networks. And now I can control my machines.
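The flow just described can be sketched in Python. The class and method names below are assumptions, stubbed locally so the example runs on its own; the real mist Python bindings may differ:

```python
# Hypothetical sketch of the provisioning flow described above. MistClient
# and Backend are local stand-ins for the mist Python bindings (assumed
# interface), so this snippet is self-contained and runnable.

class Backend:
    """Stand-in for a cloud backend, e.g. an OpenStack installation."""
    def __init__(self, name):
        self.name = name
        self.networks = []
        self.machines = []

    def create_network(self, name):
        self.networks.append(name)
        return name

    def create_machine(self, name, image, size, tags=(), networks=()):
        machine = {"name": name, "image": image, "size": size,
                   "tags": list(tags), "networks": list(networks)}
        self.machines.append(machine)
        return machine


class MistClient:
    """Stand-in for the mist.io client (assumed interface)."""
    def __init__(self, email, password):
        self.email, self.password = email, password
        self._backends = {"Juno": Backend("Juno")}

    def backend(self, name):
        return self._backends[name]


client = MistClient("user@example.com", "s3cret")
juno = client.backend("Juno")           # select the backend named Juno
juno.create_network("private1")         # create a private network
dev1 = juno.create_machine(             # provision dev1 on both networks
    "dev1", image="fedora", size="tiny",
    tags=["dev"], networks=["public", "private1"])
```

The dev tag is what later lets batch commands target this machine together with the rest of the dev servers.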
I can send batch commands. This would update and upgrade all my dev servers, all the servers that have the dev tag. And this is also possible through the command line, using the mist command, just like that. You can perform more advanced operations. This would configure automation, as I mentioned before: restarting a service that's leaking memory whenever the usage goes over 95%. And we also have Ansible integration, so this command would run this playbook on all my Linode machines.

Before we go for the live demo, let's look under the hood. Mist.io is a Python web application. It's built on Pyramid and served through uWSGI. The user interacts either through an HTML5 app in the browser, through the command line tools I mentioned before, or through a native Android application that's about to be released. Both the native Android app and the HTML5 app stay up to date using WebSockets, while the command line tools use the RESTful API. The server queries your cloud backends using their native APIs through Libcloud, a library that provides a common abstraction layer. Mist.io can also connect to your machines over SSH, if you provide a key, in order to give you the SSH console, the command shell, and also to install collectd.

collectd is an open source monitoring agent. It's really lightweight, and we install it in order to provide monitoring. It collects samples of all the system metrics and any custom metrics you have configured, and every five seconds it sends these data points to our monitoring servers. There we do some pre-processing for consistency and store everything in a Graphite cluster. We've also built an alerting module: when the incoming data points satisfy your rules, it will either send you an alert or trigger some automated action.

And without further delay, let's pass on to Chris for the live demo. Okay then. We land here, and this is the Mist.io homepage, with a lot of information on it. So let's start from the top.
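The alerting module just described, checking incoming data points against user-defined rules and either alerting or firing an automated action, might look roughly like this minimal sketch. The rule format and names are illustrative, not Mist.io's actual code:

```python
# Minimal sketch of rule evaluation: each incoming data point is checked
# against the user's rules, and a match triggers either an alert or an
# automated action (e.g. restarting a leaking service above 95% memory).
import operator

OPS = {">": operator.gt, "<": operator.lt}

def evaluate(rules, datapoint):
    """Return the (action, rule) pairs triggered by one data point."""
    triggered = []
    for rule in rules:
        value = datapoint.get(rule["metric"])
        if value is not None and OPS[rule["op"]](value, rule["threshold"]):
            triggered.append((rule["action"], rule))
    return triggered

rules = [
    {"metric": "memory_percent", "op": ">", "threshold": 95,
     "action": "restart_service"},     # automated action
    {"metric": "temperature", "op": "<", "threshold": 0,
     "action": "alert"},               # notify the user
]

print(evaluate(rules, {"memory_percent": 97.2, "temperature": 12}))
# only the memory rule fires, triggering the restart action
```

In the real pipeline this check would run against the stream of data points arriving from collectd every few seconds.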
For Mist.io, every cloud provider is a backend. So here I have added an EC2 backend, our own OpenStack Juno installation, a Google Compute Engine backend, even a Docker host that we treat as a Docker backend, and HP Helion Cloud. And those two, OpenStack F0 and OpenStack FE, are bare-metal servers that can be added as backends; they host our own two different OpenStack installations.

Moving further down, I can have a look at all my monitored and running machines; we will see that in detail later. I can look at all my available OS images, click on one, and provision a machine from it. I can see all my networks (we'll talk about networks later) and all my SSH keys. And at the bottom there is a stacked graph for all my monitored machines, where I can get a quick overview. It seems that Berlin, I think, is handling some peculiar load; we'll see later what we can do about that.

So, let's add another backend: we have another OpenStack installation. We provide support for Azure, bare-metal servers (or any single server for that matter), EC2, NephoScale, DigitalOcean, Linode, SoftLayer, Rackspace, and I don't think I have forgotten anything. So this would be our username, a super secret password, the auth URL, and the usual admin tenant name. I will hit add, and OpenStack admin is added. I don't like this name, so I will rename it to Icehouse, because it is an Icehouse installation after all.

And now I can go to the machines. These are all of my running instances. I would like to create one; we have Athens, Paris, so let's create an Atlanta one. I will select the newly created OpenStack Icehouse, a Fedora image, and a tiny size for that matter. Now for the keys: I have added some SSH keys, but for this demo we'll auto-generate one; let's call it dummy. Mist.io uses keys for one purpose, for two purposes actually. The first one is to provide shell access, which we will see later, and the second one is to install collectd on the machine.
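The demo later creates a network with a subnet, a gateway, DHCP, and allocation pools. As an illustration of the sanity checks such a form involves (not Mist.io's actual code), here is a sketch using Python's standard ipaddress module, with made-up addresses:

```python
# Illustration of validating a subnet definition: the gateway must live
# inside the subnet, and the DHCP allocation pool must be a well-ordered
# range inside the subnet that does not swallow the gateway.
import ipaddress

def validate_subnet(cidr, gateway, pool_start, pool_end):
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    if gw not in net:
        raise ValueError("gateway outside subnet")
    if start not in net or end not in net or start > end:
        raise ValueError("bad allocation pool")
    if start <= gw <= end:
        raise ValueError("gateway inside allocation pool")
    return net

net = validate_subnet("10.0.42.0/24", "10.0.42.1",
                      "10.0.42.100", "10.0.42.200")
print(net.num_addresses)  # 256
```

A typo in any of these fields (the "if there is no typo" moment in the demo) is exactly what checks like these catch before the network is provisioned.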
In case you don't want to add an SSH key, you can manually install collectd and configure it to send all the metrics to Mist.io. I can also provide any script I want, and after the machine is provisioned this script will be run, and I can enable monitoring. With those set, we will launch the machine. It is in Atlanta, in the pending state, and while we're waiting for the SSH server to come up, let's take a look at Paris. We wait for it to fetch the server stats... great. This time we'll start from the bottom. What I can see here is some basic info: the uptime, the last probe, and some extra info; this is information that the provider's API sends us that may be useful.

So, this is a monitored machine. I have metrics for RAM, CPU usage, and so on, and I can add another metric. Each collectd instance sends us all the available metrics, and we can monitor every one of them. For example, if I would like to know the ping latency, I have added that to the machine. And the very cool thing is that I can add custom metrics. Custom metrics are just simple Python lines of code that return a value, whatever value that is, so we can monitor literally everything. For example, in this case I have added a custom Python plugin, which is this one: the temperature of Paris. And from that I can start doing more cool stuff. I would like to add a rule and say that if the temperature drops below 0, then alert me, because, I don't know, I have to change clothes.

Another thing I can do is tag the machine. For example, I would like to add the demo tag to this machine, because we will need that later to handle all the machines at once. We said that this one is a bare-metal server. We have Icehouse, our OpenStack Icehouse installation, and we have configured some Ceilometer plugins: the total vCPUs used, the number of instances. We have created an instance, so we see here that it went from 6 to 7, and we can do exactly the same here: add rules, and so on.
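A custom metric of the kind just described, in the shape of a collectd Python plugin, could look like the sketch below. The value source is a stand-in for whatever Python code returns your number (an API call, a sensor, a file), and collectd itself is stubbed so the sketch runs outside the agent:

```python
# Sketch of a custom metric like the demo's "temperature of Paris",
# shaped as a collectd Python plugin. collectd is stubbed with a minimal
# stand-in when the real module is absent, so the sketch runs anywhere.
try:
    import collectd  # available when running inside collectd's python plugin
except ImportError:
    class _StubValues:
        dispatched = []
        def __init__(self, type="gauge", plugin="custom"):
            self.type, self.plugin = type, plugin
        def dispatch(self, values):
            _StubValues.dispatched.append((self.plugin, values))

    class collectd:  # minimal stand-in for standalone runs
        Values = _StubValues
        @staticmethod
        def register_read(fn):
            fn()  # inside collectd, fn would run on every interval

def read_temperature():
    # any Python code that returns a number: an API call, a file read, etc.
    return 21.5

def read_callback():
    metric = collectd.Values(type="gauge", plugin="paris_temperature")
    metric.dispatch(values=[read_temperature()])

collectd.register_read(read_callback)
```

Once collectd dispatches the value, it travels to the monitoring servers like any system metric, so rules (such as "alert when temperature drops below 0") apply to it unchanged.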
For example, if instances go below 6, then alert me. I don't have Thunderbird up and running, I think, but it will alert me.

Let's go back to the Atlanta machine that we just created. Here it is, and I think it is ready. So here I can have shell access; it is a fully featured shell. I am in, and I can use all the bash commands. For example, if I get it right, I can install htop, because everyone loves htop. It will just take a minute, I think, and we are about done... okay then, here it is: htop. The cool thing is that this is the way it looks on your cell phone, your tablet, or any other device.

So let's go back and talk a bit more about networks. Here are all my available networks: two for OpenStack Juno and one for the OpenStack Icehouse installation. I would like to create another one. I will call it ParisNet and create a subnet, ParisSub, with a dummy network address and a dummy gateway. I will enable the DHCP server and add some allocation pools. If there is no typo, I think we will have our new network in a few seconds... here it is. And now, when I want to provision another machine, this network will be available in the networks tab.

The cool thing with Mist.io is that alongside the graphical user interface we have a RESTful API that helps us build tools around it. For example, we have the mist command line tool, and I can do almost everything from the command line. I can list all of my backends, or I can list all of the machines for a backend, let's say OpenStack Juno. This will take just a minute... okay. Another cool thing is that everything that happens in Mist.io is refreshed everywhere as it happens. For example, if I wanted to add another key, I would name it ParisDemo and ask Mist.io to auto-generate one. ParisDemo... okay then, ParisDemo2, and it worked. And finally I can do something like this: I will run a command on all the machines that are tagged demo. This takes some time, so what's happening under the hood? Dimitris mentioned it before: we have integrated Ansible, and Mist.io uses all the available information from the machines (the IP, the user, the SSH keys, and so on), produces an Ansible hosts file, and runs Ansible, either as ad hoc commands or through whatever other Ansible module. For example, if I go now to the Paris machine... okay, I think some patience is needed... and there it is. So let's go back to the presentation.

All right, this is all really cool, and it's pretty much what we had in mind when we started building Mist.io. But what really amazed us is the different and unexpected ways our users are putting our tools to use, so I will mention two of those use cases.

Yesterday were the midterm congressional elections in the US. The PCCC, the Progressive Change Campaign Committee, is a political organization co-founded by Aaron Swartz, and they have more than a million members. We are really proud of the role that Mist.io played; maybe it was the first time that a cloud management tool was used to promote net neutrality. The PCCC developed a tool called PICE, which they provide to selected candidates, and one of the things it does is run polling campaigns over the phone. Candidates can launch new campaigns, and when they do, PICE uses Mist.io to spin up VMs, either on Linode or on DigitalOcean, deploys the polling code, and then the poll begins. During that time the PCCC staff can monitor the process using system metrics, while the candidates can monitor their polling process using custom metrics provided by Mist.io, like the number of pending calls. When the number of remaining calls reaches zero, Mist.io triggers the cleanup process, sending all the polling data to an S3 bucket and destroying the machine. What's really cool about this is that the whole system was implemented within a couple of weeks, thanks to the automation functionality, and when at the very last moment they decided to switch cloud providers, this was really easy: just a single line of code had to be changed.

Another prominent customer of ours is Spirent Communications. They are a company founded in 1936, and their business is testing telecommunication networks, so their clients are telcos. Their latest product is automated SDN and NFV testing. SDN is software defined networking, and NFV stands for network functions virtualization. As more telcos are looking into a future where more of their network functionality is virtualized, they want to be sure that they don't sacrifice anything in terms of reliability, performance, and security. The purpose of these testing procedures is to perform functional and performance tests and to probe for security vulnerabilities. Spirent does all that through OpenStack installations, and they want to be able to test using different distributions of OpenStack and integrate all that into their in-house processes. Their goals are speed and ease of test lab setup, minimizing cost by using commodity hardware, and having a repeatable testing procedure. We help them by providing on-demand installations of OpenStack on top of cloud and bare-metal servers provided by NephoScale, a cloud provider in California. NephoScale has an API not only for provisioning cloud VMs but also for provisioning bare-metal machines, and we use both the NephoScale API and the OpenStack API for provisioning and also for the network configuration, including some more sophisticated setups like L2 and L3 networks per tenant, and VXLANs. Once the machines and the OpenStack deployments are up and running, Mist.io takes care of monitoring the host nodes (the bare-metal servers), the guest VMs, and also the testing process, in order to emit actionable alerts and to trigger autoscaling: autoscaling not only at the application level, by spinning up new VMs inside OpenStack that join the network functions, but also at the OpenStack level, by provisioning new bare-metal machines that become compute or network nodes. If you'd like to know more about that, we'd love to talk to you later.

This is pretty much what we had to show you. The goal of Mist.io is to set you free from vendor and platform lock-in. It can monitor any machine that you have: bare metal, VMs, and containers. It can emit actionable alerts, and you can act on them from anywhere you are, from any device. It's trivial to configure automation in the form of executing shell commands, rebooting servers, scaling up and scaling down.

As for the next steps we're working on: we're making some improvements to the user experience, like adding custom graphs with multiple metrics and custom dashboards. We're looking into reporting, usage and cost analysis, and also reporting on the cloud providers themselves, to tell you whether they're keeping their promises in terms of SLA compliance, availability, and performance. We're about to release a native Android application, and we're working on an iOS one. And we're adding support for multiple users, with granular access permissions and an audit trail of every action. We believe this will be useful not only to large organizations using Mist.io with different users in different roles, but also to people who want help solving DevOps problems, so they can share limited access with a friend or colleague and have a complete log of all the actions that were performed on their servers. We believe that this way Mist.io will become a social DevOps platform, in the same way that GitHub is a social coding platform. That's all from us; we'd love to hear any questions you may have.

So, in order to add new clouds...? You add new backends, exactly. If you're adding, for example, Azure, you need to upload a certificate file; with EC2... yes. [Audience question about where credentials are kept] So, in the hosted solution we keep them in our database. We take security very seriously, because, as you understand, a breach would be catastrophic for us and our clients. You can also host it yourself: if you don't want to hand out your credentials, you can have an on-premise installation of Mist.io, and the same applies to the SSH keys. [Audience question about comparing provider performance] Yes, this is in our next steps; I mentioned reporting. In terms of performance, you can do that yourself today: if you have a script, you can spin up servers, run the same script on different servers with similar configurations, and measure. But we will be doing that automatically later on. Any other questions? Thank you very much. We would love to hear any battle stories you may have in terms of automation, and especially any unfulfilled desires, to see if we can help you with them. Please come talk to us, or send us an email at info@mist.io. Thank you.