So the next speaker up is Emma Foley from Intel, who will be presenting on Barometer, which is an OPNFV project.

Hi folks, I'm going to be presenting on Barometer, which is an OPNFV project. Instead of just telling you a lot of information, I'm going to answer a lot of questions, and at the end you can ask questions as well. So first up: what is service assurance and why do we need it? Basically, we've become more and more reliant on the internet, and data centers have played a bigger and bigger part in our lives, and as we move from traditional network deployments of fixed-function network appliances to NFV, data centers have become more and more important. As telcos and enterprises make this transition, we end up with a lot of tooling and a lot of infrastructure that is becoming more and more complicated. And because industries are going to have to meet or exceed the expectations customers have for service assurance, QoS and SLAs, they're going to need additional tooling and additional metrics to actually monitor their systems for malfunctions and misbehaviors that can cause downtime. Unfortunately, existing solutions may not actually be enough here, because as the tooling gets more complicated, you need to be able to monitor not only the platform but also the software applications as well, and relay these metrics to the management and analytics engines that will manage your virtualized infrastructure.

So this is where collectd comes in. I know collectd has been around for a very long time; however, this is good, because it is widely deployed, and for the industries that are moving across to NFV.
It's the tool they're likely already using, which will help ease the adoption and ease the transition into NFV.

A bit about collectd first: it has a plugin-based architecture, which makes it really flexible and really configurable. These plugins come in a few different types: read plugins, which actually access the metrics from your system; write plugins, which relay these metrics up to higher-level analytics engines; notification plugins, which are the equivalent of producing events from your system; logging plugins, which are pretty self-explanatory; and also a set of binding plugins. So you're not limited to writing collectd plugins in C: you can extend it using Perl, Java or Python if you want to.

Collectd sounds great. However, there are some gaps, and this is where Barometer comes in. First of all, a barometer is an instrument for measuring atmospheric pressure. It's also a project in OPNFV, and for those of you that missed the last session, OPNFV develops and improves NFV features in upstream ecosystems, and also provides integration, testing and installation to produce a reference platform for NFV; it's designed to facilitate the adoption of NFV. Barometer is one of these projects, and it is concerned with feature development, primarily in collectd, to cover the gaps that we've found and make it more suitable for NFV deployments. We've produced a lot of plugins to help monitor the platform and make more data available, so not only can you monitor generic compute, networking and storage, you can also get more in-depth details from your platform. These are metrics that were already available on Intel platforms but are now exposed through collectd, and also metrics from applications like DPDK and OVS, which would not be relevant in traditional deployments.
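To make those binding plugins concrete, here is a minimal sketch of a read plugin written against collectd's Python bindings. The plugin name ("example_load") and the choice of metric are made up for this example, and the real `collectd` module only exists when collectd's python plugin loads the file, so a tiny stub stands in when it's run standalone.

```python
"""Minimal sketch of a collectd read plugin via the Python bindings."""
import os

try:
    import collectd  # available only inside the collectd daemon
except ImportError:
    collectd = None

if collectd is None or not hasattr(collectd, "register_read"):
    class _Stub:
        """Stand-in for the collectd module outside the daemon."""
        dispatched = []  # records (plugin, values) pairs for inspection

        class Values:
            def __init__(self, **kwargs):
                self.__dict__.update(kwargs)

            def dispatch(self, values):
                _Stub.dispatched.append((self.plugin, values))

        @staticmethod
        def register_read(callback, interval=10):
            callback()  # the real daemon calls this every `interval` s

    collectd = _Stub


def read_loadavg(data=None):
    # Read the 1-minute load average and dispatch it as a gauge metric.
    load1 = os.getloadavg()[0]
    vl = collectd.Values(plugin="example_load", type="gauge")
    vl.dispatch(values=[load1])


# Register the read callback; collectd will poll it on the interval.
collectd.register_read(read_loadavg, interval=10)
```

Dropped into a directory loaded by collectd's python plugin, the same file would dispatch a real metric every ten seconds with no code changes.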
However, they are very, very relevant as we move towards NFV. Once those metrics are available in collectd, they're pretty much useless unless you can actually talk to your management, orchestration and analytics engines, and interact with components such as OpenStack, ONAP, Kubernetes and so on. So along with the read plugins, we've also produced a bunch of write plugins: to talk to OpenStack via Gnocchi, and to send notifications to OpenStack through Aodh. We've demonstrated how you can integrate collectd with cAdvisor, relay all your metrics to Prometheus, and actually use that platform data and application data in Kubernetes, and we've produced some plugins for VES so we can relay the metrics up to ONAP. As well as that, we've done some work on sending these metrics via SNMP so that legacy systems can actually use the metrics. Again, this is to help ease the adoption, so you don't have to change your whole tool chain to use NFV.

These are supposed to be pretty quick slides: some more details on our read plugins. So: DPDK stats, vSwitch stats, hugepages stats, cache monitoring, additional memory stats. And libvirt is one here, so you can actually monitor your workloads running on virtual machines without installing collectd on the VMs themselves, which means you're not interfering with black-box commercial VNFs, and you still get the same level of metrics as you would have if you had more control over your VNFs. Again, write plugins: SNMP, Gnocchi and VES. As well as feature development in collectd, Barometer has worked on standardization, making sure the metrics produced are compliant with open standards for metrics collection, so that, again, if you have other tools, you don't have to spend a lot of time writing normalization or translation plugins, and you can interoperate with different applications.

We've also provided installer integration, because Barometer and collectd wouldn't be much
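As an illustration of two of the plugins just mentioned, a collectd.conf fragment along these lines enables libvirt-based VM monitoring and relays the results to another collectd instance; the server address here is a placeholder, not one from the talk.

```
LoadPlugin virt
<Plugin virt>
  # Monitor VMs through libvirt; no agent needed inside the guests.
  Connection "qemu:///system"
  RefreshInterval 60
</Plugin>

LoadPlugin network
<Plugin network>
  # Relay all collected metrics to an aggregating collectd server.
  # 192.0.2.10 is a placeholder; 25826 is collectd's default port.
  Server "192.0.2.10" "25826"
</Plugin>
```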
use in OPNFV if you couldn't actually install them. So at the moment we have support for Fuel, Compass and Apex, as well as Kolla-Ansible in OpenStack, and, if you're interested, technically DevStack support too. During the last cycle there was a lot of work done producing a reference container, so if you want to get started with Barometer and collectd, you can pull down a Docker image from the OPNFV Docker Hub and start using it, and this will include all the Barometer features that have been upstreamed. And we're working on installer support for that reference container, so that we'll always have the latest and greatest version of collectd on the system just by installing that container.

That brings me to a demo. It's a bit of a work in progress at the moment to automate the configuration and deployment of collectd using Ansible. What I'm going to show is installing collectd on four compute nodes from your master node or your controller node, configuring them, deploying collectd, and then, on your master node, aggregating your metrics to that one point and storing and displaying them. So first of all, our Ansible script is going to create collectd configurations on our compute nodes. It's okay, this is a short demo; it's about four minutes and it hasn't been sped up.
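A playbook in the spirit of the demo's Ansible automation might look like the sketch below; the group name, package name, template name and file paths are all assumptions for illustration, not taken from the Barometer repository.

```yaml
# Hypothetical sketch: install collectd on the compute nodes, drop in a
# generated configuration, and (re)start the service.
- hosts: compute
  become: true
  tasks:
    - name: Install collectd
      package:
        name: collectd
        state: present

    - name: Generate the collectd configuration for enabled plugins
      template:
        src: collectd.conf.j2        # hypothetical template name
        dest: /etc/collectd/collectd.conf
      notify: restart collectd

  handlers:
    - name: restart collectd
      service:
        name: collectd
        state: restarted
```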
I don't think you can read it anyway. What's happening here is: on our master node, we're using Ansible to first of all configure collectd on our compute nodes. What it does is, for each Barometer plugin, it checks if the requirements are met and then enables and configures the appropriate plugins. Okay, so now we're done configuring on four nodes. I'm just going to check that those configurations exist, as well as enabling the read plugins; this also configures the compute nodes to send the metrics back to our master node. Now we're going to actually deploy the container. I'll first check that there is actually no container running, in case anyone had doubts. So that's collectd deployed on four different nodes, and it checked that it's running. Next up, we have to set up storage using InfluxDB, and we also want to set up Grafana so that we can see the metrics that are actually produced in a nice visual dashboard. I'm having trouble reading this from here, so the back of the room, don't worry. We're using Docker Compose to set up those two containers, InfluxDB and Grafana. Not only does it actually deploy Grafana, but it also sets up a load of pre-configured dashboards, so you don't have to spend hours going through the metrics that are available and picking what to put on your graphs. I'll add that this hasn't been sped up, and we're about two and a half minutes in. As you can see, there's a lot of metrics coming in, and we can see what's going on on various different nodes. What we're seeing here is just the compute usage per node, and you can get a cumulative, aggregated version, or you can see per-CPU metrics as well. In order to show you there's actually something happening, we're just going to stress the CPU so you can see how the metrics change and how quickly they're collected and updated. So we can see that activity that we just kicked off. That was a four-minute demo of how to set up Barometer. I think that's the first time we've actually shown Barometer being deployed.
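The InfluxDB-plus-Grafana step can be sketched as a docker-compose file like the following; the image tags and port mappings are assumptions rather than the demo's actual file. InfluxDB 1.x can ingest collectd's network protocol directly, which is what makes this pairing convenient.

```yaml
version: "3"
services:
  influxdb:
    image: influxdb:1.8            # 1.x supports a collectd listener
    ports:
      - "25826:25826/udp"          # collectd network protocol input
      - "8086:8086"                # InfluxDB HTTP API (Grafana datasource)
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"                # Grafana web UI
    depends_on:
      - influxdb
```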
Although it's not the first time we've actually shown Barometer in action: whether you knew it or not, in all the demos that have been showcased at OpenStack and OPNFV summits that had anything to do with metrics collection, with Doctor, with Vitrage, with OpenStack Watcher, what they were doing underneath was collecting metrics using those Barometer features. So you can look at those later; I think the slides will be put up soon.

So after that, where does Barometer go from here? During our next release: more plugins, obviously, more plugins. I'm not going to go through them here; there's a list on the OPNFV wiki, on the Barometer wiki, of what's actually planned. However, if you have any plugins you want to see enabled, or you're enabling any plugins, the Barometer team is usually happy to help with reviewing pull requests on collectd. We're also going to do some work on collectd "cloudification". This is to address some gaps we saw at the start with the configurability of collectd and with deploying it over multiple nodes: namely that currently, if you want to reconfigure collectd, you have to restart the service, and as you might be collecting metrics at a very high frequency over a lot of nodes, this could obviously take a lot of time, but also cause a discontinuity in the metrics, so you have gaps in your history, which is not ideal. So what we plan to implement is a bit of an abstraction and an API on top of it, so that you can configure it on the fly. This is handy in situations where, for example, at peak times you may want to collect metrics at a much higher frequency; or if you migrate your workloads and consolidate them onto a smaller number of hosts, for example to conserve power, you may want to increase the interval so you're not collecting metrics as often; or for certain workloads and certain compute hosts you may want to change, over time, the metrics that are actually collected. So that's part of the motivation to make it more
configurable, and more dynamically configurable. Of course, we're always open to collaborations, and we would like to see more people consuming Barometer and Barometer features. Basically, the goal in the next release is to enable more services to consume data and telemetry for all kinds of use cases, including orchestration, management, governance and audit, analytics, and so on. So, does anybody have any questions?

The question was: what are we using as our time-series database for collectd? Collectd supports multiple time-series databases. You could use Gnocchi, or you could use InfluxDB, or any other database that collectd supports; you're not limited to the ones I've outlined here.

Okay, how many data points are collected per host per second? That depends on a lot of things. Collectd has over a hundred plugins available at the moment; you're only going to want to enable a subset of these plugins, and each plugin will have many different metrics available, and it will also monitor or collect metrics on many different resources at the same time. So for the CPU, for example, you'd have utilization, free, interrupts and a bunch of other things, and that would be per CPU core, per host, and that's just one plugin. The frequency at which you collect them really depends on your use case as well, but I think we've tested down to sub-millisecond intervals.

How much overhead does that collection impose? I don't have the answer for that right now, but if you follow up, I might be able to find out, or provide you some tools to find out. More questions? Did I speak too fast?

Okay: with Barometer, can I run it on hosts and containers and VMs? Yes, you can, as long as there's network connectivity between them. You can relay the metrics from the many hosts that are running collectd to a designated collectd server via its network plugin.

No more questions? I will turn on the light again so we can see everybody. Thank you very much.
So if there are more questions, there's the wiki and the mailing list. Thank you very much.