All right, then I'm going to get started now, and I really am pleased to have Delano Seymour from 6fusion for today's OpenShift Commons briefing. He's going to talk about metering microservices on OpenShift. I saw this demo probably three weeks ago and got really excited. The idea of being able to meter, do billing and chargeback, and figure out everything that's going on in OpenShift 3, dealing with all the containers being scaled up and scaled down, has been an interesting thought process for me over the past few months as we've been moving towards OpenShift 3's release in the upcoming months. And when Delano showed me this wonderful demo, I thought, okay, we have to get this up in front of the Commons folks and get everybody's feedback on it. They're doing this in part to support one of their product offerings, which I'm sure Delano will explain, along with what 6fusion does. So without any further ado, I'll let Delano explain who 6fusion is, who he is, and why metering microservices on OpenShift is actually doable and pretty cool to see. So I'll let you take it away, Delano. Great. Thank you, Diane. How's everyone today? I'm going to go through some brief information, then a quick demo, and afterwards we'll have some time for questions and answers. My topic of discussion today is metering microservices on OpenShift. My name is Delano Seymour. I'm the CTO of 6fusion. I have my Twitter and email up there if you want them. 6fusion's goal is to create a method to meter infrastructure the way we meter electricity, creating a true utility in the cloud space. So what I'm going to do now is talk about some of the advancements that have happened in the cloud space, specifically around containers, and how we've adjusted to take advantage of containers.
So the first thing I want to show is that containers create some great benefits for our customers. For software owners, containers provide a consistent way to package and deploy code, an efficient way to distribute software, and improved portability of software between platforms. For infrastructure owners, containers provide a standard way to deploy software and a standard interface for orchestrating and running it. Containers also remove the need to have deep knowledge of the software in order to deploy and manage it; infrastructure owners can basically take a container and just run it. On the technology side, OpenShift has that part covered. It provides a platform to deploy containers that you create or containers created by others, the ability to deploy directly from code using its S2I (source-to-image) capabilities, and a simple way to run containers at scale. So OpenShift becomes a great home for microservices. Well, what are microservices? Microservices are small, lightweight services that can be combined to deliver a single application. In the old days, we had a monolithic application, or we would break an application down into multiple libraries. With microservices, we can break that application down into multiple services, each service operating independently of the others and communicating over a lightweight interface like HTTP. This microservices-style architecture fits very well with the OpenShift platform and allows you to run individual services that can be scaled automatically and independently of each other. Then another question comes up. We've got the technology side sorted out; we have the ability to create many microservices and split our application up into groups. But what about the financial side? What about how we want to share our software with others if it's a commercial package?
Well, how do you track that usage? How do you charge for that usage? And then the big question is, can you use that same method regardless of where that microservice is running? Because as we all know, microservices can run anywhere. They could be internal to your enterprise, or they could be on a public cloud. And OpenShift provides both an enterprise product and a cloud product, so you want to be able to get that holistic view. 6fusion believes the solution is to meter it like a true utility: you track the resources used by all the microservices that make up your application, and you provide a single metric that makes the financial conversation easy, removing the friction and allowing users to understand their consumption intuitively. It also provides a method to baseline and trend your consumption over time. Now, an analogy so that we can all wrap our minds around this: electricity. With electricity, you purchase a refrigerator, an air conditioner, a laptop, or a cell phone, and all you have to do is plug the device into the utility grid. Once you plug it in, it gets metered, and the utility uses that consumption data either to bill you or to recommend changes you might need. We want to do for compute what the utility grid has done for electricity. So real quick, how does it work? Step one, we collect resource usage for compute, network, and storage for each container that we find on a particular platform. Then we take six metrics, and we'll go through these one at a time. One is CPU, which we measure in megahertz. The second is memory, which we measure in megabytes. The third is storage, which we measure in gigabytes. And then there are three I/Os. There's disk I/O, which is the amount of bytes transferred from disk to memory, measured in kilobytes per second. And then we have two network I/O metrics.
One is LAN I/O, which is the network traffic, the bytes transferred between machines within the same network, versus WAN I/O, which is the bytes transferred or received across networks. We then take those six metrics, hence the name 6fusion, I guess, and fuse those metrics into one universal metric. We call it the Workload Allocation Cube, or WAC. So when a particular system is measured, what you see is how many Workload Allocation Cubes are being consumed at any given time interval. And just like the watt, the WAC can have different scales: it could be kilo-WACs, mega-WACs, et cetera. So let's go to step three. In step three, we collect the state of the current inventory. In the case of OpenShift, we collect the services that are running, the pods that are running, and the list of containers that are running. In step four, we take that data along with the meter data and submit it to our UC6 platform, which allows us to store those metrics and do analysis on them. And then in step five, we can provide consumption dashboards and reports, and we can also use the data for benchmarking, baselining, and other analytical processes. So what I'm going to do real quick is show you how we're doing this with OpenShift and containers. I'm just going to switch over my screen. What you're looking at is a version I've created where we can see the usage in real time: a list of pods that the system has interrogated. We have an Apache pod and we have a Docker registry pod, a total of four containers across the two pods. In this scenario, we're looking at the aggregate usage of a particular pod over time. And you can see that this pod is consuming about one milli-WAC of usage per hour, versus another pod, which is using three milli-WACs of usage per hour.
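The fusion step described above can be sketched in code. 6fusion has not published its actual contribution ratios (that comes up in the Q&A), so the weights below are purely illustrative placeholders, as are the sample values; only the six metric names and units come from the talk.

```python
# Hypothetical sketch of fusing the six metered resources into one
# WAC-style number. The weights are illustrative placeholders, NOT
# 6fusion's real (unpublished) contribution ratios.

# One sample of the six metrics, in the units named in the talk.
SAMPLE = {
    "cpu_mhz": 800.0,        # CPU, megahertz
    "memory_mb": 512.0,      # memory, megabytes
    "storage_gb": 10.0,      # storage, gigabytes
    "disk_io_kbps": 120.0,   # bytes moved disk-to-memory, KB/s
    "lan_io_kbps": 45.0,     # bytes moved within the network, KB/s
    "wan_io_kbps": 5.0,      # bytes moved across networks, KB/s
}

# Placeholder weights that normalize each metric's contribution.
WEIGHTS = {
    "cpu_mhz": 1 / 1000,
    "memory_mb": 1 / 1000,
    "storage_gb": 1 / 100,
    "disk_io_kbps": 1 / 500,
    "lan_io_kbps": 1 / 500,
    "wan_io_kbps": 1 / 500,
}

def fuse_to_wac(sample):
    """Collapse the six resource metrics into a single consumption value."""
    return sum(WEIGHTS[k] * v for k, v in sample.items())

print(round(fuse_to_wac(SAMPLE), 3))  # one number per interval, WAC-style
```

The point of the sketch is the shape of the idea — six independent utilization readings collapse into one number per time interval — not the particular ratios, which 6fusion defines per compute type.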
It makes it very simple to look at your consumption and communicate to the financial managers of your infrastructure how much is being consumed. And when you combine the usage of all of your pods in WACs with a price per WAC, you can calculate very simply how much it will cost to run that particular application over time. You can also use that for future planning, and you can use it for optimization as well. One thing that technologists want to do is make efficient applications. Being able to look at the usage of a particular application, and how it's using the resources of that infrastructure, means you can tweak your code or your settings or any other configuration to make that application more efficient on a particular infrastructure, thereby reducing the cost of that application. Some other use cases could be for ISVs. ISVs have software; a lot of software is open source, but there's also software that's not. For the software that's not open source, this provides an excellent way to track consumption and usage of your applications and your containers as a whole, for chargeback purposes, for billing purposes, or just to report consumption over time. With our technology today, we actually meter virtual machines at the hypervisor level, and we can meter individual operating systems using agents. Now, with containers, we want to take that to the next level and move up the stack, getting closer and closer to the process in which the application is running. And if we can get all three levels combined in one view, then you get a holistic view of what your applications are consuming and how much those applications are actually costing you to run.
Some other advantages are things like being able to benchmark how much an application would cost to run on OpenShift in my private infrastructure versus if I had to take it and put it out on public infrastructure. Using the consumption metrics with the Workload Allocation Cube, we can do measurements on both infrastructures and give you an idea of how much it will cost in either location. So that's about as far as I have for today. I wanted to open the discussion up for Q&A. I'm sure we're going to have a lot of questions. And Diane, if you can do that for me, I'd appreciate it. We have one already, and I think this is probably where we start to get into the deeper dive. John Osborn has asked: how do you collect this data? Is it an agent-based solution where you install agents in each container that report back to a service? So maybe you could tell us a little bit about how you architected this solution, where it's working, and at what level it's working inside of OpenShift. Okay, so what we've done here is create Docker containers that run inside of OpenShift. They have to be given access through the privileged parameter, and you have to mount the Docker socket, and the collector can then pull the metrics directly from OpenShift. This has to be done as an administrator of your OpenShift installation in the enterprise. So this is really something that the OpenShift administrator would have to install and manage for the deployed applications on that installation of OpenShift. That's correct. And it will pull all pods and all services, and it will map them back to your namespaces so that you can then reconcile or report on usage. We also have APIs that allow systems to pull that information directly.
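The two requirements Delano names, a privileged container and a mounted Docker socket, could be declared roughly like the pod manifest below. This is a hypothetical sketch: the pod name and image are placeholders, not 6fusion's actual artifacts, and only the privileged flag and the socket mount reflect what was described.

```yaml
# Hypothetical collector pod: privileged, with the host Docker socket
# mounted so it can read container metrics from outside the workloads.
apiVersion: v1
kind: Pod
metadata:
  name: metering-collector        # placeholder name
spec:
  containers:
  - name: collector
    image: example/metering-collector:latest   # placeholder image
    securityContext:
      privileged: true            # the "privileged parameter" from the talk
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock  # the host's Docker socket
```

Because the pod needs privileged access and a hostPath mount, only a cluster administrator can deploy it, which matches the point that this is installed and managed by the OpenShift administrator rather than by individual application teams.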
So once the data gets collected and aggregated, and all the analytics are completed, you can call our APIs and create queries that say: give me all my usage for the month, for the year, for the day, for the quarter. So John just asked: does it run in each container, or is it just monitoring each container? It's just monitoring each container. It runs outside of the container. In our world, we try to meter from the outside so that we don't affect the usage of the container itself. So this is definitely outside of the container. So John, I think I have your mic muted. Did that answer your question? If you have your microphone turned on. Yes, thank you. Okay, so Michael Virgil is also asking a question. He says: the WAC aggregates six underlying utilization metrics. What are the amounts of each of these that equal a single WAC? So the way we do it, we create a contribution model. We have what we call compute types, which define a sort of ratio between those numbers: CPU to memory to storage to disk. At this point, we haven't actually published those numbers, but we can definitely have a conversation and talk about that in more detail. So I think that answers Michael's question. If there are any other questions at all — those are the two key ones that came through today. That's one of the things I'm interested in, getting people's feedback. I wanted to get you in front of this audience, as well as everybody who watches the recording, because I think this is where we can get feedback from the community, the Commons group. I personally think this is hugely useful to anyone who's hosting or operating, whether on-premise, behind an enterprise, or doing the hybrid model you described.
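The "give me all my usage for the month, for the day, for the quarter" style of query boils down to bucketing timestamped WAC samples by period. The sketch below assumes the API returns per-hour (timestamp, WACs) pairs; that payload shape, the sample values, and the function name are illustrative assumptions, not the published API.

```python
# Hypothetical aggregation of per-hour WAC samples into period totals,
# mimicking the month/day/quarter queries described in the talk.
from collections import defaultdict
from datetime import datetime

# Assumed payload shape: (ISO-8601 hour, WACs consumed in that hour).
hourly_samples = [
    ("2015-05-01T00:00:00", 1.2),
    ("2015-05-01T01:00:00", 0.9),
    ("2015-05-02T00:00:00", 1.5),
    ("2015-06-01T00:00:00", 2.0),
]

def usage_by_period(samples, fmt):
    """Sum WAC samples into buckets keyed by a strftime pattern:
    '%Y-%m-%d' gives daily totals, '%Y-%m' gives monthly totals."""
    totals = defaultdict(float)
    for ts, wacs in samples:
        key = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S").strftime(fmt)
        totals[key] += wacs
    return dict(totals)

print(usage_by_period(hourly_samples, "%Y-%m"))     # monthly totals
print(usage_by_period(hourly_samples, "%Y-%m-%d"))  # daily totals
```

Multiplying any bucket's total by a price per WAC then yields the chargeback figure for that period, which is the single-metric financial conversation the talk is aiming for.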
And so I think this is kind of one of those things that everybody's going to end up either having to build themselves, or OpenShift engineering is going to have to build it, or we can get you to do this for us, which is what's awesome: you've stepped up and done it. So I'm really pleased with what you've done, and I'm wondering about other folks — if they could email the mailing list, or 6fusion, or me, and the regular folks on the IRC channel, and just talk about what else they need, what it's missing. So I've got another question that's just come in from Nicholas Schutz: what sort of performance impact and resource consumption does this have on OSE as a host? Yes, so the performance impact it puts on the host is relatively low. It does introduce some overhead because it has to collect those metrics. That can be fine-tuned, though, because what I'm showing in this demonstration is collecting the metrics every single second, but it could be tuned so that we don't have to collect every second; we could probably collect once every five minutes, or at some other interval. One other thing I want to point out is that we're trying to get feedback so that we can fine-tune this product into something customers find useful, and performance is one of the things we're looking to fine-tune. So I know that you have a visualization tool for the WAC and other offerings, because I've seen your demo before, but Michael Virgil is asking what reporting visualization is available. So maybe you can explain a little bit about 6fusion itself and what you do; that might be helpful. This terminal view and the API that you can call are wonderful things for us, but maybe a little bit more about your user experience and visualization. Yes, so we do have a console that allows our customers to log into our platform and bring up reporting.
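The fine-tuning Delano describes, collecting every second in the demo versus, say, every five minutes in production, amounts to averaging consecutive samples into coarser windows so fewer data points are shipped and stored. A minimal sketch of that downsampling, with the window size and averaging policy as illustrative assumptions:

```python
# Minimal sketch: average consecutive per-second samples into coarser
# windows (e.g. window=300 for five-minute buckets) to cut collection
# and storage overhead.

def downsample(samples, window):
    """Average each consecutive `window`-sized run of per-second samples;
    a trailing partial window is averaged over its actual length."""
    return [
        sum(samples[i:i + window]) / len(samples[i:i + window])
        for i in range(0, len(samples), window)
    ]

per_second = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(downsample(per_second, 3))  # two 3-second averages -> [2.0, 5.0]
```

Averaging preserves the consumption totals that billing needs while trading away short spikes, which is why the interval is a tunable rather than a fixed choice.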
So I'm going to just switch here. And I did not stage these questions; they're all coming in live, so perfect. I'm going to go ahead and show a particular account that I have here. In this account, you'll be able to see a list of machines. We have this concept of machines and infrastructures: an infrastructure is a collection of resources that can run particular workloads. In this case, I have a bunch of different resources, and I'm just going to go ahead and pick a particular one. I'll try to go back in time because this has some old data in it. But I'll be able to select a different account if that one doesn't have what I want in it. This is why live demos with live data are a wonderful thing. So we do have this concept of organizations, and I'm just going to switch to the demo organization. In this demo organization, I have a few different infrastructures. We have a meter for every infrastructure, and we have different types of meters. If you look in this list, I have a meter for AWS, VMware, Xen, and then the OpenShift meter is the one that we're working on right now to integrate into our console. And for this particular infrastructure — it's a VMware-based infrastructure behind this — I can see that I have some consumption: minimum, maximum, average consumption, and my total consumption. I also give you the average usage of the individual six resources: average CPU, average memory, average storage, et cetera. And you can also see a chart that shows the six metrics alongside each other. I can turn off any of these six metrics to show just one. So in this case, I'm showing my consumption over time, and I can move, of course, to see that consumption. And I can then compare that consumption to, say, CPU. There's a correlation between my consumption going up and down and the variability in my CPU. We can also do things like get a list of machines.
So this particular infrastructure has a bunch of machines. There are two that are started, 15 that are stopped, and one that's deleted, for a total of 18 in this particular environment. And it gives you how many WACs across those categories. We also have a more granular report where you can pick a particular machine out of that infrastructure. I'm going to go ahead and click this filter here, where I'm going to select the infrastructure we were looking at, and then I'm going to pick one of these machines. So I'm going to pick this prod app one. Actually, I'm going to pick the firewall, since it's actually started. And I could pick one or more machines. In this case, I'm going to pick just one machine and go ahead and show the report. By default, it picks the last month of usage, and it shows a similar report. As this loads, I can again get that chart where we can see all of the different metrics at any given time. You can see that we had some drop-offs in the middle here. It shows me my total usage in kilo-WACs, and it shows my average and maximum for all six metrics. I can drill down into the details and look at this from an hourly point of view: for every single hour, what was the consumption? And I could scale this back in time to any hour, at any time. Can I stop you? Can I get you to click on the Reports tab? Somebody's asking to take a peek at that. So let's click on the Reports tab. On the Reports tab, we have a few reports that you can access. One is the machine report, which you actually saw already, so there's no sense bringing that one up again. But we also have a tagging report, a utilization report, and a chargeback report. And then we have an Amazon price comparison report. I should probably save that one for another conversation, but we do have that report.
In the utilization report, for instance, I can pick that particular infrastructure and look at how it's being utilized from a financial point of view. My target cost in this particular infrastructure is $41.20, but my actual cost is $148.00. Why? Because I'm not really using all of the resources that I actually have. We can also look down and see what the consumption looks like over that same period of time for that infrastructure: actual versus target consumption. So I've got another quick question here: is this reporting interface available for on-premise installations, or is it just a SaaS offering from 6fusion at the moment? At the moment, it's a SaaS offering. We've been entertaining the concept of making this an on-premise offering. We have a couple of customers that have asked for that, and we've been discussing it in our dev meetings. It shouldn't be that difficult, actually, to make this on-premise, but today it is a SaaS-only offering. Well, that looks like all of the questions that we had. If you want to throw up your last slide with your notes and your email address again, that would probably be great. I think you've really inspired a lot of conversations here. This is going to be a wonderful resource for people who are deploying OpenShift on lots of different infrastructures, as you're showcasing with Amazon and VMware, once you've got the containerization added in. A quick question, though — Nicholas's quick question is: is there a solution for V2? At the moment, no. Everything right now is really focused on V3 and the next release. So this is really a forward-looking opportunity for metering, which I think is partially why I'm so excited about it.
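The target-versus-actual numbers from the demo make a simple worked example. The $41.20 and $148.00 figures are from the report shown; the efficiency and overspend formulas are an assumption about how such a report might derive its comparison, not 6fusion's documented calculation.

```python
# Worked version of the utilization report shown in the demo:
# target cost vs. actual cost for one infrastructure. The derived
# ratios are illustrative, not 6fusion's published methodology.

target_cost = 41.20    # cost if provisioned resources were fully used
actual_cost = 148.00   # cost actually incurred for the infrastructure

efficiency = target_cost / actual_cost   # share of spend doing useful work
overspend = actual_cost - target_cost    # dollars paid for idle capacity

print(f"utilization efficiency: {efficiency:.1%}")  # prints 27.8%
print(f"overspend: ${overspend:.2f}")               # prints $106.80
```

Roughly three quarters of the spend in this example buys capacity that sits idle, which is exactly the gap the report surfaces and the optimization use case discussed earlier targets.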
The other thing is, your utilization reports are by machine. Is the thought that when we go to Docker and containers, you'll be able to meter and show which applications are consuming, breaking it down even further than the machine level? Yes. That's exactly why we love OpenShift and the Docker and container world. Because with machines, the best we could do is meter at the machine level: all of the applications and processes running inside that machine are aggregated, and therefore you just see one aggregated view of that machine. With containers, we can move up the stack. We can get inside the machine. At this point, now, we can look at all the containers that make up a particular application, and we can report on the application itself. What we want to add to this is another tab, or another area, where you can look at your namespace or your project as a whole, and then drill down into your project and look at your services. For every service, there's a pod or a group of pods behind that service, so we want to be able to drill down into that group of pods, and then eventually drill right down into which containers inside that pod are actually driving consumption. We want to give customers the full view of their usage. So this to me is a necessity, and it's totally exciting that you guys are working on this. This is what Commons really is about: we don't have to invent everything on the OpenShift community side or with Red Hat engineers, and this is a wonderful example. I'm hoping that everyone who's listening to this call now, and other folks, will help you with some feedback on what they're looking for in terms of reporting and chargeback and the detail level. So if you have questions or further feedback, please post to the Commons mailing list or send an email directly to Delano.
If you want to throw up your slide with your email address on it again, that would be a good way to end this conversation. We'll get you the feedback you're looking for and move this forward as quickly as possible, because the general availability of OpenShift 3 is coming very soon, and we're hoping that we can do a lot of work with you guys. So thank you very much, Delano, for taking the time today. This is really cool stuff, and I think everybody who's in the chat now realizes that too. So thanks again, and thanks again to 6fusion for making this possible today. Yes, thank you, Diane. All right, take care, everyone.