All right. Can you guys hear me? All right. Thanks for stopping by, and I think we have more people coming in. So today we are going to talk about cloud operations and security, with a little bit of introduction about WAN Clouds, and I will also give you a short demo of our platform. We specifically focus on day-two operations for OpenStack. Our goal is to improve the availability and visibility of your OpenStack deployment through a platform that is simple, scalable, and flexible. We started a couple of years ago. We provide services, and we have also been building the platform that I will share with you in a little bit. We're based in Santa Clara, with global offices as well. The problem we have been trying to solve is that once your OpenStack is deployed, somebody is going to be stuck with it on a 24-by-7 basis as day-two operations. OpenStack is not one of those things you just deploy and forget about. People are going to be touching it on an ongoing basis, and as they do, problems will occur. So the goal is: how do you quickly resolve issues? How do you set up proactive checks? How do you make sure the services enabled through your OpenStack are working as expected? Currently, the biggest problem is that a lot of these things are done manually, and as you do things manually, problems occur. More importantly, it costs a lot of money, and it affects your overall availability, because things take a long time and are done through manual effort. What we believe is: is it really necessary, every time a problem occurs, to find a needle in a haystack? Can we give you a magnet instead of making you go through that search every single time? So there are three things we attempted to solve that I will share in the demo. One area is log analysis.
When a problem occurs, you can collect a log, which could be for a single component like Nova or Cinder or Neutron, and we have built a custom algorithm to give you the answer in literally 30 seconds to a couple of minutes. Things that take three or four hours today, manually going through those logs, it will do in 30 seconds. The second area for us is service assurance, which is very important because you want to make sure that the services you have enabled are running as you expect them to run. The third area is setting up proactive checks and rules. Because OpenStack is something people are going to be touching every single day, or at least a couple of times a week, you want to make sure nobody goes and changes something that causes a problem, leaving you to troubleshoot and discover what and who changed which configuration parameter. So, as I mentioned, three areas. One is the analytics part, and that's where we say: why look for a needle in a haystack every single time when we can give you a magnet? The second is assurance: can we give you the ability to set up these assurance tests before something goes wrong and becomes a crisis? Can we give you the tools so that if something goes wrong, you get a warning? The third area is that it's your cloud: you set the rules for what's important to you, and every few seconds we track those rules. If anybody goes and mucks around with your configuration, we will report it and send you an alert. What we believe is that with this process, based on what we have measured so far, we can reduce time from hours to literally seconds and minutes. We can give you a 95-percent-plus productivity gain over what you are dealing with in your OpenStack deployment issues today, and that comes through analytics.
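The log analysis idea described above, checking a component's log against the sequence of statements an operation is expected to emit, can be sketched in a few lines. This is an illustrative sketch only: the platform's actual algorithm is not public, and the log patterns below are hypothetical placeholders for a Nova launch-instance flow.

```python
import re

# Hypothetical expected event sequence for a "launch instance" call;
# the real patterns and matching algorithm belong to the vendor's platform.
EXPECTED_SEQUENCE = [
    r"Attempting claim",
    r"Claim successful",
    r"Creating image",
    r"Instance spawned successfully",
]

def find_missing_step(log_lines, expected=EXPECTED_SEQUENCE):
    """Walk the log once; return the first expected statement that never
    appears in order, or None if the whole sequence was observed."""
    idx = 0
    for line in log_lines:
        if idx < len(expected) and re.search(expected[idx], line):
            idx += 1
    return expected[idx] if idx < len(expected) else None
```

A single linear pass like this is why such a diagnosis can finish in seconds even on large log files, instead of a human scanning thousands of lines.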
As I mentioned earlier, when problems occur, on average you spend an hour or two to troubleshoot. If I can get you to a quick diagnosis in seconds or a minute, that gives you a huge productivity gain. More importantly, it improves the availability of the deployment, whether it's in your development and testing environment, pre-production, or production. We give you a lot more insight into what probably caused the issue. The second thing is assurance: you set it up once, and you don't have to have resources tied up every single day running scripts; we have automated that process. The other area is compliance, where you set your rules once against your deployment, and we track those rules on an ongoing basis. In every area, the overall idea is to give you increased visibility and predictability for your OpenStack deployment. Again, we are focusing specifically on day-two operations. Now, which versions do we support? We support all the community versions. As far as log analysis is concerned, we are working on Mitaka, but from Icehouse all the way to Liberty, we support them. You can also deploy your OpenStack through, for example, Mirantis Fuel, or use the Red Hat distribution or any other distribution out there, and we will support it. The platform is SaaS-based, so you can try it out today; we have defined packages for it, and you can sign up at chiab.net or whenclouds.net. In addition to the platform, we also provide services around OpenStack, containers and Docker, as well as cloud services. So I'm going to switch over and show you a little bit of how the platform actually looks. Give me a second. OK. This is how our platform looks. What you see here is a case where, let's say, there is a problem that you experienced.
Through log analysis, we shorten your time to the diagnosis of the issue. In this log file, you see three components we detected: Cinder, Neutron, and Nova. If you look at the way we track the failures, in this summary the things that were successful you are obviously not interested in; the things that were not successful, like API calls that didn't succeed, or core service errors, are what we will look at. For example, if you look at one of these calls, which was a launch instance, things started out fine. What we show you here is that at some stage we were expecting a certain statement in your log, in this sequence, and we didn't see it, and we list the core service error alongside it as well. To show how quick this process is, I'm going to try to upload a file here. These are the versions we support and have tested against, including Red Hat and all the community versions. As I mentioned, you can have a private distribution through Red Hat or others, Mirantis, et cetera. We have tested against all these community-based distributions, and it works just fine. In this case, I've taken a file where there was an issue, and I'm going to analyze it. I just want to show you how quickly, instead of you going through thousands of lines of log files, I can diagnose the issue and give you the analysis you're looking for. This is the same file I was showing earlier, which gave me the analysis, and I can now go straight to the root cause analysis of the issue. From an integration perspective, you can add your deployments by just providing, for example, the auth URL, a username, and a password. You can give it whatever name you want, and it takes literally two or three seconds to add your deployment, however fast you can type.
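Adding a deployment with an auth URL, username, and password corresponds to authenticating against the cloud's Keystone identity service. As a sketch of what a tool does behind that form, here is the standard Keystone v3 password-authentication request body; the credential values are placeholders, and how the platform stores or scopes them is an assumption.

```python
def keystone_auth_payload(username, password, project_name,
                          user_domain="Default", project_domain="Default"):
    """Build the Keystone v3 password-auth request body that a client
    POSTs to <auth_url>/auth/tokens when registering a deployment."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {
                    "name": username,
                    "domain": {"name": user_domain},
                    "password": password,
                }},
            },
            "scope": {"project": {
                "name": project_name,
                "domain": {"name": project_domain},
            }},
        }
    }
```

Once the token comes back, the service catalog in the response tells the tool where Nova, Cinder, and Neutron live, which is all it needs to start polling the deployment.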
We have also done an integration with Logstash. So you can either upload your log file directly, or you can integrate your Logstash server and we will pull log files from there as well. The third tab you see here lets you share your deployment with colleagues or partners. In this case, I have four deployments; I can share one of them with a colleague, and this way I'm able to collaborate with them on service tests and other things. In the case of assurance, what we provide is built to be very consumable. We extract all the components that you're running in your OpenStack deployment, and as you look at the various components, you can enable whatever tests you feel are important to you. For test purposes, I've enabled these tests, and you can configure them with your own parameters. You can set an SLA here: for example, the SLA could be that I want this to pass 100 percent of the time, or it could be 95 to 98 percent. You can also tweak the values and set your benchmarks. When you are done, you can either run the test on demand or schedule it. Scheduled means that, the way we have configured it right now, twice a day it will go out and run these tests, and if there is a problem, you will get an alert through email. This way, before your user reports the issue and it becomes a crisis, we make sure that in case of a failure we give you, as in this case, full details of the failure, and you will see the last five failures and what actually happened. As I mentioned, the other advantage is that you set your profile once. Your infrastructure will change over time.
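The SLA check described above, alerting when a scheduled assurance test's pass rate drops below your configured threshold, reduces to a small calculation. This is a minimal sketch assuming the simplest model (a rolling window of pass/fail results per test); the platform's real windowing and alerting logic is not public.

```python
def sla_breached(results, sla_percent):
    """results: recent pass/fail booleans for one assurance test.
    Return True if the observed pass rate falls below the SLA threshold."""
    if not results:
        return False  # no runs yet, nothing to alert on
    pass_rate = 100.0 * sum(results) / len(results)
    return pass_rate < sla_percent
```

With a 95 percent SLA, one failure in the last twenty runs is exactly on target and raises nothing, while one failure in ten triggers the email alert.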
You can add more functionality to your deployment and enable more components, and as you do that, the profile and the tests that you have set always remain centralized. You don't need to go change any scripts; this is done once. The whole idea is that I can save you time: the DevOps person, or the one or two people you have assigned to this, can suddenly be freed up. From our perspective, we want to take the manual effort out of the day-to-day work and give you a tremendous productivity gain by automating it. Do it once; you can tweak things as you go along. If your benchmarks change, you can make the adjustments, but once you do, you don't have to come back every single day. You can schedule these tests to run automatically, and you can also run them on demand. The other thing we have here is what we call operational rules. In the operational rules, we define key-value pairs. What happens is that OpenStack has very big config files, so against each deployment that you have, I can set up values to track. I'm going to configure something here: a value of false, in the DEFAULT section, on a controller node, and I'm basically saying, go track this. Every few seconds, it will go and make sure these values remain the same. The reason we did this is that when a problem happens, what is the first natural question people ask? Who went and changed what? This is meant to answer that question. By tracking these operational rules, in case somebody has changed something, we will create an alert, and through that alert you can see, hey, my expected value was false, and by the way, nothing is actually set, which is why it became an error.
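Since OpenStack config files are INI-style, a single operational rule check like the one demonstrated, expected key, expected value, section, node, can be sketched with the standard library. This is an illustrative implementation under that assumption, not the platform's code; note it distinguishes a wrong value from the "nothing is set" case mentioned above.

```python
import configparser

def check_rule(conf_text, section, key, expected):
    """Check one operational rule against a config file's contents.
    Return None if the rule holds, else an alert dict describing the
    drift (actual is None when the key is not set at all)."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    actual = cp.get(section, key, fallback=None)
    if actual == expected:
        return None
    return {"section": section, "key": key,
            "expected": expected, "actual": actual}
```

Running such checks every few seconds and recording who was logged in when a value drifted is what turns "who changed what?" from an investigation into a lookup.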
Again, our intent is to make the deployment you have highly predictable and available through these values. The other area, which is in beta for us right now, is critical VMs. What this means is that in your deployment, you can look at a tenant, or at a subset of instances or machines, that you feel are running your production application. The question is: if anything goes wrong with any of those machines, I want to know instantly. Here we are providing visibility for that specific question. As I mentioned, this is in beta right now, but we will give you a full audit of those machines if there is an error associated with them. For example, in this case, these alerts are coming in, and if I go to one of the alerts, I can tell you the exact error trace, the instance ID, and the user it belongs to, and I can give you a full audit trail of that VM: when it was created, when an error occurred, and when the instance was deleted. This gives you a full picture and visibility of what is happening in your environment, especially the critical VMs. Or if you have an important tenant, you can say: if anything happens with this tenant, I need to know instantly. If any VM fails, or if you're migrating VMs from one hypervisor to the next and something fails during that process, we are tracking all of that, and it's instant. Within a second of it happening, since we are ingesting a lot of data from the deployment, we make sure we report it as soon as it occurs. The whole idea of Chai, literally, the word chai means tea, is: grab a cup of chai, sit back, relax, and let us do the hard work for you. Let us remove the manual effort.
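Building a per-VM audit trail from a watchlist, as described for the critical-VMs feature, is essentially filtering and grouping an event stream. The sketch below assumes a simplified event shape (`instance_id`, `event`, `timestamp` dicts); real OpenStack notifications carry much richer payloads, and the instance ID and event names here are hypothetical.

```python
from collections import defaultdict

# Hypothetical watchlist of instance IDs flagged as critical.
WATCHED = {"inst-42"}

def audit_trail(events, watched=WATCHED):
    """Group a per-VM history (timestamp, event) for watched instances
    from a stream of notification events, ignoring all other VMs."""
    trail = defaultdict(list)
    for e in events:
        if e["instance_id"] in watched:
            trail[e["instance_id"]].append((e["timestamp"], e["event"]))
    return dict(trail)
```

Evaluating each event against the watchlist as it is ingested, rather than querying afterward, is what makes the alert effectively instant.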
Let us shorten the time it takes to do some of these things, because today, a lot of this happens manually, and we have felt that ourselves. We are trying to automate a lot of it, make the environment more predictable, and, at the end of the day, increase the availability of your OpenStack deployment. You can add your deployments and perform all these functions from a single pane of glass. People have written small tools here and there, but that doesn't give you a common interface and a common place to get your answers. This provides both reactive and proactive support for your OpenStack deployment and handles the day-two issues and problems that occur. We are striving to make more additions to this platform in terms of real-time analytics as it happens, and we believe that with what we have today, we can make your life a lot easier, especially if you are in a DevOps or support role, by making the environment highly predictable. We do have a booth in one of the right corners, I believe it's B16; please do come by. We can discuss in more detail some of the areas or components that we have built, how you can add your deployments, and how we can do this today. This is software as a service, so you can sign up and start using it by going to chiab.net or vanclouds.net; from either site, you can sign up, and we have a 30-day trial package. In case you require a private deployment, we can create that as well, in a SaaS manner but private to you: both front end and back end will be private, and you can attach your deployments. Some customers have also asked us whether we can deploy this on-prem, and we do have that model as well.
So with this, I'm going to wrap up my session. If there are any questions or comments, as I mentioned, please feel free to come by our booth; we are here for whatever time is left in the summit. You can also contact us through our website, and we would be happy to share more details with you if you have any queries. Thank you very much.