Hi everybody, this is Dave Vellante at Wikibon headquarters. Welcome to CUBE Conversations. There's a big discussion in the industry and the big data field about analytics and how to apply analytics to improve infrastructure, and a lot of people are talking about applying machine learning to make infrastructure better, to automate it, and to remediate challenges and problems. We're here with Jerry Melnick, who's the Chief Operating Officer of SIOS, and the company has a new announcement in this space. Jerry, welcome to theCUBE.

Thanks, Dave.

So historically, SIOS is known for its expertise in high availability. You're making a new announcement today, but give us the high level of SIOS.

SIOS is a publicly held company in Japan. We are the US subsidiary, based out of San Mateo, responsible for the R&D and for the sales and marketing of products in the high availability space. We've been in this business for 15 years, helping customers deliver SLAs and deliver important capabilities to their customers and users, and we're making an announcement about SIOS iQ, our analytics software, which really is an extension of that line.

So we do a lot of these Hadoop and big data shows. Everybody's talking about machine learning, about applying analytics; machine learning is the future. SIOS iQ is here today, here now. Tell us about it.

Okay, I'll tell you a little bit about how we came to it. Again, our customers are high availability customers, moving critical apps into cloud and virtualized environments in a big way. That's happened over the last two or three years.
They were late to the party, just starting that move much later, but taking advantage of the flexibility. They were used to very well defined, well constrained, deterministic environments in which those applications always recovered and always produced certain levels of performance. Then they moved to virtualized environments, and now they have a dilemma: how can they achieve the SLAs they had in those physical environments and also take advantage of the flexibility of virtual and cloud environments? They ran into a wall. And the wall, quite frankly, was that this is not the deterministic environment of physical servers. There are lots of things going on, a lot of complex interactions, and we saw the opportunity to help them by applying intelligence to how they operate there, beyond what we're doing today in our current HA products.

So essentially you've got this black box environment when you went from bare metal to a virtual system. People talk about the IO blender; you really don't know what's connected to what. That's part of the problem that you're solving, right?

Yeah, that's right. You have those interactions, and you have complexity, complexity of a kind where, if we showed you the graph that we developed of the various interrelationships, you would see that these are really complicated systems. To understand what's going on and to deliver on the performance, efficiency, reliability, and capacity needs of the customer is really hard.

We talk to practitioners all the time, in the early days and still today, and they were just so frustrated at the time it would take them to diagnose a problem and remediate it. They'd have to bring in their best and brightest people. Are you able to compress that cycle and simplify it, so I don't need rocket scientists to solve that problem?

That's right, we have the rocket scientists.
We've automated that with machine learning.

That's exactly right. We looked at this problem really carefully and involved customers very much from the beginning. We realized there were a lot of complex tools out there, and exactly as you said, it was a war room. When a problem occurred, a customer would call up, and the database guy would get the notice that the application was running too slow. What am I going to do? Then they'd go into triage, and triage required multiple people. So how do you simplify that? How do you understand what's going on there? Instead of using a dozen tools, use one tool that helps you find where to go first, so you can fix the problem and address that need as fast as possible, very much in real time. Simplicity was the key we were looking at from the beginning.

So how's it work? Do I have to install agents? Have you got some other magic sauce that you're using?

Remember, simplicity was the key. So right after the install, with no agents, you're basically five minutes from download to running. It's got a simple mobile touch interface, really one-click operations to get answers to very complex problems in very difficult environments. It's gathering information and learning patterns of behavior over time, becoming more and more intelligent as operations continue, so it can help you figure out what's going on.

And you're starting with VMware, is that right? You understand that environment and are expanding up from there?

That's right. We built an analytics platform, very different from a lot of other implementations in this space. Rather than adapting existing tools, even our clustering tools for that matter, we built a solution straight up from the bottom as an analytics platform.
We can acquire data sources from the infrastructure and from the application. It's built to be extensible and expandable. It's built to be very intelligent, using machine learning, not just one algorithm but any number of algorithms, specific to the questions we wanted answered.

So you've collected the data, you've integrated it, and now you can ask the questions, correlate, and get responses. Am I right that the IP is not the algorithms in and of themselves, it's how you're applying the algorithms?

It's really how we're putting the system together. Really three pieces: data aggregation, machine learning analytics, and then the delivery of that information in a highly consumable format in a touch mobile UI, which is really unprecedented in the industry, quite frankly.

And it's got to be real time, essentially.

That's right. We're continually learning. We're observing patterns of behavior, then looking for anomalies, reporting on those, and helping you identify why there's a problem and exactly what you need to do to fix it.

I want to stay on the tech for a second if we can. You have patents around this?

That's right, we have patents in a number of areas. There are really three major technology areas that we've developed here. One is our V-Graph technology, which is patented: it's how we provide a relationship graph, using topological analysis that automatically figures out all of the various objects in the system, how they're connected, and then how they're related. Then we have machine learning technologies, a variety of algorithms that we use in conjunction with that V-Graph technology to figure out root cause analysis and pattern analysis.
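The relationship-graph idea Jerry describes can be sketched in a few lines. This is purely an illustration, not SIOS's V-Graph implementation: the class, the object names, and the hop-based traversal are all invented for the example.

```python
from collections import defaultdict

# Minimal sketch of a relationship graph over discovered infrastructure
# objects. All names (RelationshipGraph, vm-web-01, etc.) are hypothetical.
class RelationshipGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def connect(self, a, b):
        """Record that two infrastructure objects are related."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def related(self, obj, depth=1):
        """Return objects reachable from `obj` within `depth` hops."""
        seen, frontier = {obj}, {obj}
        for _ in range(depth):
            frontier = {n for f in frontier for n in self.edges[f]} - seen
            seen |= frontier
        return seen - {obj}

g = RelationshipGraph()
g.connect("vm-web-01", "vswitch-0")     # VM attached to a virtual switch
g.connect("vm-web-01", "datastore-A")   # VM's disks live on this datastore
g.connect("datastore-A", "host-esx-1")  # datastore served by a host
print(sorted(g.related("vm-web-01", depth=2)))
```

Traversing such a graph is what lets a tool answer "what else is impacted?" when one object misbehaves: a latency spike on `datastore-A` implicates everything within a hop or two.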
And then on top of all that is our presentation technology, which we call the SIOS PERC, P-E-R-C: performance, efficiency, reliability, and capacity. That's the instrumentation that delivers the goods.

So that's your dashboard.

That's our dashboard.

Okay, and you've patented that?

That's right, because of the way we present it. This is not simple thresholding of CPU or memory. This is aggregated information, and that dashboard really presents four indicator lights, like you would see in your car, that indicate the health, or quality of service, you're delivering across your infrastructure: the dimension of performance; the dimension of efficiency, how efficiently you're using the infrastructure; the dimension of reliability, whether you can suffer a failure and recover from it; and capacity, where you're headed and whether you have enough capability in your infrastructure to go further.

So you've patented the math behind the aggregation, is that right?

We've patented how we've put it together, and we're certainly using unique techniques.

You're not patenting the math, but there's certainly a lot of math involved, right?

That's right. We're not using one super algorithm; we're using a variety of machine learning and other statistical techniques.

Interesting. There are a lot of things that sound different, but maybe you can summarize: how do you position this relative to the competition? A customer says, okay, how are you different from everybody else?

We're a true analytics play that's delivering the goods from the bottom all the way to the user. We really see ourselves as an overlay, where we can consume a variety of information, whether it's from the hypervisor, management tools, or other data sources, whatever we need to solve problems in the areas of performance, efficiency, reliability, and capacity.
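The "four indicator lights" idea can be illustrated with a toy roll-up. To be clear, this is an invented sketch: the metric names, thresholds, and worst-score rule are assumptions for illustration; the actual PERC aggregation is patented and far more sophisticated than a worst-case roll-up.

```python
# Toy roll-up of many low-level anomaly scores (0.0 = normal, 1.0 = severe)
# into four indicator lights. Thresholds are invented for illustration.
def indicator(scores):
    """Map the worst anomaly score in one dimension to a light color."""
    worst = max(scores)
    if worst < 0.5:
        return "green"
    return "yellow" if worst < 0.8 else "red"

def perc_dashboard(signals):
    """One light per dimension: performance, efficiency, reliability, capacity."""
    return {dim: indicator(scores) for dim, scores in signals.items()}

signals = {
    "performance": [0.2, 0.9],   # e.g. a datastore latency anomaly
    "efficiency":  [0.1, 0.3],
    "reliability": [0.6],        # e.g. reduced failover headroom
    "capacity":    [0.4],
}
print(perc_dashboard(signals))
```

The point of the aggregation is exactly what Jerry says: the operator sees four lights, not hundreds of raw thresholds.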
And then the real differentiation is that from that very sophisticated machine learning and understanding of your environment over time, we can address very specific solutions and use cases, where we answer very difficult questions, again, in complex environments.

Where do you fit in the stack?

We fit sort of in between everything. We're acquiring data from the infrastructure, compute, storage, and network, as well as from the application environment.

You are not an application performance monitoring tool.

We're not, and we're not a deep infrastructure tool either. We really see ourselves as the thing you pick up first, to understand where to look when a problem or an issue occurs and where to go next.

So when I get the feedback from the dashboard, it's telling me something. Take me through an example of how a customer would take action and create some kind of outcome.

Sure, a very simple example. In our performance dashboard you'll see a trend analysis of performance issues. Those issues are aggregations of anomalies that have occurred; they're not simple events where one CPU threshold was exceeded. We've automatically established thresholds and patterns. We understand the relationships between a virtual machine, its networks, and other virtual machines. We've watched those patterns and studied them over a period of time, and when we see an anomaly, where that set of objects is no longer acting the way it's expected to act, we call it out. We say: this is unusual behavior in your system, you need to look at it. With one click you can look at it, and we will show you the objects that are impacted by the anomaly, and the root cause of what happened and why, associated with the event that caused it. So within one click you can really see what happened in that pattern.

And once I know the root cause, then it's a straightforward action to take.
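The contrast Jerry draws with "one CPU threshold was exceeded" is the difference between a fixed cutoff and a learned baseline. Here is a minimal sketch of the learned-baseline idea, flagging values that deviate sharply from an object's own observed history. The window and the 3-sigma rule are illustrative choices, not SIOS's actual method.

```python
import statistics

# Sketch: instead of a fixed threshold, compare a new observation against
# the pattern learned from this object's own history. The metric, values,
# and 3-sigma cutoff are all invented for illustration.
def is_anomalous(history, value, sigmas=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return value != mean  # flat history: any change is notable
    return abs(value - mean) > sigmas * stdev

latency_ms = [10, 12, 11, 13, 12, 11, 10, 12]  # learned "normal" pattern
print(is_anomalous(latency_ms, 12))   # within the learned pattern
print(is_anomalous(latency_ms, 45))   # sharp deviation: call it out
```

A per-object baseline like this is why no manual threshold tuning is needed: what counts as "unusual" is derived from each object's own behavior.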
And we also incorporate this notion of semi-supervised learning, which means that if you know better, if this anomaly isn't really an anomaly, you can train the learning set. Now we've incorporated your knowledge into ours, and together we've automated this even further.

Okay, so essentially every customer who takes advantage of that capability is building their own sort of custom intelligent machine.

Yeah, for those environments, certainly. Most of the time it's going to run automatically. You're not setting thresholds, you're not tuning it. It's learning over time; anomalies are acquired, and its intelligence gets better over time.

And this is available as a subscription, is that right?

That's right, we're licensing it as a subscription, and it's an on-premise model. We did a lot of interaction with customers in the process, and they told us they were not necessarily comfortable with sending that data off-site. IT people are really concerned about privacy and security. So we decided an on-prem model was best, at least for now. What we wanted to do, though, was deliver new functionality against those data sets as fast as we could. So we're sprinting: every four to six weeks there are new releases. In fact, in the user interface a little red button shows up when there's a new release; you can press it, it'll load the new software, and then you're off and running again with new capability. We're going to be evolving this software very quickly, and it's available today.

It's available today. So you're using a DevOps culture to deliver the updates and the code.

That's right, two-week sprints.

Okay, so give us a little glimpse of what we can expect in the future in this general category of technology.
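The semi-supervised feedback loop Jerry mentions can be sketched very simply: detections the operator marks as benign are remembered and suppressed the next time. The class and the (object, metric) keying scheme are an invented simplification, not how SIOS iQ actually stores the feedback.

```python
# Sketch of operator feedback folded back into detection. Every name here
# (AnomalyFilter, vm-backup-01, disk_io) is hypothetical.
class AnomalyFilter:
    def __init__(self):
        self.benign = set()

    def mark_benign(self, obj, metric):
        """Operator says: this pattern is expected, don't alert on it again."""
        self.benign.add((obj, metric))

    def should_alert(self, obj, metric):
        """Suppress detections the operator has already explained away."""
        return (obj, metric) not in self.benign

f = AnomalyFilter()
print(f.should_alert("vm-backup-01", "disk_io"))  # raw detection fires
f.mark_benign("vm-backup-01", "disk_io")          # nightly backup is normal
print(f.should_alert("vm-backup-01", "disk_io"))  # learned from the operator
```

This is the "custom intelligent machine" point from the conversation: each environment's suppressions accumulate, so the system's judgment becomes specific to that customer.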
Sure. I mentioned four dimensions: performance, efficiency, reliability, and capacity. We're starting with performance because our customers told us performance is the thing they deal with every day: help us there. Efficiency is certainly cost-related: am I using my system well? Can I consolidate more? Am I using too much memory? We've done a few interesting use cases, like one-click analysis of which virtual machines are idle: a very simple question, but actually a very hard problem, as it turns out, in very large environments. On reliability, people really don't have a good way to figure out whether their system can sustain a failure, how many hosts can fail, or what the right recovery plan is in these environments. We can help with that, given the data we've acquired. And then capacity: that's where this really goes. We're starting off helping the IT operations guys make sure the thing's running, but over time it can also help CIOs with capacity planning, with understanding where the costs are in my system, how much an application really costs in real time, and whether there are other ways to deploy that application. Because we get to understand your application, what resources it uses, and what impact it has on an environment, we're in a unique position to project, simulate, and recommend the best solution and where to put it.

You've just got the data.

We've got the data.

All right, Jerry Melnick, we'll have to leave it there. Thanks very much for coming on theCUBE and sharing the SIOS iQ announcement. Good luck, and we'll be watching.

Thank you.

All right, thanks for watching everybody. This is CUBE Conversations. Dave Vellante. We'll see you next time.