It's theCUBE, covering HPE Big Data Conference 2016. Now, here are your hosts, Dave Vellante and Paul Gillin.

We are here live in Boston, everybody, at the HPE Big Data Conference. Hashtag SeizeTheData. This is theCUBE, the worldwide leader in live tech coverage. Tom Mag is here. He's the Vice President of Business Development at Spirent, a customer analytics and customer experience analytics company. Welcome to theCUBE, good to see you.

Thank you. Thanks for inviting me.

Actually, you're a hundred-plus-year-old company, or no, not quite a hundred years old, right? About 80 years old. Spirent has a long history, starting out as a testing company, but you came in through the customer experience analytics side of the business.

That's correct, yes. Spirent has a long history, starting out in the 1920s installing line equipment on poles for electric service. More recently they've morphed into concentrating on test analytics across the board, primarily for communication service providers. Then three years ago, Spirent purchased DAX Technologies, which is the company I came in with, that does customer experience management analytics. They wanted to move from the lab environment, where they do pre-launch testing to make sure communication service providers can launch high-quality services, to actually getting into the live network and being able to monitor it to ensure that quality continues.

What's it like working for a company that's been around that long, that started in an area completely different from what you do? What's the culture like that allows a company to thrive for that many years?

Well, it's innovation, right? When I said they'd moved to test analytics, that was probably about 30 years ago. So the culture had moved from the early days to more of the high tech.
But they do have the ability to innovate, bring in new customers, and go in the direction the industry is going, to obviously maximize profit.

Go where the puck will be.

Yes, exactly.

Dave and I had a call a couple of days ago. He was driving into the conference, I was on my office line, and the call dropped, I think, three times during that half-hour call. One of the things you're doing is trying to measure customer experience, things like call quality. But customers don't actually tell you when there's a customer experience problem. How do you determine that?

Oh, that's an excellent question. So when your call dropped, basically what our solution does is look at customer experience from a network quality point of view. When we deploy at a service provider, we're collecting all of the metrics around that call. How quickly did it get set up? Did it stay up? Did it drop? Why? So we can determine if you were impacted from a quality point of view. But those are all objective measures, right? How do we know whether one dropped call a week, or one an hour, is going to cause you to be dissatisfied as a customer? So we take those objective measurements and pull in that data. We can be collecting 500 to 1,000 events per subscriber per day; take 100 million subscribers, and that's 50 billion to 100 billion messages a day. That's where the big data comes in. That's where we leverage Vertica, the ability to pull that data in, analyze it, aggregate it. We baseline those objective measures, using Vertica, R, and other analytics, against subjective information about your customer experience.

And examples of subjective information?

It could be a customer survey. If you called up and complained, that's an important piece of information. But a lot of customers, we find, don't want to. It's painful to dial 611 and go through 50 prompts when you're not sure if you're being heard or not. We know how it is.
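The objective per-subscriber roll-up Tom describes, turning raw call events (setup time, drops) into per-subscriber metrics, can be sketched as follows. This is a minimal illustration, not Spirent's implementation; the event schema and field names are invented.

```python
from dataclasses import dataclass

# Hypothetical per-call event; real deployments collect
# 500 to 1,000 such metrics per subscriber per day.
@dataclass
class CallEvent:
    subscriber_id: str
    setup_ms: int   # how quickly the call got set up
    dropped: bool   # did it stay up, or drop?

def subscriber_kpis(events):
    """Aggregate raw call events into objective per-subscriber KPIs."""
    totals = {}
    for e in events:
        s = totals.setdefault(e.subscriber_id,
                              {"calls": 0, "drops": 0, "setup_ms_total": 0})
        s["calls"] += 1
        s["drops"] += e.dropped          # bool counts as 0/1
        s["setup_ms_total"] += e.setup_ms
    return {
        sub: {
            "pct_successful_calls": 100.0 * (s["calls"] - s["drops"]) / s["calls"],
            "avg_setup_ms": s["setup_ms_total"] / s["calls"],
        }
        for sub, s in totals.items()
    }

events = [
    CallEvent("alice", 900, False),
    CallEvent("alice", 1200, True),
    CallEvent("bob", 800, False),
]
kpis = subscriber_kpis(events)
print(kpis)
```

At production scale this roll-up happens inside the database rather than in application code, but the shape of the computation is the same: raw events in, a handful of KPIs per subscriber out.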
I mean, if you're bored in the car and you're driving, you want somebody to talk to you about work.

But what we do is look at churn. That's a strong indication. If you churned after those events occurred, that tells us something. There's NPS, and social media too; we can pull in unstructured data, so if you're complaining, we can relate that to a particular subscriber. We can then formulate a relationship between what we call KPIs, key performance indicators, and the subjective measures. A key performance indicator would be your percentage of successful calls. So we ask: how often do people churn, and how important is a dropped call to customer churn? We use Vertica and R to come up with an attribute importance. Typically, operators could be measuring 50 to 100 different KPIs. But what we discovered is that when it comes down to it, it's five or 10 of them that really make a difference. Using the analytics, we can tell which five or 10 of those make the difference, and then weight those. We put that back into what we call a QoE, quality of experience, formula. We use those weights to get the best-fit prediction of your quality of experience versus what the network's showing.

Okay, so Tom, I'm a service provider. I like what I'm hearing. How do I engage with you?

So we have a couple of different models. One is pure on-site software, where you would engage with us and we would analyze. One of the keys, as I mentioned, is collecting 500 to 1,000 pieces of data per call transaction. Usually it's in multiple systems. So if you're a service provider, you probably have a lot of this data, but it's all over the place. One of the things we would do is come in and analyze: where is the data we need from a customer experience management point of view, and how do we go about obtaining it and pulling it in?
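The weighted QoE formula Tom describes, where attribute importance learned from churn data weights a handful of top KPIs, reduces to a weighted sum. The KPI names and weights below are invented for illustration; Spirent's actual weights would come out of the R-based attribute-importance analysis.

```python
# Hypothetical attribute-importance weights, as might be fitted by
# regressing churn against each KPI; only the five or ten KPIs that
# "really make a difference" get nonzero weight.
WEIGHTS = {
    "pct_successful_calls": 0.5,
    "pct_successful_data_sessions": 0.3,
    "avg_throughput_score": 0.2,
}

def qoe_score(kpis):
    """Quality-of-experience score: weighted sum of 0-100 KPI values."""
    return sum(WEIGHTS[name] * kpis[name] for name in WEIGHTS)

sample = {
    "pct_successful_calls": 98.0,
    "pct_successful_data_sessions": 95.0,
    "avg_throughput_score": 80.0,
}
print(qoe_score(sample))  # weighted sum of the three KPIs, roughly 93.5
```

The point of the weighting is that a drop in a high-importance KPI (percent successful calls here) moves the predicted experience much more than the same drop in a low-importance one.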
Perhaps some of it's sitting in a Hadoop data lake. Vertica's perfect for that; we have a Hadoop connector where we can pull the data. We're not going to replicate the data. What we're going to do is pull that data in, aggregate it, look for the hotspots, and just locally process the aggregations. So we're not replicating it.

When you say pull it in, you mean inspect?

Inspect, yeah, okay. We're going to pull it in, run it through Vertica, and do the aggregates. Out of the 50 billion records a day, you're not going to run all of that through R. What we're going to do is take an aggregate, come up with KPIs, KQIs, and QoE from those records, and then run that through predictive analytics.

So I would acquire an appliance, for example, from you, or a service from you?

Yeah, so two things. One model is a perpetual license, where you would acquire the software. We would deploy that in conjunction with HP hardware, or, since we're open from a vendor point of view, whatever you have, and you would run that on-site locally. The other model is more of a services model, an annual license, where you can buy it on a subscription basis.

Okay.

Typically, with the volume of data, unless you're a smaller service provider, the solution is hosted within your firewall, also because of privacy. We are analyzing this purely from a network customer experience quality point of view, but there is obviously information in the data that's of a private nature, right? Who you are, who you were calling, all that. We don't need to know that; we just need to know whether the call dropped or not. So a lot of times that's kept in-house. For some of the small deployments we've done, it could be cloud-based, especially if you just want to keep the aggregates in one place. If you have multiple properties, you may want to keep them in one place and analyze and compare how one country is doing versus another.
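The aggregate-first pipeline Tom outlines, rolling billions of raw records up into KPI aggregates so that only the summaries reach R and the predictive models, is essentially a group-by. Here is a minimal sketch with a hypothetical record schema; in production this would be a SQL aggregation inside Vertica, not application code.

```python
from collections import defaultdict

def hourly_cell_kpis(records):
    """Roll raw call records up into per-(cell, hour) KPI aggregates so
    downstream analytics never touches raw rows.

    records: iterable of dicts with 'cell', 'hour', 'failed' keys
    (invented schema for illustration).
    """
    agg = defaultdict(lambda: {"calls": 0, "failures": 0})
    for r in records:
        bucket = agg[(r["cell"], r["hour"])]
        bucket["calls"] += 1
        bucket["failures"] += r["failed"]
    # KPI per bucket: percent successful calls
    return {
        key: 100.0 * (b["calls"] - b["failures"]) / b["calls"]
        for key, b in agg.items()
    }

records = [
    {"cell": "A", "hour": 9, "failed": True},
    {"cell": "A", "hour": 9, "failed": True},
    {"cell": "A", "hour": 9, "failed": False},
    {"cell": "B", "hour": 9, "failed": False},
]
kpis = hourly_cell_kpis(records)
# Hotspot detection: flag buckets whose success rate falls below a threshold
hotspots = [key for key, pct in kpis.items() if pct < 50.0]
print(kpis, hotspots)
```

The compression is the point: 50 billion rows a day become a few million (cell, hour) buckets, which is a tractable input for hotspot detection and predictive models.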
And those would be abstracted aggregates, without privacy data, that could be pushed up to the cloud.

Telecom companies gather an incredible amount of data, it seems to me. Every bit that goes over the air or over the wire, they can technically capture. What do you see these companies doing that's interesting, new things that telecom companies are doing with data? We see a lot of interest in getting into advertising now. You know, Verizon wants to be a media company. Are you seeing whole new business models emerging out of all this data?

Yes. Our focus has been network quality, customer experience, understanding impacted customers and impacted revenue. But there's a whole marketing angle, so we see lots and lots of requests coming in from a marketing point of view. A key question there, again, comes down to privacy: opt-in, opt-out. But let's say you take that off the table. You can analyze the data and determine what over-the-top services subscribers are using and what the performance of those is, as well as categorizing each type of subscriber. So there's a wealth of information from a marketing point of view available in the data. Because when it comes down to it, every website you go to, again, we're looking at that from a network performance point of view, but every website you go to can be captured and analyzed.

How are people making the business cases? Is it mostly focused on churn reduction? You mentioned NPS, so take us through the business case.

Yeah, sure. Let me give you an example of one of our use cases, a deployment we did for a major carrier here in the U.S. They were very focused on quality. They had what they call a tier three center. You would call in and get a customer care agent, and if there was any issue with your device setup, they would walk you through it. But if it happened to be network quality related...
...you were dropping calls, your data was slow, they would typically create a ticket, and that would go over to the tier three center. And they were getting about a million of these a month. With the data silos, not having it all in one place and processed in one place, they'd have to look through five to 10 systems to try to resolve your individual problem. The time-and-motion study showed that would take an hour, hour and a half. So we put our solution in. We pull all the data together, from 30 different systems in this case, so that as soon as the ticket is created, we're already analyzing it. When they pull up the ticket, the information is there as far as what the ticket's about. We analyze every record, flag the ones that failed, and then look across resources. A good example: if you're having a problem at a particular cell site, are all the other subscribers at that cell site having a problem? If they are, it's probably cell-site related. If it's only you, it's probably related to something you're doing, something different. We have a rules engine that processes this and makes recommendations on how to resolve it. So it took it from an hour, hour and a half, down to five or 10 minutes. And you can multiply that out a million times, with the cost of labor and all that. It's saving 30 million plus a year.

That's telephone numbers, right? So that's a productivity play right there.

That's a productivity play, yeah, and customer satisfaction. Another example is device returns. That's big. First, for an operator to take back a device, it costs anywhere from $100 to $200 for reverse logistics and getting a new device out to you. We're working with operators who do a million device returns a month. You multiply that out, and it turns out anywhere from 20 to 40% of those are what's called "no trouble found." It wasn't an issue with the device.
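The triage rule Tom gives, "is the whole cell site failing, or just this subscriber?", is the kind of check a rules engine encodes. A toy version might look like this; the failure threshold and record shape are invented for illustration.

```python
def diagnose(ticket_subscriber, cell_site, failure_log,
             site_failure_threshold=0.5):
    """Rules-engine check: if most subscribers at the same cell site are
    failing, blame the site; otherwise suspect the individual subscriber.

    failure_log: list of (subscriber_id, cell_site, failed) tuples.
    """
    at_site = [rec for rec in failure_log if rec[1] == cell_site]
    subscribers = {sub for sub, _, _ in at_site}
    if not subscribers:
        return "no data for this cell site"
    failing = {sub for sub, _, failed in at_site if failed}
    if len(failing) / len(subscribers) >= site_failure_threshold:
        return "cell-site issue: escalate to network operations"
    if ticket_subscriber in failing:
        return "subscriber-specific issue: check device/configuration"
    return "no failures recorded for this subscriber"

log = [
    ("alice", "site-42", True),
    ("bob", "site-42", False),
    ("carol", "site-42", False),
]
print(diagnose("alice", "site-42", log))
```

With only one of three subscribers failing, the rule points at the individual subscriber's device or configuration; had most subscribers at site-42 been failing, it would escalate the site itself. Encoding that decision up front is what collapses the hour-and-a-half manual search across systems into minutes.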
It's something that potentially was a network issue, but you're a customer: you're dropping calls, so you're just going to yell until your device is replaced. So we actually use R and predictive analytics with Vertica to predict and make recommendations on whether that's a valid return. Either it looks like a device-related issue, so you should return it, or it's not, and it can be shown to be a network issue, to reduce device returns. And you can see the economics there. If you can catch half of those no-trouble-found returns, it's tens of millions of dollars of savings.

Right, okay. So let's talk about the case study, if you will, of how you work with Vertica, and paint a picture of what you guys are doing internally. What does your stack look like? What does the solution look like? How are you using Vertica in your delivery?

Yeah, so we have a three-tiered stack. The first tier is where we ingest the data. Before we added Vertica as one of our options, we had another database vendor we were working with, the large one beginning with O. And one of the issues was the ability to load the data in quickly and aggregate it. There are two keys for our customers. One is latency: when the network event happens, when you drop your call, how long does it take before we recognize that and can make recommendations on how to resolve it? That's latency. The other is, when we make the recommendations, how quickly, when you go and ask about phone number one-two-three, does that analysis come back on the screen? Your response time. The goal on latency is to get as near real time as possible. And on responsiveness, if you're a customer care rep with somebody on the phone, you want that information up there in under five seconds.
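The no-trouble-found screening described above, deciding whether a return is really a device problem or a network problem, might reduce to a check like the following. The thresholds and input fields are invented for illustration; the real system uses R-based predictive models running against Vertica, not fixed cutoffs.

```python
def classify_return(device_error_rate, network_drop_rate,
                    device_threshold=0.05, network_threshold=0.02):
    """Recommend whether a device return is likely valid.

    device_error_rate: share of this subscriber's failures traced to
        the handset itself (hypothetical input)
    network_drop_rate: drop rate at the cell sites this subscriber
        used (hypothetical input)
    """
    if device_error_rate > device_threshold:
        return "valid return: device-related issue"
    if network_drop_rate > network_threshold:
        return "no trouble found likely: network issue, avoid the return"
    return "inconclusive: route to tier-three diagnostics"

print(classify_return(0.10, 0.00))
print(classify_return(0.00, 0.08))
```

At $100 to $200 per reverse-logistics cycle and a million returns a month, diverting even a fraction of the 20 to 40% no-trouble-found cases is where the tens of millions in savings come from.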
So one of the keys Vertica enabled for us, from a performance point of view, is to ingest that data, get it into the database quickly, and reduce that latency. We pull the data in and ingest it, but the key is, it's like a needle in a haystack. The 50 billion records aren't going to tell you anything on their own. It's really the aggregations, the hotspots, all of that. And that's where Vertica's live aggregate projections came into play. It allowed us to quickly aggregate the data, so you can aggregate into KPIs, KQIs, and QoE and then find the hotspots. Because obviously, the quicker you find that hotspot, and this is proactive, the hotspot that might be affecting five, 10, 100 different subscribers, the quicker you resolve it, and the quicker you avoid calls from everybody else who would have been impacted had you not resolved it. And the next thing would be predictive. That's where we're moving in the future: being able to predict your problem before it actually occurs to anybody, and resolve it.

Like anticipatory action.

Exactly.

Good, okay. What do you have going on at the show here? Things going on at the expo, talking to customers?

Yeah, we're here talking to customers. We noticed there are a lot of service providers here, and we have a team of engineers here taking in the new capabilities, from a future point of view, predictive, Haven, things along those lines, to be able to augment our capabilities with those Vertica features.

Excellent, all right. Tom, we'll leave it there. Thanks very much for coming on theCUBE.

Oh, thank you.

Best of luck going forward. All right, keep it right there, everybody. We'll be back with our next guest right after this short break. This is theCUBE. We're live from Boston. We're right back.