From London, England, it's theCUBE, covering Discover 2016 London. Brought to you by Hewlett Packard Enterprise. Now, here are your hosts, Dave Vellante and Paul Gillin.

Back to ExCeL London, everybody. This is HPE Discover 2016, and this is theCUBE, the worldwide leader in live tech coverage. Paul Gillin and I are here. We're extracting the signal from the noise at Discover. Kate O'Neill joins us. She is the director of marketing for the data center infrastructure group at Hewlett Packard Enterprise. And Antoine Chambille is the head of R&D at ActiveViam. Nice to see you, folks. Thanks for coming on theCUBE. Thank you for having us. Good morning.

So we're talking about a subject that we've discussed at Wikibon and on theCUBE for many years now: the intersection of transaction processing and analytics. In-memory architectures are allowing that to happen, allowing analysis to be done in near real time, and it's really changing the way organizations approach how they use data and how they make decisions. So Kate, that's sort of my intro, but set it up for us in terms of what's happening in your world with regard to that trend.

Yeah, so in the mission-critical solution business, we are typically focused on two primary workloads: the business transaction workloads and the analytic workloads. For many years they've been vital to a company's success, clearly, and they've always been regarded as having to be always on, always available, and scalable to handle the volume of transactions that go on in any enterprise. So we've always developed solutions to handle the availability and scale for those types of workloads. But as we've moved into this new world of digital transformation, a world that's data-intensive and time-sensitive, there's a necessity for real-time business. Whereas before, the transactional systems and the analytic systems were somewhat separate, and the analytics could be done in, let's say, days and weeks, with that insight shared with the business, who could then take action on it and weave it into those transaction streams, there's no longer that luxury of time. So what we've started to see is the convergence of transactions and analytics: analyzing the transactional data in real time and taking action on that data, in real time, in those transaction streams.

But before we go there, Antoine, just set up ActiveViam for us. Give us a background on your company, who you guys are and what you do.

Yeah, sure. So I'm head of R&D for ActiveViam, and we are a global software vendor. We deliver operational analytics solutions to business users who need to make timely decisions on large amounts of moving data. Our product, ActivePivot, is an in-memory analytical platform, and in the financial industry it has become a can't-do-without tool. The majority of the large investment banks use it today on their trading desks, for market and credit risk, for liquidity management. And this trend is only intensifying with the arrival of new financial regulations such as FRTB, the Fundamental Review of the Trading Book, a worldwide new regulation. It's a new framework for banks to calculate and report their risks and their capital requirement, and the new methods will require them to process amounts of data tens and tens of times larger than before. So it's become a real big data problem.
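For a sense of what those calculations look like: the headline FRTB risk measure is an expected shortfall, the average of the worst tail of simulated P&L scenarios, computed across many desks, risk factors, and liquidity horizons, which is why the data volumes multiply so quickly. Below is a minimal sketch of that kind of calculation on a single P&L vector, in plain Java (the platform ActivePivot is built on); the class, names, and figures are hypothetical and not ActiveViam's API.

```java
import java.util.Arrays;

// Illustrative only: a tiny expected-shortfall calculation of the kind FRTB
// mandates, computed directly from raw simulated P&L scenarios.
public class ExpectedShortfallSketch {

    // Expected shortfall at the given confidence level: the average of the
    // worst (1 - confidence) fraction of the P&L scenarios.
    static double expectedShortfall(double[] pnlScenarios, double confidence) {
        double[] sorted = pnlScenarios.clone();
        Arrays.sort(sorted);                       // worst losses come first
        int tail = Math.max(1, (int) Math.floor(sorted.length * (1.0 - confidence)));
        double sum = 0.0;
        for (int i = 0; i < tail; i++) {
            sum += sorted[i];
        }
        return -(sum / tail);                      // report as a positive loss
    }

    public static void main(String[] args) {
        // One trade's P&L across a handful of simulated market scenarios.
        double[] pnl = {-12.5, 3.1, -0.7, 8.4, -25.0, 1.2, -4.9, 6.6};
        System.out.printf("ES(97.5%%) = %.2f%n", expectedShortfall(pnl, 0.975));
    }
}
```

The hard part Antoine goes on to describe is not the arithmetic itself but doing this kind of aggregation interactively over millions of such vectors.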
Over the last two years there has been so much talk about Spark as an in-memory analytics platform. Spark is not sufficient for what your customers need to do, is that right? You've found a corner of the market that open source is not addressing.

Yes, I would tend to agree. That's the feedback I get from my customers in this area. For FRTB, if we stay on this example, banks have started to leverage those big data tools, Hadoop and Spark, with some success for the storage, and for data preparation, maybe. But when it comes to the analytics and the calculations themselves, it's been very disappointing to the business users. Some of them even say they feel sent 20 years back in time.

Why? Too slow?

I'm sure they exaggerate, but it's not the speed or the slowness, it's the interactivity. Batch-oriented technologies go more and more against the aspirations of the business users. For instance, even in the presence of very complex, non-linear financial calculations, a trader would like to calculate on the fly the impact of a potential deal on the capital requirement of the bank. Right now, not tomorrow, not after tomorrow's batch. Or another example would be the risk controllers in an investment bank. They like to play with the parameters of the calculations, to see how it goes, to see how it moves: changing the liquidity horizon of a calculation, for instance, or rearranging a booking hierarchy to see if they can optimize capital allocation. And that requires redoing the calculation again and again, interactively, in the hands of the end users, which cannot be done fast enough, even with Spark.

So Kate, you were talking about those two worlds coming together. The practice historically has been: I've got my transaction system, and I have my data warehouse somewhere else. Maybe I have a fast InfiniBand pipe in between, but that's only recent. Typically I make a bunch of copies, put them into the data warehouse, and I'm working on, I don't know, N-months-old data. And then I've got a group of people that I have to go to and beg, borrow, or pray to get some data out. So is that changing, and how fast is it changing?

Well, it's changing through necessity, because businesses can only be successful, can only compete, if they can start to behave in real time. So if you think of a retailer, they're looking at a promotion that they're running online or in store. They want to assess, in real time, the success or failure of that promotion. It could be a loss leader: they could be putting out a promotion thinking it's going to drive attach of other products, but they could find that during that promotional time period it's actually not performing as they expected, so they have to adjust accordingly. Or there could be inventory that they were trying to ship to certain stores that they might want to redirect based on other demand. So I think there's just a business necessity now for these companies to be able to integrate their analytical data with their transactional data.

Now, HP is in the process of getting out of the analytics business. What special value do you bring to this market?

Well, we're not getting out of the analytics business, because that's what our customers require. How we satisfy their needs for analytics will be based on the solutions we can bring to solve that problem, and on partnering with others who can solve it with us. I think you've heard that through Discover.
Meg is very big on the point that we're a partnering company. There are things we do very well, and there are other things that other companies do that we want to partner on.

You're getting rid of Vertica, you're getting rid of the analytics business that you have, so everything will be done with partners. Is there something special in the hardware that you will bring to bear that will make HP a compelling solution?

Yeah, because in our space, the transaction workload space that we're focused on, think of your more traditional workloads like SAP, Oracle, SQL, that scale of workload. Because those workloads carry such a huge volume of data, both transactional data and analytic data, we look at how we can satisfy the need for that volume. To a large extent we can satisfy it through clustering of solutions. But in our space, and in the SAP space specifically, there's a requirement for scale-up technology and large in-memory footprints. Just the way those workloads are designed, the terabytes of data needed, and the performance and efficiency needed out of those environments, means you need a degree of specialized solution design to handle it. That's why in the mission-critical solution business we're focused on scale-up technology, large in-memory solutions, and fast fabrics. So some of The Machine technology, such as photonics and non-volatile memory, will start to integrate into our scale-up mission-critical offerings.

And the product line that you would lead with is what?

Superdome X is our flagship product for these solutions.

And so, to answer Paul's question, ActiveViam would be the way in which you would bring analytics to that world, and it probably wouldn't have been Vertica anyway.

Yeah, maybe it would, maybe it wouldn't, but probably not.

Right, right. And so this is a very specialized solution. Antoine, maybe you could talk a little bit about the architecture that you bring. What's the secret sauce of ActiveViam? Why is it a secret weapon?

Yeah, exactly. Well, really the problem we wanted to address from the beginning was the one we talked about: the ability to do everything on the fly from the raw data, which is the ultimate solution for real-time analytics and interactive calculation. For that, we benefited from the power of in-memory computing, which has been rising over the last 10 years, and of course multi-core processing. So our product, ActivePivot, is a fusion of an in-memory analytical database and a calculation engine, both fused together. It's also built on the Java platform, which makes it very easy for developers to roll out custom calculations that run at the full speed of the engine.

Okay, and so, please go ahead, sorry.

Well, I wonder, why are you at this event? What does HPE bring to your solution that nobody else can?

Yeah, I could cite the example of one of our latest customers, a large investment bank in France. They were already using ActivePivot in many places, trading desks, market risk, liquidity, with fairly large projects of one, two, three terabytes of memory, maybe, but on standard servers. And two years ago they came to us and said, you know those FRTB calculations coming in two years? They will require 10 or 20 terabytes of data. How are we going to keep our interactive model? How are you going to help us? And through those two years, we've looked at several options together.
We've looked at Hadoop and Spark. We've looked at scaling out ActivePivot on a cluster. And in the end they also evaluated large-memory servers, and that was their choice: of course, they chose the Superdome X from HPE. And this partnership with HPE is what allowed us to come up with a solution, up and running for them, ready for them to start on this new challenge.

And SAP HANA has been a platform that you're offering to customers as part of this solution. Is that correct?

Yeah, so what we're noticing is kind of a mission-critical renaissance, if you like. These classes of systems, these scale-up in-memory systems, are very much designed for markets like SAP HANA, just because of where that solution offering is going. We've worked very closely with SAP to provide a system that can scale to the terabytes of data that's really needed to handle that. But interestingly, it's exposed us to other adjacencies, and I think that's what ActiveViam is talking about. It's not just an SAP-type environment that needs this scale of analytics and transactions and memory footprint. As Antoine was saying, in financial investment banking, with the trading books and the regulatory requirements, they've just been handed an explosive amount of data that they have to handle, and they're looking at solutions such as Superdome X for its memory footprint to handle it.

So the reason I was asking: there's an interesting subtext here, which is that a lot of what I described before is you've got the transaction system and the data warehouse, and a lot of times that data warehouse in financial services is Oracle. And SAP HANA is obviously sort of the anti-Oracle. So what are you seeing in terms of the dynamic there? Is it correct that there's a lot of Oracle, obviously, in financial services, and how is that shaping up? How's the battle going?

Well, there will still be a lot of Oracle. It's mostly used for books and records, I would say, the transactional recording of transactions. But yes, when it comes to the calculation and the analytics, new solutions are shaping the future, I would say. And you were talking about SAP HANA. What's the difference between ActiveViam and SAP HANA, you may ask? It's the ability to do the actual calculations. Not just running SQL queries faster, but implementing the actual calculation that will give you a value at risk, a potential future exposure, or the capital requirement for FRTB. That's what's not addressed yet by traditional relational databases, in-memory or not.

You, HP, are making some noise at this conference for the first time about The Machine, revealing some details about the architecture, and about Silicon Graphics high-performance computing. Clearly this is a strategic area for them. As an R&D person, what excites you about The Machine?

Well, I spent maybe three hours talking with the engineers at the Labs stand, of course. What's exciting is two things. First, it will make it possible to run even bigger workloads in memory, and in a more cost-effective way, I'd say. And also what The Machine is bringing, or at least that's what I understood of it, is the flexibility with which you will be able to instantiate your applications. It looks like a big pool of memory and processing where you can start applications on demand and consolidate your in-memory workloads, without having your applications bound to a server anymore. So it certainly is very exciting.
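One way to picture the "actual calculations" Antoine contrasts with faster SQL, and the interactive what-if he described earlier (changing a liquidity horizon and recomputing immediately), is a parameterised recalculation over figures already held in memory. Here is a rough sketch, again in Java with hypothetical names and invented desk figures; the square-root scaling is a simplified version of the FRTB convention for rescaling a 10-day expected shortfall to a longer liquidity horizon.

```java
import java.util.Map;

// Illustrative only: the "what-if" pattern described in the conversation.
// Keep the raw figures in memory and recompute the aggregate the moment a
// user changes a parameter, instead of waiting for another batch run.
public class WhatIfRecalcSketch {

    // Base 10-day expected shortfall per desk, already computed and held in memory.
    static final Map<String, Double> baseEsByDesk = Map.of(
            "rates", 4.2,
            "credit", 7.9,
            "fx", 2.3);

    // Rescale and re-aggregate on the fly for whatever horizon the user picks.
    static double capitalCharge(int liquidityHorizonDays) {
        double scale = Math.sqrt(liquidityHorizonDays / 10.0);
        return baseEsByDesk.values().stream()
                .mapToDouble(es -> es * scale)
                .sum();
    }

    public static void main(String[] args) {
        // A risk controller "plays with the parameters" interactively.
        for (int horizon : new int[]{10, 20, 60, 120}) {
            System.out.printf("LH = %3d days -> charge = %.2f%n",
                    horizon, capitalCharge(horizon));
        }
    }
}
```

In a real deployment the aggregation would run inside the analytical engine rather than a toy loop, but the point stands: because the underlying figures stay in memory, a new parameter means an immediate recomputation rather than waiting for the next batch.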
What order of magnitude improvement in processing speed do you expect to see from this pooled-memory, photonics architecture?

I would have to run it in the lab myself, but I already know that you can expect a factor of 10 in the size of the data that you could put in memory while remaining cost-effective, that's for sure.

We are out of time, but I have to do a little more research here, because the Fundamental Review of the Trading Book, as a regulation, you're saying it's global, so it's critical, it's affecting everyone, and it's going to be on the minds of folks. But it seems to me that if this is real time, the way in which you protect data is going to have to change. Sticking a purpose-built backup appliance with traditional backup methodologies in front of something that's real-time transactions and analytics isn't going to cut it. So first of all, is that a correct assertion, that the way in which you protect the data is going to change? Is that true, or have we not thought that through yet?

You know, I think all the standard ways we've done things need to be looked at. Whenever you find new ways of doing things, you open up and expose other risks. So I think all of those factors need to be taken into consideration.

Okay, all right, great. I have some thoughts on that. Maybe we could talk offline. All right, we've got to go. Thanks very much. It was really interesting, so I appreciate your time.

All right, keep it right there, everybody. Paul Gillin and I will be back with our next guest. We're live from London. This is theCUBE. This is HPE Discover 2016.