from Denver, Colorado. It's theCUBE, covering Supercomputing 17, brought to you by Intel.

Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're in Denver, Colorado at the Supercomputing conference 2017. I think there are 12,000 people. It's our first time being here, and it's pretty amazing: a lot of academics, a lot of conversations about space and genomes and heavy-lifting computing. It's fun to be here. And we're really excited about our next guest, Karsten Rohner. He's the CEO of Swarm64. So, Karsten, great to see you.

Yeah, thank you very much for this opportunity.

Absolutely. So for people who aren't familiar with Swarm64, give us the quick high-level view.

Yeah, well, in a nutshell, Swarm64 accelerates relational databases. We allow them to ingest data much faster, 50 times faster than a standard relational database, and we can then query that data 10 to 20 times faster. And that is very important for many new applications in IoT, banking, finance, and so on.

So you're in a good space. So beyond just general better performance, faster, faster, faster, we're seeing all this movement now toward real-time analytics and real-time applications, which is only going to get crazier with the internet of things. So how do you do this? Where do you do this? What are some examples you can share with us?

Yeah, so our solution is a combination of a software wrapper that attaches to existing databases and, inside, an FPGA from Intel, the Arria 10. We combine both so that they plug into standard interfaces of existing databases: the foreign data wrapper in Postgres, the storage engine in MySQL and MariaDB, and so on. And with that mechanism, we ensure that the application doesn't see us. To the application, there's just a faster database; we're invisible, and the functionality of the database remains what it was. That's the net of what we're doing.
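The standard interfaces Karsten mentions, such as Postgres's foreign data wrapper, are exactly the kind of hook that lets a faster backend hide behind an unchanged database API. Here is a minimal, hypothetical sketch in Python of that adapter pattern; the class name, method signature, and table data are all invented for illustration (real Postgres FDWs are implemented against the C API or the Multicorn extension), and a plain in-memory list stands in for the accelerator-backed store:

```python
# Illustrative sketch of a foreign-data-wrapper-style adapter: the
# caller iterates rows through a standard interface and never sees
# what backs the storage. All names here are hypothetical.

class AcceleratedTableWrapper:
    """Stand-in for an FDW class; a dict-backed list plays the
    role of the accelerated store."""

    def __init__(self, rows):
        # A real wrapper would open a connection to the backend;
        # here we just hold rows in memory.
        self._rows = rows

    def execute(self, quals=None, columns=None):
        """Yield rows matching simple equality predicates,
        optionally projected to the requested columns."""
        for row in self._rows:
            if quals and not all(row.get(k) == v for k, v in quals.items()):
                continue
            yield {c: row[c] for c in columns} if columns else dict(row)

# The application just "queries a table"; the wrapper is invisible.
wrapper = AcceleratedTableWrapper([
    {"trade_id": 1, "symbol": "ACME", "price": 101.5},
    {"trade_id": 2, "symbol": "INTC", "price": 44.2},
])
result = list(wrapper.execute(quals={"symbol": "INTC"}, columns=["price"]))
print(result)  # → [{'price': 44.2}]
```

Because the interface is the database's own extension point, the application's SQL and the rest of its code base stay untouched, which is the point Karsten makes next about entrenched systems.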
So that's so important, because we talked a little bit offline, and you said you had a banking customer that has every database that's ever been created, right? They've been buying them all along. So they've got embedded systems. You can't just rip and replace; you have to work with existing infrastructure. At the same time, they want to go faster.

Yeah, absolutely right. And there's a huge code base which has been verified and debugged. In banking, it's also about compliance, so you can't just rip out your old code base and do something new, because you would have to go through compliance again. Therefore, customers really, really want their existing databases faster.

Right. Now the other interesting part, and we've talked to some of the other Intel execs about this, is this hybrid hardware-software combination in the FPGA, and really opening up an ecosystem for people to build software-based solutions that leverage that combination of hardware and software power. Where do you see that evolving? How is that going to help your company?

Yeah, we are a little bit unique in that we hide the FPGA from the user rather than exposing it; many applications actually do expose it. But apart from that, we benefit a lot from what Intel is doing. Intel provides the entire environment, including virtualization, all the things that help us get into cloud service providers or into proprietary virtualized environments. So it is a very close cooperation with Intel that enables us to do what we're doing.

Okay, and I'm curious, because you spend a lot of time with customers, a lot of legacy customers, you said. As they see the challenges of this new real-time environment, what are some of their concerns?
What are some of the things they're excited they can do now with real-time versus batch and a data lake? And I think it's always funny, right? We used to make decisions based on stuff that happened in the past, which is what we were querying. Now the desire is really to act on stuff that's happening now. It's a fundamentally different way to address the problem.

Yeah, absolutely. And a very key element of our solution is that we can not only ingest these very large amounts of data; other solutions can do that too, massively parallel solutions, streaming solutions, you know them all. The difference is that we can make that data available within less than 10 microseconds.

So... 10 microseconds?

A data set arrives, and within less than 10 microseconds that data set is part of the next query. And that is a game changer. It allows you to do closed-loop processing of data in machine-to-machine environments, for autonomous applications and all those cases where you just can't wait. If your car's driving down the street, you'd better know what just happened and be able to react to it, as an example. It could be a robot in a plant, or anything where you really want to react immediately.

I'm curious about the kind of value unlocking that provides to those old applications working with what they think is an old database. Now you said, when you accelerate it, to the application it looks just the same as it looked before. How does that change the performance of those applications? I would imagine there's a whole other layer of value unlocked in those entrenched applications with this fast data.

Yeah, that is actually true, and it's on a business level. The applications enable customers to do things they were not capable of doing before. Look, for example, at finance.
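The closed-loop pattern described above, where a newly ingested row is visible to the very next query, can be sketched with a toy in-memory store. Everything here is invented for illustration: the class, the sensor data, and the timing, which for a plain Python list will be nowhere near the in-engine microsecond figures quoted in the interview.

```python
import time

# Toy closed-loop sketch: a row becomes queryable the moment it is
# ingested, so the very next query already reflects it. The real
# system does this inside the database engine at microsecond scale.

class RealtimeStore:
    def __init__(self):
        self._rows = []

    def ingest(self, row):
        self._rows.append(row)

    def query_max(self, column):
        return max(r[column] for r in self._rows)

store = RealtimeStore()
store.ingest({"sensor": "robot-arm", "temp": 61})
store.ingest({"sensor": "robot-arm", "temp": 63})

t0 = time.perf_counter_ns()
store.ingest({"sensor": "robot-arm", "temp": 97})  # new reading arrives...
hottest = store.query_max("temp")                  # ...and the next query sees it
elapsed_us = (time.perf_counter_ns() - t0) / 1000

print(hottest)  # → 97
print(f"ingest-to-query round trip: {elapsed_us:.1f} µs")
```

The point of the loop is that there is no batch boundary between ingest and query: the control decision (here, reacting to the 97-degree spike) is made on data that arrived an instant earlier.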
If you can analyze the market data much quicker, if you can analyze past trades much quicker, then obviously you're generating value for the firm, because you can react to market trends more accurately, you can mirror them in a tighter fashion, and you can reduce the margin of error with which you're estimating what's happening. And all of that is money, pure money in the bank account of the customers, so to speak.

And the other big trend we talked about, besides faster, is sampling versus not sampling. In the old days, we sampled old data and made decisions. Now we don't want to sample; we want all of the data, and we want to make decisions on all of the data. So again, that's opening up another level of application performance, because it's all the data, not a sample.

For sure, because before, you were aggregating, and when you aggregate, you reduce the amount of information available. Now, of course, when you have the full set of information available, your decision-making is just so much smarter. And that's what we're enabling.

And it's funny, because in finance, which you've mentioned a couple of times, they've been doing that forever, right? The value of a few units of time, however small, is tremendous. But now we're seeing other industries as well realize that the value of real-time streaming data versus a sample of old data really opens up new types of opportunities for them.

Absolutely, yes. Finance, as I mentioned in the example, but then also IoT, machine-to-machine communication, everything which is real-time: logging, data logging, security and network monitoring. If you want to really understand what's flowing through your network, whether there's anything malicious, whether there's an actor on your network that should not be there, and you want to react quickly enough to prevent that bad actor from doing anything to your data, this is where we come in.

Right, and security's so big, right?
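The point about aggregation reducing information is easy to demonstrate: a pre-aggregated average smooths away exactly the kind of spike that a query over the full data would catch immediately. The readings below are made up purely for illustration.

```python
# Raw per-second readings with one anomalous spike.
raw = [50, 51, 49, 50, 52, 480, 51, 50, 49, 50]

# Pre-aggregated view: one average per 5-reading window, the kind of
# rollup a batch pipeline might store instead of the raw data.
aggregated = [sum(raw[i:i + 5]) / 5 for i in range(0, len(raw), 5)]

print(aggregated)       # → [50.4, 136.0] -- the spike is blurred into a mild bump
print(max(raw))         # → 480 -- the full data pinpoints the anomaly
print(max(aggregated))  # → 136.0 -- the aggregate alone badly understates it
```

Once the raw readings are collapsed into window averages, no query against the aggregate can recover the 480 outlier, which is the information loss Jeff and Karsten are describing.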
And it's everywhere, especially with IoT and machine learning.

Absolutely.

All right, of course, I'm going to put you on the spot. It's November 2017, hard to believe. As you look forward to 2018, what are some of your priorities? If we're standing here next year at Supercomputing 2018, what are we going to be talking about?

Okay, what we're really going to talk about is this: right now we're accelerating single-server solutions, and we are working very, very hard on massively parallel systems while retaining the real-time component. So we will not only accelerate a single server; by allowing horizontal scaling, we will bring a completely new level of analytics performance to customers. That's what I'll be happy to talk to you about next year.

All right, we'll see you next year. I think it's in Texas as well. Wonderful. Yeah, great. Thanks for stopping by.

Thank you.

He's Karsten, I'm Jeff. You're watching theCUBE from Supercomputing 2017. Thanks for watching.