From Denver, Colorado, it's theCUBE. Covering Supercomputing 17, brought to you by Intel. Hey, welcome back everybody. Jeff Frick here with theCUBE. We're at Denver, Colorado, Supercomputing 2017. 12,000 people, our first trip to the show. We've been trying to come for a while. It's pretty amazing. A lot of heavy science in terms of the keynotes, all about space and looking into brain mapping. It's heavy lifting, academics all around. We're excited to have our next guest, who's an expert all about speed, and that's John Lockwood. He's the CEO of Algo-Logic Systems. First off, John, great to see you. Yeah, thanks, Jeff. Glad to be here. Absolutely. So for folks that aren't familiar with the company, give them kind of the quick overview of Algo-Logic. Yes, Algo-Logic puts algorithms into logic. Our main focus is taking things that are typically done in software and putting them into FPGAs, and by doing that, we make them go faster. So it's a pretty interesting phenomenon. We've heard a lot from some of the Intel execs about kind of the software overlay that now lets, I guess, a broader ecosystem of programmers program into hardware, but still leverage the speed that you get in hardware. So it's a pretty interesting combination to get those latencies down, down, down. Right, right. I mean, Intel certainly made a shift to heterogeneous compute. And so in this heterogeneous world, we've got software running on Xeons and Xeon Phis, but we've also got the need to do compute in more than just the traditional microprocessor. With the acquisition of Altera, Intel customers can now use FPGAs to get that benefit in speed. At Algo-Logic, we typically provide applications with software APIs, so it makes it really easy for end customers to deploy FPGAs into their data center, into their host, into their network, and start using them right away.
And you said one of your big customer sets is financial services and trading desks. So low latency there is critical, with millions and millions, if not billions, of dollars at stake. Right, so at Algo-Logic, we have a whole product line of high-frequency trading systems. Our tick-to-trade system is unique in the fact that it has a sub-microsecond trading latency. This means going from the market data that comes in, for example, on CME for options and futures trading, to the time that we can place a FIX order back out to the market, all of that happens in an FPGA, and that happens in under a microsecond, so under a millionth of a second, and that beats every other software system that's being used. Right, which is a game-changer, right? Wins or losses can be made on those time frames. It's become a must-have: if you're trading on Wall Street or trading in Chicago and you're not trading with an FPGA, you're trading at a severe disadvantage. Right, right. And so we make a product that enables all the trading firms to be playing on a fair, level playing field against the big firms. Right, so it's interesting because, you know, with the adoption of flash and some of these other speed-accelerator technologies over the last several years, people are getting accustomed to the fact that speed is better, but often it was confined to these kinds of high-value applications like financial services, and not really proliferating to a broader set of applications. I wonder if you're seeing that change a little bit, where people are seeing the benefits of real time and speed beyond kind of the classic, you know, high-value applications. Well, I think the big change that's happened is that it's become machine to machine now. Humans, for example, in trading are not part of the loop anymore. And so it's not a matter of, am I faster than another person? It's, am I faster than the other person's machine?
Right. So this notion of having compute that goes fast has suddenly become dramatically more important, because everything now is machine versus machine. If you're an ad-tech advertiser, how quickly you can run an auction to place an ad matters. And if you can get a higher-value ad placed because you're able to do a couple of rounds of an auction, that's worth a lot. So, again, with Algo-Logic, we make things go faster, and that time benefit means that, all else being equal, you're the first to come to a decision. Right, right. And then, of course, machine to machine obviously brings up the hottest topic that everybody was talking about: autonomous vehicles and networked autonomous vehicles, and just the whole IoT space with compute moving out to the edge. So machine-to-machine systems are only growing in importance and in their percentage of total compute consumption. That's right, yeah. So last year at Supercomputing, we demonstrated bringing in real-time data from a drone, doing real-time data collection and processing with our key-value store. This year, we have a machine learning application, a Markov decision process, where we show that we can scale out a machine learning process and teach cars how to drive in a few minutes. Teach them how to drive in a few minutes. Right. So that's their learning. That's not somebody programming the commands. They're actually going through a process of learning. Right, well, the key-value store is just a part of this. We're just the part of the system that makes the scale-out run well in a data center. Right. And we're still running the Markov decision process and the simulations in software. So we have a couple of Xeon servers that we brought with us to do the machine learning. In a data center, that would scale out to dozens of racks.
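[Editor's aside: the scale-out role of a key-value store described above can be sketched in a toy form. The class, keys, and blending rule here are invented for illustration; the real Algo-Logic store is a network-attached, FPGA-accelerated system, not a Python dict.]

```python
# Hypothetical sketch: many simulation workers sharing learned state
# through a key-value store, so training scales out across machines.
# A plain in-process dict stands in for the networked store.

class KeyValueStore:
    """Toy in-memory KVS standing in for a network-attached store."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=0.0):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value

def merge_update(store, key, local_value, weight=0.5):
    """Blend a worker's locally computed estimate into the shared value."""
    blended = (1 - weight) * store.get(key) + weight * local_value
    store.put(key, blended)
    return blended

store = KeyValueStore()
# Two workers publish estimates for the same state-action pair.
merge_update(store, ("lane_keep", "steer_left"), 1.0)
merge_update(store, ("lane_keep", "steer_left"), 0.0)
```

The point of the pattern is that workers never talk to each other directly; they only read and write shared keys, which is what lets the simulation fan out to many machines.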
But even with a few machines, for simple highway driving, what we can show is we start off with the systems untrained, and in the Markov decision process we reward the final state of not having accidents. So at first the cars drive and they're bouncing into each other. It's like bumper cars. But within a few minutes, and after about 15 million simulations, which can be run that quickly, the cars start driving better than humans. And I think that's a really phenomenal step: the fact that you're able to get to a point where you can train a system how to drive and give it 15 man-years of experience in a matter of minutes with scaled-out compute systems. Which is pretty amazing, because then you can put in new variables, right? You can change that training and modify it over time as conditions change. Throw in snow, or throw in urban environments and other things. Absolutely right. And so we're not pretending that the machine learning application we're showing here is an end-all solution. But as you bring in other factors, like pedestrians, deer, other cars running different algorithms, or crazy drivers, you want to expose the system to those conditions as well. One of the questions that came up to us was, so what machine learning application are you running? We're showing all 25 cars running one machine-learned application, and that's incrementally getting better as they learn to drive. But we could also have every car running a different machine learning application and see how different AIs interact with each other. And I think that's what you're going to see on the highway as we have more self-driving cars running different algorithms. We have to make sure they all play nice with each other. But it's really a different way of looking at the world, right? Using machine learning, machine to machine, versus a single person or a team of people writing a piece of software to instruct something to do something.
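[Editor's aside: the training loop described above — reward the absence of accidents, run many simulations, watch driving improve — can be sketched with tabular Q-learning on a tiny invented "highway" MDP. The states, actions, and dynamics below are illustrative assumptions, not the demo's actual model.]

```python
# Hedged toy example: tabular Q-learning where the only strong signal
# is a penalty for collisions, loosely echoing the demo's reward for
# "not having accidents". Real systems run millions of richer simulations.

import random

random.seed(0)

STATES = ["clear", "car_ahead"]
ACTIONS = ["stay", "change_lane"]
ALPHA, GAMMA, EPISODES = 0.1, 0.9, 5000

def step(state, action):
    """Toy dynamics: staying in lane behind another car causes a crash."""
    if state == "car_ahead" and action == "stay":
        return "clear", -1.0            # collision: penalized
    return random.choice(STATES), 0.1   # safe driving: small reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

for _ in range(EPISODES):
    s = random.choice(STATES)
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    s2, r = step(s, a)
    # Standard Q-learning update toward reward plus discounted future value.
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
```

After training, the learned values favor changing lanes when a car is ahead; nobody programmed that rule, it emerged from the reward, which is the "learning, not programming" distinction made in the conversation.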
And then you've got to go back and change it. This is a much more dynamic, real-time environment that we're entering into with IoT. Right, I mean machine-to-human, which was kind of the focus last year and the years before, was about how you make interactions between computers and humans better. But now it's about machine to machine, and how do you make machines interact better with other machines? And that's where it gets really competitive. I mean, you can imagine with drones, for example, for applications where you have drones against drones, the drones that are faster are going to be the ones that win. Right, right. It's funny, we were just here last week at the commercial drone show. It is pretty interesting how they're designing the drones now as a three-part platform. So there's the platform that flies around. There's the payload, which can be different sensors or whatever it's carrying, which could be herbicide if it's an agricultural drone. And then they've opened up the SDKs, both on the control side as well as the mobile side, in terms of the control. So it's a very interesting way that all these things now tie together via software, but as you say, using machine learning you can train them to work together even better, quicker, faster. Right, I mean, having a swarm or cluster of these machines that work with each other, you can really do interesting things. Yeah, that's the whole next thing, right? Instead of one to one, it's many to many. And then when swarms interact with other swarms, I think that's really fascinating. So all right, is that what we're going to be talking about if we connect in 2018? The year's almost over. What are your top priorities for next year? Our top priority, we think, is that FPGAs will play this important part. The GPU, for example, has become a very big part of the supercomputing systems here at this conference.
But the other side of heterogeneous is the FPGA, and the FPGA has seen very minimal adoption so far. The FPGA has the capability, especially when it comes to network I/O transactions and speeding up real-time interactions, to change the world again for HPC. And so I'm expecting that in a couple of years at this HPC conference, what we'll be talking about for the biggest, you know, top 500 supercomputers is how big of an FPGA they have, not how big of a GPU. All right, time will tell. John, thanks for taking a few minutes out of your day to stop by. Okay, thanks, Jeff. Great talking to you. All right, John Lockwood, I'm Jeff Frick. Thank you from Supercomputing 2017. Thanks for watching. Bye.