Prepare for the extraction point. We've been briefed on all the important stories and events in the world of emerging information. Now it's time to extract the data and turn it into action. Live from the SiliconANGLE studios in the heart of Silicon Valley, this is Extraction Point with John Furrier.

I understand, but please tell us a little bit about who you are and your background.

Well, I was trained first as a chemist and then at Berkeley as a nuclear scientist, doing experiments in particle physics. And what happened was we actually couldn't understand the data we were taking. Now, at that time the data was only six-dimensional; that's kindergarten, and that changed quickly. So I started working first on batch programs to try and understand it, which was marginally successful. But I said, I really don't understand what's going on, and sometimes there were problems in the experiment. So I developed the first interactive system at Berkeley and then actually connected it to the real-time experiment, so you could see things as they were coming in, and we had such success with that in understanding. But in the time it took me to do that, the six dimensions had become 12, and so I had to do more. It turns out I spent more and more time trying to develop tools to handle complex information. When I finished my degree, I did a postdoc in nuclear physics, but then the choice came: do I stay in physics, or do I go into computing? And I decided, really, there was something very satisfying about understanding what was happening, because now we're probably up to 15-dimensional data. So not only did I spend time developing software, interactive software, personal software, but then the machines weren't fast enough, so I started designing computers.

It's a moving train, right?

But the reason that's interesting, and if I had a comment to other people in the field it would be this: to me the computer has always been a means to an end, to understand whatever it is I'm trying to understand. Not an end in itself. And I worry at times that people who were trained in computer science come out treating the computer as the end result. Often I have encountered, sadly, people in very prominent positions who don't really understand how people use the computer. And that's sad. To me, no software that I've developed, no hardware that I've developed, has any other purpose than to help me and others understand. And if it doesn't do that, it really isn't worth my time and effort.

So talk a little bit about this example that you shared with the audience today. This is a fascinating piece here.

We started this as an advanced research program at Sandia over a decade ago. At that time the catchphrase was virtual reality; essentially it was immersive environments. And we had the idea, though I don't like the term virtual reality, that this stuff puts you close to information. It wasn't like we were trying to create a reproduction of, say, this building. We were putting you in places where you can't go in real life. And if we did this in a way that plays to the human function, the way humans actually interact with their space, learning would increase. And we were successful there. We were so successful that it spun out from Sandia as a private company, then went public, and the successes were across the board. What we did was develop a system that recognizes that when you interact with your computer today, you're bending your will to it.
You interact the way it tells you to interact. You use a keyboard, even mouse clicks and things like that. That's not the way you do things, as I said in my talk. Imagine if you had to drive a car the way you operate your computer, where some virtual buttons, let's say, came down three-dimensionally and you would press them. You wouldn't drive a car that way, but you will run your computer that way.

And what people have lost sight of, and one of the things I mentioned, is a study done by IBM some years ago. They were interested in productivity, so they ran a test with a CAD program, a design program, where they would give people a specific thing to do. What the users didn't know was that between the time you said do this, hit the key, and the time the answer came back, the researchers had put in a little knob that varied, so they could put a delay in. They wanted to see how that delay, just between hitting the key and getting the result, affected your productivity. Well, they started lowering the time, lowering the time. They got down to a second, and productivity was shooting up. They said, okay, one-second response. Then they lowered it: half a second, productivity shot up. Three-tenths of a second, productivity shot up. It's like, my god, the faster we go. When the computer is truly responding to you, as fast as you can ask questions you get answers, your whole way of working changes. It's like a video game. You become engrossed.

So, question for you. Obviously the personal computer revolution put a lot in place that became static, glass ceilings if you will, relative to the design. But with cloud and mobility, Eric Schmidt has been talking about Google designing for mobile, there's an opportunity for young, smart guys in Berkeley or wherever to design the next-gen product. What would you advise them? What would you share with them, given that mobile is an opportunity to change the game a bit? Because now you have form-factor changes. It's potentially a new car: the PC is the horse and buggy, and mobile could be the car, if the analogy stretches. If there's an opportunity to influence a generation, what would you say? Throw it away, redevelop, build a platform?

What I would say, based on actual real-world experience that we've had, and this may be hard for you to believe, is: provide a human interface to that data. Now, I'm going to be a little demanding. I'm going to tell you that the time between when I query and when the answer comes back had better be less than a second. And the way I interact with it is not pushing buttons and stuff; you free me. Again, think about driving a car. Think about what you do. You don't look at your hands. You don't look where the buttons are. And yet when you're driving a car, you're taking in, though you don't think you are, motor noise. You say, well, I don't hear the motor noise. Let the motor make a ping and see if you don't hear it. Vibration from the road. You say, I don't feel the vibration. Yes, you do. Your mind processes the roughness of the road. And you can be talking to someone, and you can have the radio on. You're doing all of these things in real time, and you're not breaking a sweat. Make the computer respond like that. If you make the computer respond like that to these large data sets, if you allow people to ask questions, that's really the miraculous thing: no one knows what's in the data sets. No one. Even people who think they do, I will guarantee you there are surprises in them.
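To make the latency point concrete, here is a minimal sketch of a delay-injection harness like the one the IBM story describes. The function names, the think-time distribution, and the session length are all hypothetical assumptions; the study's actual setup isn't given here.

```python
import random

# Hypothetical sketch of the delay experiment described above: a simulated
# user alternates "think time" with a fixed system response delay, and we
# count operations completed in a fixed session. All numbers are assumptions.

def run_session(response_delay_s: float, session_s: float = 60.0) -> int:
    """Return how many operations the simulated user completes in a session."""
    completed = 0
    clock = 0.0
    while clock < session_s:
        think_time = random.uniform(0.5, 1.5)   # user decides on the next action
        clock += think_time + response_delay_s  # then waits on the system
        completed += 1
    return completed

if __name__ == "__main__":
    random.seed(0)
    for delay in (2.0, 1.0, 0.5, 0.3):
        print(f"delay={delay:.1f}s -> {run_session(delay)} ops per minute")
```

Note that this linear model actually understates what the interview describes: the study saw productivity rise sharply below one second, which is usually attributed to the user staying engaged in the task rather than merely saving wait time.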
Is there an analogy in your mind to chemistry with content? I mean, data ultimately is information, different elements of data, different meanings, different databases. In a way, it's almost like chemistry, or physics and chemistry blending together. Because if you want to have a low-latency response like that, you've got to have a new way to interface with the data at a root level.

Absolutely, you do. And do something different.

So I guess that's kind of a mind-blowing position to get these young computer scientists to.

But it's gotten easier. When I started this work, virtual reality was a big thing at the time. I didn't, and still don't, like the name virtual reality. People developed interfaces, but they weren't trying to use them for data; they were trying to use them for Hollywood and various other things. But we've come a long way. You can buy a stereo TV now, very inexpensively. So I can show you your data in three dimensions on your own stereo TV. And there's the advent of many of the game systems, which will recognize hand motions and things like that. Now, that can be used as a gimmick; all of this stuff can be. But it can also be used to help you. You want to turn a data set around? Just do that.

Yeah, it's like what a bit is to a byte. You need to think about large data sets in that kind of reference frame, because a whole new processor, a whole new operating environment has to kind of be created. I mean, is it a re-creation? Or is it, you know...?

Historically in business, if you were a computer company and two people came to you with plans for something new, and one of them was to enhance the user interface and the other was to make the processor 10% faster, would you like to bet which one would get the money? The reason is that having the processor go 10% faster is something that can go up on a sign. You can use it in advertising.

Yeah, it's a gimmick.

The fact that I made a user 50% faster is much more difficult to quantify, so they didn't put the money there, because they didn't think it would bring returns. But we are reaching the point now, and it's the theme of this whole conference, where you're drowning in data. And I will tell you from firsthand, real-world problems: we have accelerated people's comprehension and understanding of data by three orders of magnitude, 1,000 times.

Can you give us an example? Is that good or bad in your mind?

Oh, that's fantastic. You ought to see it.

Three orders of magnitude. It's Moore's law for the brain.

I'll give you two examples that sort of illustrate it. In the first case, we had a company that will remain nameless, one of the largest chip manufacturers in the country, that had prototyped a...

AMD.

I'm not telling you who it is.

AMD or Intel.

...that had prototyped a new chip, and they had five different programs that ran analyses on it: vibrational, heat, electrical. And they were trying to figure it out; they had screens they could bring these data up on. We fused it, turned it into a chip, and allowed you to fly around it. The engineer in charge found a flaw in the design in 15 minutes and corrected it. Didn't know it was there. 15 minutes. They'd had it for four months.
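The chip story is, at its root, a data-fusion move: several independent analysis outputs collapsed into one set of sensory cues per component, so a viewer sees color and hears sound instead of reading volts and amps. Here is a hedged sketch of that idea; every name, unit, and threshold below is a made-up assumption for illustration, and the real system rendered a full immersive 3D model rather than printed cues.

```python
from dataclasses import dataclass

@dataclass
class ComponentReadings:
    """Outputs of separate analysis programs for one chip component (hypothetical)."""
    name: str
    heat_c: float        # thermal analysis
    vibration_g: float   # vibrational analysis
    current_a: float     # electrical analysis

def fuse(r: ComponentReadings) -> dict:
    """Fuse the separate channels into display cues: a color plus an audible alert."""
    # Normalize each channel against an assumed safe limit, then take the worst.
    worst = max(r.heat_c / 85.0, r.vibration_g / 2.0, r.current_a / 1.5)
    return {
        "component": r.name,
        "color": "red" if worst > 1.0 else "yellow" if worst > 0.8 else "green",
        "alert_sound": worst > 1.0,  # the "something went red, sounds went off" moment
    }

for r in [ComponentReadings("alu", 78.0, 0.4, 1.2),
          ComponentReadings("io_pad", 91.0, 0.3, 1.7)]:
    print(fuse(r))
```

The design point, as the interview tells it, is that the viewer never needs the underlying engineering vocabulary; the fused cues are enough for even a non-engineer to notice and localize a fault.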
But that's not the story. The story is, as an item of curiosity, we actually queried people in the company: who knows the least about electricity? One gentleman volunteered his wife and said she still blows the circuits out in the house. So we actually brought her in and put her down in the same model. We did not talk about volts or amps or circuits or ASICs or anything. There was color flowing over this thing, there was sound, and she could fly around and touch things and things would happen. She really didn't know what she was doing. But you know what? Something went red, sounds went off: what the hell, what's going on, what happened? She figured out, look, there's something going on here. She found the problem. And she suggested a solution to that problem. Now, the only difference between her and the EE is that it took her 30 minutes instead of 15.

That's fascinating. So the human mind is its own processor. What you're getting at is that the current state of what computers have been designed for is like a horse and buggy. And people get funded based on certain standards, but a new standard kind of needs to evolve.

I'll give you one more example in a different area. This one I can mention because enough time has passed: Roger Penske and Goodyear Tire and Rubber. Penske was running race cars, obviously, and he was losing races. Not by a lot, by fractions of a second, but it was continuous. This was bad. Why are we losing races? They couldn't figure it out. So they instrumented the car. They put telemetry on it that would put NASA to shame. They ran it on five different tracks, full races, brought all the data back, and sat a team of people down. Okay, why are we losing races? Two years later, they hadn't a clue. Now, they were using the same sorts of interfaces, frankly, that you see here: graphs spread out, comparative graphs of all the stuff sliced this way and that. But they didn't know what they were looking for. They just knew they were losing races, and somewhere in that information was the answer to why. Well, Penske, after spending several million dollars, said, okay, we're getting nowhere, I'm pulling the plug. And as a last resort, they came to us. It took us about two months to build the model with all that data in it. All of it, simultaneous, 20 dimensions. There wasn't a number showing. There were no graphs showing. None of that. The wheels on the car would morph in size as the pressure changed. You'd think it's cartoonish, but everything that was happening was exaggerated so that as you drove the car, you could see it. You could experience it. Five minutes, they found the answer. Two years, nothing. But the computer didn't find it. The human mind found it. They just had the data in front of them.
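The racing story turns on deliberate exaggeration: small deviations in a telemetry channel become large, immediately visible changes in the model. A minimal sketch of that mapping follows; the nominal pressure, gain, and clamp are all hypothetical assumptions.

```python
# Hypothetical exaggeration mapping: tire pressure deviations from nominal
# are amplified into a visual scale factor, like the wheels morphing in size.
NOMINAL_PSI = 30.0
GAIN = 10.0  # assumed exaggeration factor: 1 psi off nominal reads as a 33% size change

def wheel_scale(pressure_psi: float) -> float:
    """Visual scale factor for drawing a wheel at a given tire pressure."""
    deviation = (pressure_psi - NOMINAL_PSI) / NOMINAL_PSI
    return max(0.2, 1.0 + GAIN * deviation)  # clamp so the wheel never vanishes

for psi in (28.5, 30.0, 31.2):
    print(f"{psi} psi -> draw wheel at {wheel_scale(psi):.2f}x size")
```

With 20 channels mapped this way at once, an anomaly shows up as something visibly misshapen on the car rather than as a number buried in a graph.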
That's been a big theme in this conference, the human aspect of data curation. In the linguistics world, you'd have to have some knowledge around ontologies, which has been a field in AI and academia, where machines can do something. But without human interaction, this data stuff doesn't work, because there is an element of humanness that needs to interface with the machine. We've heard a little bit about that from some of the data science guys, but I never thought about it at that level. Even Joseph Turian, who was here, we asked him about Watson, and he said, well, the interesting thing about Watson is that there are humans involved in optimizing it, not just machines.

I gave an example of something that's going on at the University of Washington called Foldit. They're trying to figure out how complex proteins fold together, because it's important to understand how they're folded; that affects how they interact with other things in the body. So they have a computer program, developed over years, that tries to figure out how a given protein will fold. The guy thought, well, I wonder how humans would do on that problem. So he got a bunch of gamers. They had 10 problems that they'd submitted to the computer, and they gave the same 10 problems to the gamers. The results went like this: of the 10 problems, the gamers beat the computer five times and tied it three times. Two of the times they didn't get a solution at all, and the computer did. Unfortunately, the computer's solution was wrong in both of those cases. So their conclusion was that the best way is actually a human-computer interaction. You can go through why that happened: the gamers were willing to take risks that the computer wasn't. They had a long view. They changed strategies when a strategy wasn't working; they didn't keep crunching. And they even knew how to start the problem better than the computer did. And that applies to big data. If you want to have effectiveness, work as a human-computer hybrid, which means you want the human to act with optimal efficiency. You want them to really be engaged. And I think computer gaming is an example: people get hooked on the games because of the feedback.

Where are you guys located? Are you in the Bay Area?

New Mexico.

New Mexico. Okay. So we'd like to keep in touch with you. We really appreciate the insight, and I really think it's quite relevant as data science and computer science and social science and cognitive science all intersect here. And again, that's the word: we're in the embryonic stage of this entire revolution, but mobile is driving a lot of change. So really appreciate the insight and experience, and we'd love to have you back to kind of mentor some of the younger generation.

Oh, thank you for having me. It was fantastic.

So the company is Event Horizon; it's eventhorizoncorp.com. And this is Dr. Creve Maples, who's the CEO. So thanks.