So we have an interesting story here. Alistair came by and told me a little bit about it. Please have a seat. Kreev, Dave Vellante, good to see you again. Thanks for coming on. This is my co-host, John Furrier.

Hi. How are you? Nice to meet you.

So Kreev, why don't you introduce yourself to the audience, and then we'll talk a little bit about this story you have, which is pretty transformative from what I understand. Please, tell us a little bit about who you are and your background.

Well, I was trained first as a chemist and then, at Berkeley, as a nuclear scientist. I was doing experiments in particle physics, and what happened was we actually couldn't understand the data we were taking. Now, at that time, it was only six-dimensional. Oh, believe me, that changed quickly; that's kindergarten. I started working first on batch programs to try and understand it, which was marginally successful. But I said, I really don't understand what's going on, and sometimes there were problems in the experiment. So I developed the first interactive system at Berkeley and then actually connected it to the real-time experiment, so you could see things as they were coming in. And we had such success with that in terms of understanding. But in the time it took me to do that, the six dimensions had become twelve, and so I had to do more. It turns out I spent more and more time trying to develop tools to handle complex information. When I finished my degree, I did a postdoc in nuclear physics. But then the choice came: do I stay in physics, or do I go into computing? And I decided I really liked it. There was something very satisfying about understanding what was happening. By then we were probably up to 15-dimensional data. So not only did I spend time developing interactive, personal software, but then the machines weren't fast enough, so I started designing computers.
It's a moving train, right? It never stops moving.

But the reason that's interesting is that, if I had a comment for other people in the field, it's this: to me, the computer has always been a means to an end, a way to understand whatever it is I'm trying to understand, not an end in itself. And I worry at times that people who were trained in computer science come to see the computer as the end result. Often I have encountered, sadly, people in very prominent positions who don't really understand how people use the computer. And that's sad. To me, no software I've developed, no hardware I've developed, has any other purpose than to help me and others understand. If it doesn't do that, it really isn't worth my time and effort.

So talk a little bit about this example that you shared with the audience today. This is a fascinating piece here.

It started as an advanced research program at Sandia over a decade ago. At that time, the catchphrase was virtual reality; it was immersive environments. I don't like the term virtual reality, but we had the idea that this stuff puts you close to information. It wasn't like we were trying to create a reproduction of, say, this building. We were putting you in places where you can't go in real life. And if we did this in a way that played to human function, the way humans actually interact with their space, learning would increase. And we were successful there. We were so successful that it spun out from Sandia as a private company, which then went public. And the success was across the board. What we did was develop a system that doesn't work the way computers usually do. When you interact with a conventional computer, you're bending your will to it. You interact the way it tells you to interact. You use a keyboard. Even mouse clicks and things like that, that's not the way you do things, as I said in my talk. Imagine if you had to drive a car the way you operate your computer.
It would be as if some virtual buttons came down three-dimensionally, and you had to press them to steer. You couldn't drive a car that way. But you will run your computer that way. What people have lost sight of, and one of the things I mentioned, is a study done by IBM some years ago. They were interested in productivity. So they ran a test with a CAD program, a design program, where they would give people a specific thing to do. What the subjects didn't know was that, between the time you hit the key and the time the answer came back, IBM had put in a little knob that varied, so they could insert a delay. They wanted to see how that delay, just between pressing the key and seeing the result, affected your productivity. Well, they started lowering the time, lowering the time. They got down to a second, and productivity was shooting up. OK, one-second response; then they lowered it. Half a second, productivity shot up. Three-tenths of a second, productivity shot up. My god, the faster we go. What happens when the computer is truly responding to you, when you get answers as fast as you can ask questions? Your whole way of working changes. It's like a video game. You become engrossed.

So a question for you. Obviously, the personal computer revolution put a lot in place that became static, glass ceilings, if you will, relative to the design. But with cloud and mobility, Eric Schmidt has been talking about Google designing for mobile. So for young, smart people at Berkeley or wherever, designing the next-gen product, what would you advise them? What would you share with them, given that mobile is an opportunity to change the game a bit? Because now you have form-factor changes. If the PC is the horse and buggy, mobile could potentially be the car, if the analogy stretches. If there's an opportunity to influence a generation, what would you say? Throw it away, redevelop, rebuild the platform?
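The IBM experiment described above, a hidden knob inserting artificial delay between a keypress and the system's response, can be sketched as a toy throughput model. The task timings and delay values here are hypothetical, purely for illustration:

```python
def run_session(injected_delay_s, session_s=2.0, work_s=0.05):
    """Simulate a user issuing commands for session_s seconds.

    Each command costs work_s of user think/act time, and the system
    adds injected_delay_s before responding (the hidden 'knob' in the
    IBM study). Returns commands completed, a crude productivity proxy.
    """
    elapsed, completed = 0.0, 0
    while elapsed + work_s + injected_delay_s <= session_s:
        elapsed += work_s + injected_delay_s  # user acts, system responds
        completed += 1
    return completed

# Throughput rises steeply as the response delay shrinks.
for delay_s in (1.0, 0.5, 0.3, 0.1):
    print(f"{delay_s:.1f}s delay -> {run_session(delay_s)} commands")
```

Note that this simple model only captures mechanical throughput; the study's point was that below roughly a second the gains were steeper than a linear model predicts, because the user's engagement changes qualitatively.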
What I would say is, based on actual real-world experience that we've had, and this may be hard for you to believe: provide a human interface to that data. Now, I'm going to be a little demanding. I'm going to tell you that the time between my query and the result coming back had better be less than a second. And the way I interact with it is not pushing buttons and such. Again, think about driving a car. Think about what you do. You don't look at your hands. You don't look where the buttons are. And yet when you're driving a car, you're taking in, and you don't think you are, but you're taking in motor noise. You say, well, I don't hear the motor noise. Let the motor make a ping and see if you don't hear it. Vibration from the road. You say, I don't feel the vibration? Yes, you do. Your mind processes the roughness of the road. And you can be talking to someone, and you can have the radio on. You're doing all of these things in real time, and you're not breaking a sweat. Make the computer respond like that. If you make the computer respond like that to these large data sets, if you allow people to ask questions, then that's really the miraculous thing. No one knows what's in the data sets. No one. Even for people who think they do, I will guarantee you there are surprises in them.

Is there an analogy in your mind to chemistry with content? I mean, data is, ultimately, information. There are different elements of data, different meanings, different databases. In a way, it's almost like a chemistry, or physics and chemistry blending together. Because if you want a low-latency response like that, you've got to have a new way to interface with the data at a root level.

Absolutely, you do. And do something different.

So I guess that's kind of a mind-blowing position to bring these young computer scientists to.

But it's gotten easier.
When I started this work, virtual reality was a big thing. I didn't, and still don't, like the name virtual reality. People developed interfaces, but they weren't trying to use them for data. They were trying to use them for Hollywood and various other things. But we've come a long way. You can buy a stereo TV now, very inexpensively. So I can show you your data in three dimensions on your own stereo TV. Then there's the advent of the many game systems that recognize hand motions and things like that. Now, all of this stuff can be used as a gimmick. But it can also be used to help you. You want to turn a data set around? Just do that.

Like what a bit is to a byte, you think about large data sets in that kind of frame, because a whole new processor, a whole new operating environment has to be created. I mean, is it a re-creation, or is it a?

Well, historically in business, if you were a computer company and two people came to you with plans for something new, and one of them was to enhance the user interface, and the other was to make the processor 10% faster, would you like to bet which one would get the money? The reason is that having the processor go 10% faster is something that can go up on a sign. You can use it in advertising.

Yeah, it's a gimmick.

The fact that I made a user 50% faster is much more difficult to quantify. And they didn't put the money there, because they didn't think it would bring returns. But we are reaching the point now, as this whole conference shows, where you're drowning in data. And I will tell you, from firsthand real-world problems, we have accelerated people's comprehension and understanding of data by three orders of magnitude, 1,000 times.

Can you give us an example? Is that good or bad in your mind?

Oh, that's fantastic. You ought to see it.

Three orders of magnitude.

Three orders of magnitude. It's Moore's law for the brain.
I'll give you an example; I'll give you two that sort of illustrate it. In the first case, we had a company that will remain nameless, one of the largest chip manufacturers in the country, that had prototyped a new chip. They had five different programs that ran analyses on it: vibrational, heat, electrical, and so on, and they were trying to figure it out. They had screens they could bring these data up on. We fused the data, turned it into a virtual chip, and allowed you to fly around it. The engineer in charge found a flaw in the design in 15 minutes and corrected it. He hadn't known it was there. Fifteen minutes, and they'd had it for four months. But that's not the story. The story is that, as an item of curiosity, we actually queried people in the company: who knows the least about electricity? One gentleman volunteered his wife and said she still blows the circuits out in the house. So we actually brought her in and put her down in the same model. We did not talk about volts, or amps, or circuits, or ASICs, or anything. There was color flowing over this thing, there was sound, and she could fly and touch things and things would happen. She really didn't know what she was doing. But you know what? Something went red. Sounds went off. What the hell? What's going on? What happened? We said, well, we don't know; figure it out. She went over to it: look, there's something going on here. She found the problem. And she suggested a solution to that problem. Now, the only difference between her and the EE is that it took her 30 minutes instead of 15.

That's fascinating. So the human mind is its own processor. What you're getting at is that what computers have been designed for to date is like the horse and buggy, and that there's something beyond that. People get funded based upon certain standards, but a new standard needs to evolve.

I'll give you one more example in a different area. This one I can mention, because enough time has passed: Roger Penske and Goodyear Tire and Rubber.
Penske was running race cars, obviously. And he was losing races. Not by a lot, by fractions of a second, but it was continuous. This was bad. Why are we losing races? They couldn't figure it out. So they instrumented the car. They put telemetry on it that would put NASA to shame. They broadcast it at five different tracks, full races, brought all the data back, and sat a team of people down. OK, why are we losing races? Two years later, they hadn't a clue. Now, they were using the same sorts of interfaces, frankly, that you see here. They spread out graphs, comparative graphs, all the stuff sliced this way and that. But they didn't know what they were looking for. They just knew they were losing races. Somewhere in that information was the answer to why. Well, Penske, after spending several million dollars, said, OK, we're getting nowhere; I'm pulling the plug. And as a last resort, they came to us. It took us about two months to build a model with all that data in it, all of it, simultaneous, 20 dimensions. There wasn't a number showing. There was no graph showing, none of that. Wheels on the car would morph in size as the pressure changed. You'd think it's cartoonish, but everything that was happening was exaggerated, so that as you drove the car, you could see it. You could experience it. Five minutes, and they found the answer. Two years, nothing. But the computer didn't find it. The human mind found it. They just had the data in front of them.

That's been a big theme in this conference, the human aspect of data curation. In the linguistics world, you'd have to have some knowledge around ontologies, which has been a field in AI and academia where machines can do something. But without human interaction, this data stuff doesn't work, because there is an element of humanness that needs to interface with the machine.
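The technique in the race-car story, wheels morphing in size with tire pressure and everything exaggerated so it can be seen rather than read, amounts to mapping each telemetry channel onto a perceptual attribute. A minimal sketch of that idea, with purely hypothetical channel names and ranges:

```python
def normalize(value, lo, hi):
    """Clamp and scale a raw reading into [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def wheel_glyph(pressure_psi, temp_c):
    """Map two telemetry channels to exaggerated visual attributes.

    Pressure drives the wheel's rendered radius (a soft tire looks
    visibly smaller); temperature drives its color from blue (cool)
    to red (hot). The ranges below are illustrative, not real specs.
    """
    p = normalize(pressure_psi, 20.0, 40.0)
    t = normalize(temp_c, 60.0, 120.0)
    radius = 0.5 + 1.0 * p                         # exaggerated size change
    color = (int(255 * t), 0, int(255 * (1 - t)))  # RGB ramp: blue -> red
    return radius, color

print(wheel_glyph(38.0, 70.0))   # healthy tire: near-full radius, bluish
print(wheel_glyph(24.0, 115.0))  # low, hot tire: shrunken and red
```

A real system would run such mappings over all twenty channels at frame rate, plus sonification; the point is that each dimension gets a perceptual encoding the mind can absorb in parallel, much as a driver absorbs engine noise and road vibration without effort.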