Okay, we're back live here in Silicon Valley, California at the Brocade Technology and Analyst Day. This is SiliconANGLE.tv, theCUBE, our flagship program. We go out to the events and talk to the brightest minds we can find: entrepreneurs, CEOs, executives, anyone who's got signal in the noise, we want to extract it. We have Nagui Halim, who's here with IBM, an IBM Research Fellow going back to 1980, way back in the old Tom Watson generation, early at IBM, and obviously IBM's been transformed since. I'm joined by co-host Stu Miniman from wikibon.org, top analyst in the networking space, writing the best real-time market research.

Nagui, tell us about your current experience with IBM right now, because IBM is really leading the charge on many fronts. Certainly the IBM transformation of the past two decades has been interesting to watch, an amazing turnaround story, and then growth. And now you've got huge press props, industry props, for the work we're seeing with the Watson computer, a lot of big data, a lot of social commerce, a lot of really great stuff going on in cloud within IBM, but in particular the big data effort has been a pretty smashing success. So give us an update on what's going on right now in your world around big data. You've got social, mobile, and cloud exploding onto the scene, changing the tech landscape: powering the iPhone 5, bring your own device to work, consumerization of IT, all this stuff's happening, and literally under the hood is a complete re-architecture.

I think that pretty accurately captures what's happening. I would say that to make headway on a topic like this, though, you need a conceptual model of what's going on, right? So what I like to do is think about big data not from the data perspective, but from the perspective of what it is people are creating. People are creating very complex systems.
They're creating immensely complicated telco networks, transportation networks, and in fact the systems we create out of people for social interaction are a different kind of system. So many, many complex systems. And when you put a system like that together, the objective is to accomplish some larger purpose: to deliver a good service, to do foreign exchange, to do capital markets, some purpose that you have. And those purposes are always very important, sometimes absolutely critical. And when those systems operate, what's really happening is that they throw off a lot of data, and that data can now be used to understand what the system is doing. So that's the perspective I've developed on this. So you have man-made systems, and you have people themselves, who are complicated entities, right? Human beings are very complex, so we're talking about medical kinds of applications, but then also natural systems.

In a way, the schema of IT is changing to all kinds of different schemas. And that brings up the whole notion that we're in a connected world, right? As you said, this construct is IT and infrastructure, providing all kinds of packets moving around. But now you've got a consumer connected to the network with a device, and that's providing data. And so the advent of NoSQL databases and virtualization provides a new paradigm for developers. And obviously with solid state, you now have effectively unlimited memory. So this canvas that's out there right now, from a creativity standpoint, is pretty amazing. So share your insights on what you're seeing in this transformation to the modern era, where everything's changing: programmability, things like addressable memory, spinning disk giving way to solid state, database technology changing. It's amazing, and hard to really get your arms around.
Right, but all of the things you're talking about are still about creating capability and facilities to make life better, right? Whether it's dissemination of information to individuals through personal devices, or a hospital setting where you're monitoring patients very closely, or mapping, or any of these other things. These are systems that provide some kind of product, some kind of benefit, for individuals. And now those systems are being built out of all these new technology pieces you're talking about: you build them out of solid state instead of disk, you build them out of high-speed networks instead of old clumsy networks, et cetera. But it's all to advance the provision of services to people. We're providing better healthcare, better communication, better social awareness. So we're building things to do that. And of course I'm very familiar with and work with all of the underlying technologies. But from a big data perspective, what I'm really focused on is that each of the different industries building systems has some important question it wants to ask and answer. So if I'm building a network, I want to know how that network's being used.

So it'd be safe to say there are two main benefits of big data: one, answering existing questions faster, and two, answering questions you never could answer before.

That's right, though I would say not just faster but also more accurately and more automatically. As the systems we're building get more complex, they have more scope, they do more things, they're more automated. And there's also the risk that those systems won't do exactly what you want, because they're difficult to control. People are good at building the systems in the first place; once they're built, you don't necessarily know exactly what's going on under the covers.
Because things are happening so quickly and on such a scale. And so big data can now provide instrumentation, essentially. Think about it this way: a new kind of instrument for looking into your systems and figuring out where the inefficiencies are, where fraud might be taking place, where the system is not operating as designed, where some emergent behavior may be coming out. So for example, where the system may be sending too much traffic through one place, so that it becomes too much of a single point of failure. Many things can happen, right? What's fascinating to me is that each industry has different kinds of questions. If you're an agriculturalist, you may want to know how your seed is going to perform. If you're a transportation guy, you may want to know where the pinch point is in your transport network. If you're distributing electrical power, you may want to know where you have power theft or loss, or where you have equipment that may be failing because it's overused.

I love that term, instrumentation, because when you think about it, it's like a network management problem: I want to instrument my packets on the network. But in reality, life with big data, as you mentioned, is one where you need a lot of policy, another network concept. In a way, the networks we're living in are distributed networks, right? Every piece of data, whether or not a human being is involved, is part of a distributed network. So in a way, network principles now apply to the data. So how do we enable this environment with dynamic policy? Because essentially what you're saying is you can't project one industry's syntax onto another; it's different from one to the next. So you have to have some flexibility and agility in the systems. I've got to be adaptive and flexible, so that one industry can ask one question and be just as powerful as the next industry, which has a completely different semantic question.
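To make the instrumentation idea concrete, here's a minimal sketch of a sliding-window monitor that flags a node carrying a disproportionate share of traffic, the "too much of a single point of failure" case described above. The class name, window size, and threshold are illustrative assumptions, not IBM technology:

```python
from collections import Counter, deque

class TrafficInstrument:
    """Sliding-window monitor: flags a node that carries a
    disproportionate share of recent traffic (a potential
    single point of failure)."""

    def __init__(self, window=100, share_threshold=0.5):
        # keep only the most recent `window` (node, bytes) events
        self.events = deque(maxlen=window)
        self.share_threshold = share_threshold

    def observe(self, node, nbytes):
        """Record one traffic event; return the hottest node if it
        now exceeds the allowed share of windowed traffic, else None."""
        self.events.append((node, nbytes))
        totals = Counter()
        for n, b in self.events:
            totals[n] += b
        grand_total = sum(totals.values())
        hot_node, hot_bytes = totals.most_common(1)[0]
        if grand_total and hot_bytes / grand_total > self.share_threshold:
            return hot_node
        return None
```

Feeding it balanced traffic returns None; once one node starts dominating the window, `observe` returns that node, which is the kind of emergent hotspot this instrumentation perspective is meant to surface.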
Right, they absolutely do. And so what's really needed, first of all, is to explore very closely, through partnerships, what a particular industry is all about. We have an interesting specialization in modern society where you have specialists in the field: guys who know all about oil exploration, guys who know all about medicine, guys who know all about networks. They may not understand analytics; they may not understand how to exploit the data their systems throw off. And IBM cannot have expertise in every single domain, so we need to partner closely with domain experts to understand the things you're talking about.

The technology you guys are developing around machine learning is really cool too. So I want to ask you a question along the lines of how you get intelligent systems to be more intelligent. An area I've been thinking about for quite some time, because we're in the content business, which is essentially internet infrastructure at this point, is this notion of meta-reasoning. Reasoning is an AI-type concept, and things like machine learning and AI are becoming very practical in the industry now. So can you share the IBM insight on some of the research around, I don't want to say classic AI, more like reasoning? Meta-reasoning, like making sense of keywords and interaction data. Essentially that's what we're talking about here: ontologies.

I'm glad you're raising that, because I think there's a big focus on the underlying technologies, NoSQL and so on. And really the way to think about the solutions for these problems is that they require these very advanced platforms, essentially the operating system that runs the applications. Then there's the application layer.
And that application is composed not only of things like machine learning modules, or reasoning modules, or classification modules, et cetera. There are very advanced analytics, but there's also an application structure that has to be created. It's not just the analytic kernels; it's also the combination of those kernels into end-to-end applications that are very smart, flexible, and adaptive. And some of this requires technology that's actually quite advanced, for example planning technology that can understand a little bit about the problem you're solving and potentially reconfigure the applications to better attack that problem.

What's the cutting edge, where are we in this? Because I'm seeing that as an important part of these future apps, especially as things have to be retrofitted for mobile and different kinds of environments, versus the big bloated application modules of the past. So where are we? Are we in the bottom of the first inning? Can you give us some sort of benchmark, in your opinion? Are we in the middle innings?

Let me give you sort of a sliding answer, in the sense that we're pretty good at solving the problems that people want to take on right now. But the point would be that there are problems way beyond people's current ambitions that we can't yet solve. So there's a progression, a ladder, from where the practical problems are today. Today, for example, we may want to look at transaction fraud in financial services; we can do a pretty good job with that. But maybe we're not ready yet to do facial recognition in an airport. That may be beyond what we're able to do. Or real-time speech analysis, right?

Where are we with computing?
We were just at the Intel Developer Forum yesterday, getting down and dirty in some of the weeds around the tech: compute, storage, networking, all the stuff going on as silicon is advanced. So it's not just silicon anymore, it's solutions, right? You're seeing more integration, Intel moving from an element to something more solution-specific. Computing and virtual machines give us more capability. Where are we with that, in your opinion, in terms of advancement? And where do we have a lot more work to do? I mean, with things like crunching numbers and simulations, now we're doing stuff in the cloud. Where are we in that progression?

I feel we still have a ways to go in terms of making significant breakthroughs and getting away from the fairly classic von Neumann programming model. A lot of it's handcrafted work, right? So we're starting to investigate things like the automation of multi-core exploitation, and we're also looking at reprogrammable logic to improve application performance. So lots of things there. But I would say we've hit some stall points, and I'm still looking for some breakthroughs to happen.

So Stu's giving me the look; I've got to get to a question. Stu, I'm sorry for ignoring you.

It's all right, John. I know big data is one of your passions here. So two questions. One is, we're here at Brocade, and I was reading about InfoSphere Streams. If you could talk about the Brocade relationship. And the second piece is, you're in research, and IBM's done quite a lot of acquisitions in the big data space. Matter of fact, Wikibon's based in Marlborough; Netezza's right around the corner from us. You work with the Netezza folks, you work with Cognos. If you could talk a little bit about what you see inside of research versus the M&A, and how those interact. Two very different topics, so I wondered if you could, before we run out of time.
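The "automation of multi-core exploitation" mentioned above is an answer to what is today largely handcrafted work. As a minimal sketch of that handcrafted style, here is a CPU-bound job manually partitioned across worker processes; the function names and chunking scheme are my own illustration, assuming a platform where Python's `multiprocessing` can fork workers:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-bound)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def count_primes_parallel(limit, workers=4):
    """Hand-partition [0, limit) into chunks and farm them out to
    worker processes -- the kind of manual data decomposition that
    automated multi-core exploitation would ideally do for us."""
    step = max(1, limit // workers)
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

The chunk boundaries, worker count, and load balance all have to be chosen by hand here; on CPython, threads would not help with this workload because of the GIL, so processes sidestep it at the cost of exactly this explicit partitioning.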
Brocade and InfoSphere Streams, and how you guys partner together.

That one's pretty simple. InfoSphere Streams is all about this live data acquisition and processing platform. We're talking about extremely high-speed analytics that can take in millions of messages per second and do on-the-fly analysis for anything from financial markets trading to cybersecurity applications to quality-of-service investigations on large networks. So there's a very broad range of things we're able to do. We partnered with Brocade because Brocade has the very well-known MLXe device, a telemetry device that sits on a telecommunications network and provides some first-level handling of the data. So think of it as a sensor on the network that lets Streams, sitting behind that device, process the data for higher-level analytics. So that's a very nice partnership. And furthermore, we can diagnose situations in Streams with very elaborate applications and analytics, and then give feedback to the device, so the MLXe can look at different parts of the network depending on what we're seeing.

And then the second piece is organic research versus M&A.

Right. Well, the research division is a very, very significant body. Not only do we have the headquarters location in New York, but we have labs around the world: in Brazil, in Dublin, in China, in Israel, et cetera. But there are only so many topics you can actually work on, right? You have a finite population. And as we've discussed here, the problems are very wide-ranging in terms of all the things you can do. So we have a set of projects, but then we supplement our activities with targeted acquisitions that fill in gaps in certain areas. Very simple statement.

Do those groups then move into research, or are there folks that join your teams?

Well, typically when we do an acquisition, we earmark some funds for them to work with research.
The research division then has a shot at partnering with that new acquisition to enhance or elaborate its technology, broaden it out, pull it into other activities, et cetera. One of the advantages of having been at research a long time, for me, is that I know a lot of people throughout the division and what the projects are that might bear on a particular acquisition. It's a very interesting time, with acquisitions coming in.

Great. Okay, we are here live inside theCUBE, live coverage at Brocade Tech Day. I'm John Furrier. We'll be right back with our next guest after this short break and bring you all the action from all the innovation here at Brocade. Thanks for watching. We'll be right back.