Live from Las Vegas, it's theCUBE. Covering Edge 2016, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman.

We're back, this is theCUBE, the worldwide leader in live tech coverage. Jamie Thomas is here, she's the general manager of strategy and development at the IBM Systems Group. Jamie, good to see you again, welcome.

It's good to see you guys again.

So your role continues to evolve. We saw you, well, first met you a long time ago when you were at Tivoli, I don't know if you remember me, but then we met when you were running the storage division, and now you're running strategy for Rosamilia with an expanding role into R&D as well, and logistics, supply chain, you've got it all going. So give us the update.

Yeah, well, it's a great opportunity for me and I'll tell you, we have a great team. You know, my roots in IBM are really research and development, so I'm kind of going back to my roots to some degree, but I'm really managing a team that's responsible for the next generation of our systems architecture. So these are the rocket scientists that are designing our chips, designing those next-generation Power and mainframe systems, as well as the teams that are producing all these Linux innovations. And of course we not only create the technology, but we have to manufacture and deliver it. So that's where the manufacturing and supply chain teams come into play, and they play a critical role in us achieving our business objectives every day.

So IBM's really good at, when it figures something out, communicating to the world the fundamentals of what's going on. Moore's Law, I mean, you're not seeing clock speeds accelerate anymore. You're seeing all kinds of open, we heard today Hadoop on Z, right? You guys obviously see things like containers and the like, Linux on Z for a long, long time. So you see all these trends; boil it down for us. What's happening in the world that drives your thinking in terms of R&D?
Well, clearly I think you've touched upon one of the more fundamental principles, and that's this open collaboration. We, IBM, have known that open collaboration was here to stay for a long time because it drives the ability to innovate with a set of business partners. It allows us to change business models effectively if we do things the right way. And of course it's all really about innovation at the heart of this. So our teams are very focused on the next wave of open collaboration, and you've heard about some of that today: OpenPOWER, Linux, and blockchain are fundamental parts of this journey. If you look at what we've done with OpenPOWER, I think that OpenPOWER has enabled us to deliver these new systems that we've announced in the last few weeks, these Linux-class systems, working with our partners like NVIDIA and Supermicro to deliver these systems. This open innovation in how we've developed these systems has allowed us to produce systems that are commercially viable for these modern workloads that many of our customers have embarked upon.

So part of the impetus, obviously, is that one company can't do it all. You just don't have unlimited R&D budget. But there's more to it than that. There's speed of innovation, there's pace, there's what Jeff Jonas would call your observation space, which is much, much wider with open collaboration. Can you discuss that a little bit?

Well, if we just look at the OpenPOWER systems as an example, the innovation that we did with NVIDIA's Tesla GPU married with our POWER CPU, our POWER8 CPU, is an example of that innovation, where we were able to use the architecture that we were embarked upon to deliver something faster to the marketplace.
If we look at what we're doing around Linux, it's really about being able to serve a new age of workload, to be able to support things like blockchain effectively, because the ecosystem is going to be able to understand this new workload in the context of Linux. It's where they have their development skills, it allows us to move faster, and it allows us to have clients participate more effectively with us as part of that journey.

So the old model is, okay, I'm going to develop something and I'm going to keep it to myself and not share it, and then, you know, sort of Wikinomics came along and it's like, share everything.

Sharing too much, perhaps.

Yeah, well, right, that's right. So there are unintended consequences, I guess. What are some of the things that have surprised you about OpenPOWER, positive, negative, and different?

I think that one of the things that I'm really impressed with is our teams' ability to embrace Agile and design principles as part of this journey. And what I mean by that, when we talk about things like DevOps and Agile, we always understand that that really does apply to software; it's easy to understand how it applies to software. But how does it apply to hardware? I think that by opening the system up, by adopting new ways to develop, which we have done, that's allowed us to move at a different level of speed, but with quality, and it really is Agile at scale for hardware. So if I look at some of the things our team has done, we're employing advanced EDA simulation as part of how we develop our next-generation chips. It's really cool stuff. We have our own supercomputer in terms of how we're designing these chips, because now we work with a partner called GlobalFoundries. We don't own our manufacturing, I mean our fabrication, for the chips.
We have a partner, and working with that partner has actually allowed us, or rather forced us, to improve our automation, but it also lets us adopt industry principles in terms of how we design those next-generation chips for Power and for the mainframe. And I think that is very important for our business, and it's allowing us to get benefits as well.

Jamie, when IBM divested itself of the x86 business and moved it over to Lenovo, many people thought, oh well, IBM's kind of given up and saying we're not going to go there. You talked about supercomputers; it's really interesting in this space. You see China creating their own chips. We see Oracle and IBM, both of which have very high-performing solutions that don't use the Intel chips. Why is there so much innovation going on in microprocessors these days?

Well, I think there are a number of reasons. If you think about innovation at that level, it's not just about supercomputers, of course, but about commercial application of a lot of these techniques. And as we talk about big data and the need to get insight from that data with analytics, there's a lot of commercial HPC that really crosses many, many industries. And I think that's of interest to us. When we talk about the supercomputer area, though, from the perspective of the United States, of course, where we are today, our ability to innovate and provide the capability that we have to our national laboratories as part of this delivery is very, very important. In fact, I just saw the pictures of the machines being delivered on their way to the national laboratories today. And that's a critical part of what we bring to the table as a vendor. But once again, the learning from that will be applicable to commercial clients across industries, whether that's industrial or financial services or healthcare; they'll all be able to take advantage of that.

Can you also unpack for us Power in the cloud? How does that impact customers?
Why is that preferable to just using commodity x86?

Well, I believe that with Power in the cloud, first of all, the cloud model we know is, in some respects, more about economics and a different way to purchase capability to start with. But Power in the cloud allows clients to take advantage of the characteristics of the Power machine that you would on-premise, obviously: the ability to have advanced analytics processing, the ability to have more data throughput than you would see with a traditional Intel environment. And that's very important in a lot of these different applications today. So our interest in being able to support on-premise and cloud models is to allow our clients to have that purchasing flexibility that many of them desire in the way they purchase infrastructure.

Tell us more about GlobalFoundries. Is that what came out of the Microelectronics deal?

Yeah, so GlobalFoundries is the partner that we now have for chip fabrication, and it came out of the Microelectronics deal. And so once again, it's an example of opening up how we do our process with the chips. We're now using industry standards in terms of how the chips are manufactured. It's our intellectual property, our unique design, but it's industry standards in terms of manufacturing. And it forces us to do things differently, but to me, we have great outcomes so far.

So what does that do for your business? What are the outcomes?

Well, I think this allows us to have an ecosystem that ensures that we have broader scale around our platforms in terms of the supply chain. That's very important. The innovation that I talked about, the speed, obviously is allowing us to fine-tune our processors and make changes more readily throughout the process. This is a process that is fairly scientific, and you want to get it right in the end.
So to the degree that we're able to put these principles into the system, that means we're able to still make changes, but have the reliability that we must have when we ship these products.

So you have strategy in your title; we can talk strategy for a little bit. What are the fundamentals of the strategy? Clearly IBM's chasing value, not volume. What else can you tell us about the strategy? What are the foundations of your strategy?

Well, if you look at the foundational elements of our strategy, we've talked about one of them, which is this open collaboration. That's kind of the ground layer, if you will. That's the basement, the foundation that we're working upon. Then we're building systems that allow us to go after the cognitive marketplace. And these involve systems that can deal with analytics more effectively, both at the compute and the storage layer. So I know you've talked to a lot of storage folks today, and they probably talked to you about software-defined storage and flash, and many of the innovations that are applicable there. And then third is really about making sure that our systems are designed for cloud environments, and that we can support cloud, whether that's an on-premise cloud or an off-premise cloud. Many of our clients do a combination of both. And many of our clients, frankly, will tell you that where they are today is not necessarily where they will be in one to two years. So ultimately, they want flexibility in how they manage their infrastructure and where they place it.

Okay, so we talked about how that open collaboration affects your R&D piece. How about cognitive and cloud? They are affecting it too, obviously, but how are they directly changing how you conduct R&D? And how is that different from what it would have been 10 years ago?

Well, it definitely affects our thought process in terms of who we partner with.
The kind of partnerships that we choose to do are going to be rooted in the kind of workload that we think we need to support. Many of the principles around what we need to do for advanced deep learning, where a learning machine is very different from a programmatic machine, if you will, those ideas and design principles have to be rooted in how we do the chips from day one, so that that's then carried forward into the ultimate delivery. So it's fundamentally important all the way throughout the process.

So one of the things we hear from practitioners, where they're struggling a little bit, is they've got their own data centers, they've got cloud services, they've got data in lots of different places, and how are they getting the most out of that data no matter where it lives? How does IBM look at that? How do you help solve that problem?

Well, I think we are operating on a principle that says we've got to increasingly think about moving the compute to the data, and not moving the data to the compute, because there's just simply too much data out there. So if you look at a lot of what we've done in terms of our compute design, and in particular if you look at software-defined storage, it's really rooted in that principle that we're going to enable you to take advantage of the data wherever it may live. You create a data ocean, if you will, out of the data assets you need to access, whether those assets are on different storage media, whether they're on disk, whether they're on tape, whether they're in object storage, or whether they're on a different form factor that you've had historically. Hadoop is another one we talked about this week, right? Being able to lasso the data that's in Hadoop. But the key thing is being able to take advantage of the data where it exists and move the compute to the data.
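The compute-to-data principle described here can be illustrated with a minimal sketch. This is not IBM's implementation; the function and dataset below are hypothetical, standing in for a storage layer that can evaluate a filter next to the data so only matching rows cross the network.

```python
# Illustrative sketch of "move the compute to the data":
# instead of copying a full dataset across the network and
# filtering it locally, ship a small predicate to where the
# data lives and return only the rows that match.

def run_at_data_source(rows, predicate):
    """Pretend this executes next to the storage layer:
    only rows satisfying the predicate travel back."""
    return [r for r in rows if predicate(r)]

# Hypothetical records living on a remote storage tier.
remote_rows = [
    {"txn": 1, "amount": 120.0, "region": "APAC"},
    {"txn": 2, "amount": 80.0,  "region": "EMEA"},
    {"txn": 3, "amount": 310.0, "region": "APAC"},
]

# Only the APAC rows come back; the bulk of the data stays put.
result = run_at_data_source(remote_rows, lambda r: r["region"] == "APAC")
print(len(result))  # prints 2
```

The few bytes of predicate code move to the data; the data does not move to the code, which is the asymmetry Jamie describes at petabyte scale.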
If you look at a lot of the mainframe environments, that's really important too, because if you look at what we've done with Spark on the mainframe, that's a key part of that strategy. It's about being able to take advantage of Spark, but in the context of where the data lives. Because if you look at the network cost of moving that data around, it's astronomical, and it's very time consuming.

Right, you've got speed-of-light problems. You've got your best people working on that, I'm sure, but they still haven't solved the speed of light. But IoT brings this up: I read a stat the other day on Wikibon, David Floyer published a prediction that said that 95% of IoT edge data will stay there, and analytics will be done there, and then some small portion, maybe 5%, will be sent back somewhere to the data warehouse or whatever. Do you agree with that?

I absolutely agree with it, and I do agree with the speaker this morning, the MIT speaker who spoke about the fact that IoT is going to be a key driver of this. If you think about the instrumented automobile and the supply chain associated with that, or the ecosystem associated with that, there are all kinds of different touchpoints where the data will live, and there's just a huge amount of data that will exist as part of that ecosystem. You're not going to necessarily move that data around, but you need to have the compute be able to access that data appropriately.

So coming back to R&D fundamentals, you're moving five megabytes of code to a petabyte of data. What does that mean from an R&D perspective?

Well, once again, where we really focus a lot of that design point in particular is around the storage design point and how we design the Spectrum family, if you will. Being able to create a system that allows us to reach the data wherever it exists has been a fundamental principle of what we've been doing.
The team announced this week this copy data management, which is all about managing copies of the data more intelligently, being able to pull the data back based on some sort of policy. That's really, really important in that environment. So that's a key aspect, I think.

Flash is the other thing that changes a lot. The bottleneck used to be the spinning disk; now it's the network, right?

Yes, it is.

So talk about how that's changed your thinking.

Well, I think that the network can be a solution to the problem or it can be an inhibitor in some of these new environments, right? We see that every day, so I think that we believe in in-memory systems, certainly, and in a number of these things, like even what we did with NVIDIA, which is really all about eliminating that network bandwidth issue, right? Being able to have a lot more speed in terms of the data-processing aspect of how we do the GPU-to-CPU communication is really a guiding principle of that design. So anywhere we can eliminate network bandwidth, whether it's in that layer or in the layer we talked about where we're having to move data around, I think is really critical in both compute and storage design. And it depends on what the customer's doing; it depends on the workload. And what I do believe is that there is still workload-centric architecture, that there's not one architecture that's going to serve the needs of all workloads. There is workload optimization, and that workload optimization drove us from transactional systems to client-server to web systems to mobile systems, and there will continue to be workload optimization.

Horses for courses.

Yes, I don't see it changing. It's kind of been going on for decades, and we're just into the next evolution of it. The next killer applications are here, and they're typically more analytics-based applications.
And they'll drive new architectural requirements, and there'll be something else that you'll have to react to and respond to.

Yeah, exactly.

Good, well, I'll give you the last word, we've got to close, but on Edge: you've now come over from Tivoli and brought a software background into what is predominantly a hardware group, so that's good. The more software, the better, I always say. Software is eating the world.

Well, I just wanted to leave you with one thought that I didn't mention. I wanted to give my supply chain team a shout-out, and one of the key topics here at this conference is blockchain. And so I just wanted to say that we are a consumer of blockchain, because if you think of it, our supply chain team manages a very large supply chain for IBM; we have 17,000 suppliers. We're managing billions of dollars of business through the supply chain on behalf of IBM, and we are going to be a consumer of blockchain. And so what we've done is we're working with the Singapore customs agency in particular to really crack the code on customs documentation and customs documents as part of blockchain and shipping. Because if you think about customs documentation, or customs declaration documents, they have a variety of information, whether it's financial pricing, product information, or manufacturing logistics. This is just ripe for blockchain. And so what we want to be able to do is surface that as part of our supply chain, so that we are a customer of IBM's blockchain capabilities as well as IBM's supply chain organization.

And the philosophy there is to have a connection between partners, buyer and seller potentially in that example, without having to have some kind of third-party mediator involved, right?
Well, yeah, if you think about the shared-ledger definition of blockchain for financial services, the same can apply in this kind of environment, because customs declarations are part of a very complex ecosystem where a lot of the work is outsourced, a lot of it's manual, and there are a lot of issues in terms of meeting compliance. And so this not only fixes some of the problems that you see in financial services, but it frankly automates something which today is not even really that automated in many cases. So in the past at IBM, I had the fortunate opportunity to work intimately with a lot of ports on some of the things that we did. And I worked with ports like Cartagena and saw what they had to deal with in terms of being the last leg before the Panama Canal. And it is a really difficult task to get those ships loaded, to deal with your refrigeration units, to deal with all the customs documentation that you really have to execute as part of something like that. And so when I think about what we can do in being a test case for blockchain, for our supply chain, and working with someone like Singapore, which is one of the world's premier shipping locations, it's gonna be really interesting. Our first viable product, if you will, our first prototype, we think will go live in October, and we're gonna use that to ship our mainframe parts from Poughkeepsie to Singapore as part of our own manufacturing execution.

And what's gonna happen? It's gonna eliminate a lot of paper shuffling, or electronic paper shuffling?

Yes, and also, yeah, provenance, and also the interaction that we have responsibility for in terms of anything around import and export controls, and how we deal with managing this entire supply chain. I mean, a lot of this today is just inherently not as automated as it could be.
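The shared-ledger idea behind the customs and provenance use case can be sketched very simply. This is a toy illustration, not IBM's blockchain (which at the time was built on the Hyperledger work): each record embeds a hash of the previous one, so tampering with any earlier declaration invalidates everything after it, which is what makes the chain useful as an audit trail. The record fields below are hypothetical.

```python
import hashlib
import json

# Minimal hash-chained ledger sketch: each entry commits to the
# previous entry's hash, making retroactive edits detectable.

def add_record(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any mismatch means tampering."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

chain = []
add_record(chain, {"doc": "customs-declaration", "origin": "Poughkeepsie",
                   "dest": "Singapore", "contents": "mainframe parts"})
add_record(chain, {"doc": "export-control-clearance", "status": "approved"})

print(verify(chain))                     # True: untampered chain
chain[0]["record"]["dest"] = "Elsewhere" # alter an earlier declaration
print(verify(chain))                     # False: the chain no longer verifies
```

A real deployment adds distribution and consensus across the parties (shipper, customs, carrier), but the tamper-evidence property shown here is the core of why customs documents are "ripe for blockchain."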
Provenance, as you can imagine, in supply chains, and for hardware vendors especially, is really a top-of-mind topic as well, because if something should go wrong with a part, then we absolutely must know and be able to map that out as part of the supply chain: exactly when that problem occurred, over which period of time, and which customers were affected. And we have a lot of that data today through analytics, but then you think about providing the insight that we have in this broader context of customs. You know, you're participating with a lot of other players, but there are many applications within supply chain is what I'm saying. Customs declarations is just one; provenance is another. So I think supply chain will be right after financial services in terms of blockchain and what we can do.

Fascinating topic, blockchain. You know, for years it was like, oh, Bitcoin, currency, but there was so much more from a technology application perspective, and IBM's taking a lead there, and again, with a collaborative model, you guys are one of the first to really push this into the enterprise. So that's great, Jamie. That was a good last word.

Well, I almost forgot to mention what I wanted to share about my supply chain team, and they may crush me later. So I want to be on their good side.

And so, cool. So we'll give you the last last word on Edge. Thoughts on what you've seen so far?

Well, I think the first day has been great. I think there was a good discussion today about innovation and some of the aspects of innovation. Great examples about leading-edge research from genomics and from a lot of the healthcare industry folks that were here. We talked about many of the announcements, and I'm just looking forward to a productive week with the rest of the client meetings I've got.

Excellent. Well, thanks again. It's always a pleasure to see you.

Great to see you guys.

All right. Keep right there, buddy. Stu and I will be back with our next guest.
This is theCUBE, we're live from IBM Edge. We'll be right back.