Live from Las Vegas, it's theCUBE. Covering InterConnect 2017, brought to you by IBM.

Okay, welcome back everyone. We are live in Las Vegas for IBM InterConnect 2017. This is theCUBE's coverage of IBM's cloud and data show. I'm John Furrier with my co-host, Dave Vellante. Our next guest is Jamie Thomas, general manager of Systems Development and Strategy at IBM, and a CUBE alum. Great to see you, welcome back.

Thank you, great to see you guys as usual.

So, huge crowds here. This is, I think, the biggest show I've been to for IBM. It's got lines around the corner and a ton of traffic online, great event. It's the cloud show, but it's a little bit different. What's the twist here today at InterConnect?

Well, if you saw the keynote, I think we've definitely demonstrated that while we're focused on a differentiating experience on the cloud through cloud-native services, we're also interested in bridging existing clients' IT investments into that environment. So, supporting hybrid cloud scenarios, understanding how we can provide connective-fabric solutions, if you will, to enable clients to run mobile applications on the cloud and take advantage of the investments they've made in their existing transactional infrastructure over a period of time. The keynote really featured that combination of capabilities and what we're doing to bring those solution areas to clients and allow them to be productive.

And hybrid cloud is front and center, obviously IoT on the data side, you're seeing a lot of traction there, with AI and machine learning powering and lifting this up. It's a systems world now, and this is the area that you're in, because you have the component pieces, the composability of that. How are you guys facilitating the hybrid cloud journey for customers?
Because now it's not just "here it is"; I might have a little bit of this and a little bit of that, but we have this componentization, or composability, that app developers are consistent with, yet the enterprises want that workload flexibility. What do you guys do to facilitate that?

Well, we absolutely believe that infrastructure innovation is critical on this hybrid cloud journey, and we're really focused on three main areas when we think about that innovation: integration, security, and support of cognitive workloads. When we look at things like integration, we're focused on developers as key stakeholders. We have to support the open communities and frameworks that they're leveraging. We have to support APIs that allow them to tap into our infrastructure and those investments once again. And we also have to ensure that data and workloads can be flexibly moved around in the future, because that will allow better characteristics for developers in terms of how they're designing their applications as they move forward with this journey.

And the insider threat is a big thing too. Security is not only table stakes, it's a highly sensitive area.

It's a given, and as you said, it's not just about protecting from outside threats, it's about protecting from internal threats, even from those who may have privileged access to the systems. That's why, with our systems infrastructure, we have protection from the chip all the way through the levels of hardware into the software layer. You heard us talk about some of that today with the shipment of Secure Service Containers, which allow us to protect the system both at install time and run time and protect the applications and the data appropriately. On the systems that run blockchain, our high-security blockchain services, LinuxONE, we have the highest certification in the industry, EAL5+, and we're supporting FIPS 140-2 Level 4 cryptography.
So it's about protecting at all layers of the system, because our perspective is that there's no traditional barrier anymore; data is the new perimeter of security. You've got to protect the data at rest, in motion, and across the life cycle of the data.

Let's go back to integration for a second. Give us an example of some of the integrations that you're doing that are high profile.

Well, one of the key integrations is that a lot of clients are creating new mobile applications that tap back into the transactions that reside in the mainframe environment. So we've invested in z/OS Connect, an API set of capabilities that allows clients to do that, and it's very prevalent in many different industries, whether it's retail banking or the retail sector. We have a lot of examples of that. It's allowing them to create new services as well, so it's not just about extending the system but being able to create entirely new solutions; credit card services are a good example of what some organizations are doing. And it allows for developer productivity.

And then on the security side, where does encryption fit? You mentioned you're doing some stuff at the chip level, end-to-end encryption.

Yeah, it really is at all levels, right? From the chip level through the firmware levels. We've also added encryption capability to ensure that data is encrypted at rest as well as in motion. And we've done that in a way that encrypts the data sets that are heavily used in the mainframe environment, as an example, without impinging on developer productivity. So that's another key aspect of how we look at this: how can we provide this data protection but, once again, not slow down the velocity of the developers? Because if we slow down the velocity of the developers, that will be an inhibitor to achieving the end goal.

How important is the ecosystem on that point?
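The mobile-to-mainframe pattern described here, a REST call into transactions exposed through z/OS Connect, can be sketched roughly as follows. This is a generic illustration, not actual z/OS Connect documentation: the host name, API path, token, and account identifier are all hypothetical.

```python
import urllib.request

# Hypothetical z/OS Connect endpoint exposing a mainframe account-inquiry
# transaction as a JSON REST API. Host, port, and path are illustrative.
ZCONNECT_BASE = "https://zconnect.example.com:9443"

def build_account_request(account_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a REST request against the mainframe-backed API."""
    url = f"{ZCONNECT_BASE}/zosConnect/apis/accounts/{account_id}"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Bearer {token}")  # token scheme is assumed
    req.add_header("Accept", "application/json")
    return req

req = build_account_request("12345678", "demo-token")
print(req.full_url)
# A mobile back end would send this with urllib.request.urlopen(req)
# and get JSON back from the transaction running on the mainframe.
```

The point of the pattern is that the mobile developer only sees an ordinary JSON API; the transactional work still runs where the data lives.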
Because you have security, again, end to end; you guys are fully protecting the data as it moves around, so it's not just in storage, it's everywhere, moving around in flight, as they say. But now you've got ecosystem parties, because you've got the API economy, you're dealing with no perimeter, and you also have relationships with technology partners.

Yes, well, the ecosystem is really important. If we think about it from a developer perspective, obviously supporting these open frameworks is critical, right? So supporting Linux and Docker and Spark and all those things. But also, to be able to innovate at the right pace, particularly for things like cognitive workloads, that's why we created the OpenPOWER Foundation. We have more than 300 partners that we're able to innovate with, which allows us to create the solutions that we think we'll need for these cognitive workloads.

What is a cognitive workload?

A cognitive workload is what I would call an extremely data-hungry workload. The example we can all think of is that when we experience the world around us, we're expecting services to be brought to us, right? That the digital economy understands our desires and wants and reacts immediately. That expectation is driving this growth in artificial intelligence, machine learning, and deep learning type algorithms. Depending on what industry you're in, they take on a different persona. But there are so many different problems that can be solved by this, whether it's "I need to have more insight into the retail offers I provide to an end consumer" or "I need to be able to do fraud analytics because I'm in the financial services industry." There are so many examples of these cognitive applications. And the key factors are a tremendous amount of data and a constrained amount of time to get business insight back to someone.
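As a toy stand-in for the fraud-analytics case mentioned here, this is a minimal sketch of the shape of the problem, lots of data in, a fast yes/no insight out, using a simple z-score outlier check. The transaction amounts and threshold are made up; real cognitive workloads use far richer models and far more data.

```python
import statistics

def flag_outliers(amounts, threshold=3.0):
    """Flag transaction amounts whose z-score exceeds the threshold.

    Illustrative only: a stand-in for the 'fraud analytics' pattern of
    turning a stream of transactions into a quick anomaly signal.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # no variation, nothing stands out
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Seven ordinary card transactions and one suspicious spike.
history = [42.0, 38.5, 41.2, 40.0, 39.9, 43.1, 40.7, 2500.0]
print(flag_outliers(history, threshold=2.0))  # the 2500.0 spike is flagged
```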
When you do these integrations and make the security investments that you're making, how do you balance the resource allocation between, say, IBM platforms, mainframe, Power, and their operating systems, and Linux, for example, which is such a mainstay of what you guys are doing? Are you doing those integrations on the open side as well, with Linux, and going deep into the core? Or is it mostly focused on IBM-owned technology?

It really depends on what problem we're trying to solve. For instance, if we're trying to solve a problem where we're marrying data insight with a transaction, we're going to implement a lot of that capability on z/OS, because we want to make sure that we're reducing data latency in how we execute the processing, if you will. If we're looking at things like new workloads, the evolution of new workloads, and new things that are being created, those are more naturally fit for purpose on Linux. So we have to use judgment. A lot of the new programming, the new applications, are naturally going to be done on a Linux platform, because once again, that's the platform of choice for the developer community. So we have to think about whether we're trying to leverage existing transactions with speed or whether we're allowing developers to create new assets. That's a key factor in what we look at.

Jamie, your role is somewhat unique inside IBM. I mean, the title is GM of Systems Development and Strategy. So what's your scope specifically?

I'm responsible for the systems development involved in our processors, mainframes, Power systems, and storage. And of course, as a strategy person for a unit like that, I have responsibility for thinking about these hybrid scenarios. What do we need to do to make our clients successful on this journey? How do we take advantage of the tremendous investments they've made with us over the years?
We have a strong responsibility for those investments and for making sure the clients get value, and then also for understanding where they need to go in the future and evolving our architecture and our strategic decisions along those lines.

So you influence development in a big way. Obviously, it's a lot of roadmap work, a lot of working with clients to figure out requirements.

Well, I have client support too, so I have to make sure things run.

What about quantum computing? This has been a big topic. What does the roadmap look like? What does the evolution of that look like? Talk about that initiative.

Well, if I gave you the full roadmap, they'd take me off the stage with a hook.

We had to try; you're too good for that. Yeah, we almost got it from you.

But we did announce the industry's first commercial universal quantum computing project a few weeks ago; it's called IBM Q. So we had some clever branding help, because Q makes me think of the character in the James Bond movies who was always involved in the latest R&D activity. And it really is the culmination of decades of research between IBM researchers and researchers around the world to create a system that hopefully can solve problems that are unsolvable today with classical computers, problems in areas like materials science and chemistry. Last year, we announced the Quantum Experience, which is online access to quantum capabilities in our Yorktown research laboratory. Over the last year, we've had more than 40,000 users access this capability, and they've executed a tremendous number of experiments. So we've learned from that, and now we're on the next leg of the journey. And we see a world where IBM Q could work together with our classical computers to solve really, really tough problems.

So it's pretty exciting. And that kind of computing could drive a lot of the IoT, whether that's healthcare or industrial and everything in between.
Well, we're in the early stages of quantum, to be fair, but there are a lot of unique problems that we believe it will solve. We do not believe that everything, of course, will move from classical to quantum. There will be a combination and evolution of the capabilities working together. But it's a very different system, and it will have unique properties that allow us to do things differently.

So what are the basics? Why quantum computing? I presume it's performance, scale, cost, but it's not traditional binary computing. Is that right?

It's very different, in fact.

Oh, we just got the two-minute sign, all right.

It's a very different computing model. It's a very different physical computing model, right? It's built on a unit called a qubit. And the interesting thing about a qubit is that it can be both a zero and a one at the same time, so it kind of twists our minds a little bit. But because of those properties, it can solve very unique problems. We're at the early part of the journey, though. This year our goal is to work with some organizations and learn from the commercialization of some of the first systems, which will run in a cloud-hosted model, and then we'll go from there. But it's very promising.

And the timeframe for commercial systems, have you guys released that?

This year we'll start the commercial journey, but within the next few years we plan to have a quantum computer that would basically outstrip the power of the largest supercomputers we have today in the industry. Over the next few years we'll be evolving to that level, because eventually that's the goal, right? To solve the problems that we can't solve with today's classical computers.

Talk real quickly, in the last couple of minutes, about blockchain and where that's going, because you have a lot of banks and financial institutions looking at this, and it's part of the messaging and the announcements here.
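The "both a zero and a one at the same time" property can be illustrated with a tiny single-qubit state-vector simulation: the qubit is a pair of amplitudes, a Hadamard gate puts it in an equal superposition, and measurement probabilities are the squared magnitudes of those amplitudes. This is a generic textbook sketch, not IBM Q code.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (amplitude of |0>, |1>)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Measurement probabilities for |0> and |1>: squared amplitude magnitudes."""
    return tuple(abs(a) ** 2 for a in state)

state = hadamard((1.0, 0.0))   # start in |0>, apply H
print(probabilities(state))    # ~ (0.5, 0.5): equal chance of 0 or 1
```

Until measured, the state genuinely carries both amplitudes at once, which is what lets quantum algorithms explore many possibilities in parallel; applying the Hadamard gate a second time returns the qubit to a definite 0.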
Well, blockchain is one of those workloads, of course, that we're optimizing with a lot of that security work I talked about earlier. The target of our high-security blockchain services is LinuxONE, and it's driving a lot of our encryption strategy. This week, in fact, we've seen a number of examples of blockchain. One was talked about this morning, which was around diamond provenance from the Everledger organization, a very clever implementation of blockchain. We've had a number of financial institutions that are using blockchain. And I also showed an interesting example today, Plastic Bank, which is an organization that's using blockchain to improve our planet, if you will, by allowing communities to exchange recyclable plastic for currency. So it's really about enabling plastic to be turned into currency through the use of blockchain, a very novel example of a foundational technology improving the environment and allowing communities to take advantage of that.

Jamie, thanks for stopping by theCUBE. Really appreciate you giving the update and insight into quantum, the IBM Q project, and all the hard work going into hybrid cloud. The security obviously is super important. Thanks for sharing.

It's good to see you.

Okay, we're live here in Mandalay Bay for IBM InterConnect 2017. Stay with us for more live coverage after this short break.
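The property that makes blockchain a fit for provenance cases like the Everledger example above is tamper evidence: each block commits to the previous block's hash, so altering any earlier record breaks every later link. A minimal sketch of that chaining follows; the field names and diamond records are illustrative, not any IBM blockchain API.

```python
import hashlib
import json

def block_hash(record, prev):
    """Deterministic hash over a block's record and its link to the prior block."""
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(record, prev_hash):
    return {"record": record, "prev": prev_hash,
            "hash": block_hash(record, prev_hash)}

def verify(chain):
    """Recompute every hash and check every back-link; False if anything changed."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["record"], block["prev"]):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Toy provenance trail for one stone, from mine to cutter.
genesis = make_block({"event": "mined", "stone": "D-001"}, prev_hash="0" * 64)
cut = make_block({"event": "cut", "stone": "D-001"}, prev_hash=genesis["hash"])
chain = [genesis, cut]
print(verify(chain))  # True: the trail is intact
```

Editing any earlier record, say rewriting the "mined" event, makes `verify` return False, which is exactly the auditability that provenance applications rely on.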