We're back here at EMC World. I'm John Furrier, the founder of SiliconANGLE.com. This is theCUBE, our flagship program, where we go out to the events and extract the signal from the noise. We talk to anyone we can find who's smart, who's got the signal, and that person is Andy Brown, the CTO of UBS. Welcome to theCUBE.

Thank you very much.

It's great to get folks who are actually deploying technology rather than the vendors selling it. We've had EMC guys on earlier, and we've had the CEO of Service Mesh coming in, a company we like a lot, obviously pioneering an area that, for those of us who've been around the block a decade or so, isn't new. Web services have been kicked around for a long time, and SOA was all happening in theory, with some bumps in the road. But now, with cloud, all of this is part of the requirements: services, service-led infrastructure, APIs. And there's still demand for on-prem. I'm sure you have a lot of different requirements in your infrastructure. So tell us a little bit about the environment you're involved in at UBS.

Sure. Like many banks, we're still running on a lot of internal infrastructure. We definitely have hybrid environments, so we have a fair amount of what might be called legacy in other sections of the technology community. We still run a fairly large mainframe environment and a fairly large mid-range environment. There's certainly a lot of x86 that isn't virtualized, typically in trading systems and so on. Then, moving into our virtualized environments, which are generally pod-based, we've pretty much standardized those across the board, and now we're just about starting on private cloud implementations, again all behind the firewall. And virtual private cloud, I think, is interesting to us.
It would have to be single tenant for us to really be interested, and we clearly would never put anything like client data into data centers that we didn't control.

Yeah, so you look at the cloud as: we see it coming, we want that, but there's no way we're putting our data in there.

I think the problem with the word cloud is that when people use it, they're often overloading it. So there's internal private cloud, and there's external private cloud where it's truly single tenant, where you own the data, you own access to the data, you own the entitlements. In that case, I don't see it as anything more than a version upgrade, if you like, from standard virtualization. Where you're talking about public cloud, then sure, we're not big implementers of public cloud at all, and we're unlikely to be.

Yeah. What I love about banks and insurance companies is that their IT departments have to be cutting edge, because there's a lot on the line, right?

Sure.

You know, transactions on the banking side, the insurance side; data's everything.

Oh, but this is technology.

Technology, and there's huge R&D going on. So you don't skimp on budget; you're kicking the tires on a lot of things. So I've got to ask you honestly: the future of cloud is here, we want to get there, but it's in the context of a couple of different prisms, right? One, is it economically viable as a utility? Two, is it technically possible? And three, does it support the applications that run my business?

Sure.

So can you talk about your view as you look at the cloud and this transformation journey that everyone's undertaking? You might be a little ahead of the curve on that. How has the market shifted to the application-centric view it has now in IT, and how is that impacting the IT environment?
Maybe I'll start off by talking about what's happened in our industry since the credit crisis, because that puts a set of constraints and policy requirements on the way we do business, and then we can talk about what impact that has on the way applications are developed, and on who gets to run or maintain them. Obviously, since 2008 there's been a significant change in the regulatory pressure on companies like ours, specifically legislation and regulatory changes on a per-jurisdiction basis that we need to be accountable to. What we see, though, is a lot of commonality in the sets of requirements we need to align with, and in the interpretations of those requirements. That allows us to create frameworks, if you like, around risk, security, data protection, or data encryption, and mapping those onto industry standards like NIST, for instance, allows us to think about data protection in a fairly consistent way regardless of the rules in any given jurisdiction. So we've deliberately created a data protection strategy that's aligned to the type of data we're protecting, which then gets aligned to the requirements of any given jurisdiction we're operating in. It's a kind of framework, if you like, that overlays the way we build and deploy applications, and so on. In most countries we have separate legal entities that are accountable to that jurisdictional authority for their implementations. That might mean we have, in some cases, 40 or 50 different implementations of the same app to support different jurisdictions, using the exact same functionality, but where, for example, the application data has to stay within the boundary of that country.
So the policy management around our apps is extremely important, and the interpretation of policy from the regulators all the way into our infrastructure is extremely important to us as well. The only way you can really deal with that complexity is to template the use cases and make sure the platforms you're deploying have not only the ability to define policy based on templates, but also the ability to adapt a template for use in a location where there might be a slightly different set of requirements than the core template, for instance.

Yeah. All this conversation makes me think of continuous operations improvement, kind of like manufacturing operations: complex, a lot of requirements, and you can't really make a mistake, because there are legal implications and it could impact the business. So I've got to ask you about automation. Obviously automation is key, and the software to power it is important. How do you roll out these templates? I mean, do you just take the servers down? You want nondisruptive operations.

Sure.

You have to plan for it. How do you balance the people side of the equation versus the automation side? You want to automate things.

Sure.

So how do you look at that?

Well, we're in the early stages right now of deploying something we call Dev as a Service. What Dev as a Service does for us is allow us to encapsulate those policy requirements in the instantiation of the environment that a particular application is being developed in, or potentially even just being supported in. So you need technology that supports that, and you need a very crisp way of turning what is often legalese into technology requirements.
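The core-template-plus-local-adaptation idea described here can be sketched roughly as follows. To be clear, this is a minimal illustration, not UBS's actual tooling: the field names, jurisdictions, and override values are all hypothetical.

```python
# Hypothetical sketch: a core policy template adapted per jurisdiction.
# Field names, jurisdiction codes, and values are illustrative only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Policy:
    encryption: str        # data-at-rest encryption standard
    data_residency: str    # where application data may live
    retention_years: int   # record-retention requirement

# Core template: the common interpretation of the regulatory requirements.
CORE = Policy(encryption="AES-256", data_residency="in-country", retention_years=7)

# Per-jurisdiction adaptations where local rules differ from the core template.
OVERRIDES = {
    "CH": {"retention_years": 10},
    "SG": {"encryption": "AES-256-GCM"},
}

def policy_for(jurisdiction: str) -> Policy:
    """Adapt the core template for one jurisdiction's requirements."""
    return replace(CORE, **OVERRIDES.get(jurisdiction, {}))

print(policy_for("CH"))  # core template with a Swiss retention override applied
```

The point of the pattern is that each jurisdiction's policy stays term-for-term traceable back to the core template, with only the documented deltas differing per location.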
So we have people who do that, mostly, and then we use platforms that allow us to create policy centrally and automate the deployment of that policy into the different jurisdictions, usually into virtual environments, but not always. It could just as easily be a bare-metal provisioning event as a virtual provisioning event. In either case it doesn't matter; you still need the same policy deployed to support that requirement.

Talk about storage for a second here. Software-defined storage is a big discussion. I know you probably have every storage vendor; I know you have NetApp and EMC and others. A lot of multi-vendor, I mean, all the big banks are multi-vendor. So how do you run storage? How does multi-vendor support translate into today's world of open, software-driven deployments? Because in the old days it was: I support this box, I'm going to route the traffic this way, I've got the control plane here. But now you've got data. So how does multi-vendor shift to open, fully automated systems?

I actually think it's interesting that people see it as a new problem. Think all the way back to the mainframe era, where IBM had IBM spindles; along came Amdahl, along came EMC, along came HDS, and all of a sudden the mainframe was a multi-vendor environment. Think back to the DEC era, or any of the mini-computer vendors who had storage to themselves for a while; then EMC penetrated their market with storage platforms, and later others did the same thing. I think we're seeing the same thing happening in cloud right now, except there are multiple disruption vectors happening at the same time. So first of all, the incumbents: we already have storage that runs on the mainframe, we have NetApp, we have EMC, we have most of these vendors in our environment today. And we see flash driving a completely different level of innovation than we've seen for decades, honestly.
And we're tracking over 120 startups in this flash, or flash plus spinning disk, storage space. All of that speaks, I think, to the need for very clear tiering: what services the storage is offering, at what service level, for which tier of the architecture, and at what price. And the ability to move data as it ages from more expensive, higher-performing, more resilient storage to something more like a trickle archive now has massive cost implications. If you look at something like, let's say, a Nirvanix or an Amazon Glacier, and you compare that to running in-house versions of, let's say, a trickle archive, you can see huge differences in cost. I mean, there's an order of magnitude difference in cost.

Yeah, it's a major transformation. Take flash as a great example. Storage tiering is one conversation, but you've also got memory tiering, and that affects software development, right?

So I think memory is now just a storage medium that happens to be very fast and connected directly to the CPU. If you start at the CPU cache and work backwards, and you look at architectures like Google's, where you can treat that almost as a multi-tier storage system, then you start to see things like information lifecycle management happening almost in real time between different tiers of memory, into flash, and then ultimately into spinning media or even cloud-based storage.

So we just had Eric Pulier, who's the co-founder and CEO of Service Mesh, and his big thing is validating this. For them to be so successful as a startup (well, they're a growing company now, but I look at them as a startup compared to the big whales): they're application-centric, they look at it as the data center's operating system, and they're doing a lot of automation management. What do you think of their vision?
Well, the thing that's really interesting to me is this: as one of the very early adopters of VMware in a prior life, and one of the most successful implementers of VMware in a prior life, VMware was always sold into the infrastructure organization. It was used as a way of automating the provisioning of infrastructure, and it allowed us to make big savings in the data center around power, cooling, and space. So it was clear that the client for VMware was the infrastructure team. Where we've moved now with cloud-based services is that the application development community has become the client. Service Mesh is an example of the kind of platform we now see being vended directly into the development community, acting, if you like, as an abstraction from the infrastructure service-providing layer, which is what you'd expect with literally hundreds of vendors now in the private cloud space. So the layer the abstraction works at used to be the hardware abstraction layer, and now it's much more about design patterns: cloud abstraction, if you like, versus CPU or hardware abstraction.

Yeah, that's awesome. I'd rather do an hour on theCUBE with you, it's so good. Well, I've got to ask you: obviously virtualization and now flash are two disruptive enablers. And we're already seeing the trend here in the storage business toward software-defined storage, where the control plane and the data plane are separating. We're going to have, you know, Paul Maritz on; he's probably going to talk about the data fabric and Pivotal, et cetera. And we're going to have Pat Gelsinger around, and we're going to ask them some pointed questions. But I want to ask you, as someone who's in touch with a lot of this stuff.
Okay, with virtualization evolving to the point where it's no longer about the hypervisor, and with flash moving closer to the server, that opens up the big data market as well as software-led infrastructure. What do those two things do for the storage guys? How are those two technologies changing storage vendors like EMC, NetApp, and IBM?

Well, I think virtualization almost made compute free, and I think what's happening with flash is that you're going to see memory almost being made free. In fact, that whole architecture we just discussed, from L2 cache all the way down to storage, is becoming really, really cheap. And the ability to run big data alongside transaction processing, potentially even on the same infrastructure, or on infrastructure that replicates in real time from the transaction engine into the reporting or query engine, is now a reality. There's no reason why you can't do that today; we're seeing design patterns like that running in production. The flash piece of it is there for performance reasons inside the OLTP, or transaction processing, side of the platform. But it's equally important on the read side of the platform, in ensuring that the data being read most frequently is on the hottest, fastest storage.

We were talking earlier about flash with Dave Vellante, who's on another live feed inside the hall here. We also talked, a couple of years ago at the Hadoop Summit, when big data was starting to hit its stride, about how the future's unwritten and the most creative, radical breakthroughs haven't even been thought up yet. Meaning creativity is the only bottleneck at this point, the only barrier to these new opportunities. So I've got to ask you, and we just had that same conversation earlier about flash.
So the question is: given your experience, and knowing what you see around the corner, what could you point to as some of the most radical things that might happen? It doesn't have to come true; it's a guess. You get to peek around the corner: you're a big bank with a big budget, you're seeing the trends, the infrastructure is there, virtualization is hitting its stride, flash is rising up the memory tier and changing software infrastructures. What radical things are coming fast?

Well, honestly, I think we've had three decades where the technology part of information technology has led in terms of innovation. But we're now in an era where, whether you look at RNA and DNA research and the human genome, or at the massive amounts of data being made available for marketing, real-time advertising, and so on, all of the interesting, truly disruptive breakthrough use cases this decade are going to be oriented around data of some kind or another. What I've seen so far around data, and big data in particular, is that vertical implementations from service providers or new vendors who actually understand a particular kind of data really well, say personal credit data like Experian or Acxiom, or wealth data, for example, are much more likely to be successful than people selling the raw supply of big data into the enterprise. And I think one of the reasons for that is that a lot of end users or clients haven't yet got their heads around what they would use big data to solve, i.e., what question they would ask of a big data infrastructure.
And that's where I think these early-stage providers in insurance, in finance, and in other areas come in: they're essentially encapsulating a piece of financial services, or a piece of insurance services, that you wouldn't think of asking questions about unless you could actually touch the data. Then all of a sudden data scientists start looking at it and go: wow, if we overlay that with our data, we can do something really interesting here that no one's ever seen before.

So going deeper and doing new things.

Going deeper, doing new things, and having enough context on the vertical to understand where the opportunity lies is where I see traction being gained right now.

Andy Brown of UBS, thanks for coming on theCUBE. Great conversation, and thank you for your candor. This is theCUBE; we're extracting the signal from the noise. You're hearing it right from a large practitioner on the cutting edge, the bleeding edge. Well, not so bleeding, because you're in the financial markets, so you have to make those transactions reliable and compliant. Thanks for coming on theCUBE. This is theCUBE's exclusive three-day coverage of EMC World, SiliconANGLE's flagship program. We'll be right back with our next guest after this short break.