Live, from San Francisco, it's theCUBE, covering Informatica World 2016. Brought to you by Informatica. Now, here are your hosts, John Furrier and Peter Burris. Hey, welcome back everyone. We are here live in San Francisco for Informatica World 2016. This is SiliconANGLE Media's theCUBE. This is our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, co-CEO of SiliconANGLE Media, with my co-host, Peter Burris, head of research at SiliconANGLE Media and GM of our Wikibon Research Group. Our next guest is Bill Burns, Chief Information Security Officer and interim CIO at Informatica. Welcome to theCUBE. Thank you, welcome to Informatica World. Great to have you on. Great to get the insight. Love to dig into security, because we could do an hour segment just on that one piece. Great, me too. So you've worked at Netflix. I go back and see you were at Netscape, back in the day. Variety of things; you host your own blog and podcast. So this is kind of a live podcast, if you will. And now, obviously, you're an advisor to Skyhigh Networks, a security company. Yep, yep. You've seen a lot of stuff. You've seen the evolution on-prem, you've seen the cloud, and obviously Netflix was a pioneer on Amazon. We all know that story on theCUBE. What's going on today? What are you working on at Informatica? What is your role, and how does that relate to the current trend in security, which is that everything's now in the open? Right, right. Well, I mean, data is a board-level issue. Data security is also a board-level issue. And so I was brought in about two years ago to help Informatica transition from just products, trustworthy products, to products and cloud services. So we're taking some of our capabilities and products and putting them in the cloud.
And the conversation then changes. When a company wants to buy a product, they have to figure out all the trust and the security and the compliance. Now what they're going to do is say, Informatica, we're going to send our data to you. So the way that you measure trust and compliance is different. There are different signals that a company like Informatica has to send now to a customer to say, this is how that customer can trust Informatica with their data. So my team's responsibility is to help the legal team, the compliance teams, and the product teams understand what we need to build, and then what we need to communicate to our customers. We heard from Informatica execs. We had all the top execs on; board member Bruce was on earlier. And I asked him about Informatica, and he's also on the Oracle board, so he's got visibility into the big dogs, mainly Oracle. And I said, what kind of company is Informatica? And he said, the Switzerland approach is the key. And you mentioned trust. How do you look at that? Because you do have to be like that Swiss bank account. You've got to be open, with access for everybody, access to all clouds, all networks, all security-like paradigms, and at the same time not blow the trust equation. Yep, exactly. And there used to be, back in the day, security through obscurity, which meant, basically, if I make it really complicated and I just don't talk about it, hopefully the hackers won't find out. Hopefully the bad guys won't exploit the weaknesses in my network. And as you're seeing companies move to open APIs and be right on the internet with their cloud services or their platform, you now have to send a different signal of trust to your customers. So you can't just be, I hope they don't find out, or I hope they don't ask the hard questions. We have to build HIPAA compliance, and we have to build SOC 2 policies and procedures, into our products so that a customer can trust us.
Because if we send a product to a customer, it goes inside their firewall, and it's their responsibility. There's really little for us to do to protect the company's data when it's in their own data center. But the model flips when the customer says, I'm going to send my data over the internet, and gosh, I'm really hoping that it's trustworthy, handled, processed, sent back to me, shared with other customers or other partners. That completely changes the responsibility to a shared responsibility now, between what they do, what we do, and what our service providers do. So it completely changes the model, and you can't depend on back-room conversations and hope that it doesn't get exposed. You have to be very upfront with, look, we're SOC 2 compliant, we're HIPAA compliant, and we'll reach more as we hit other industry verticals. But I want my customers to ask me, I want them to probe and to ask the hard questions and look for evidence. That makes us more trustworthy. I think it's one of the things that software companies that are moving to the cloud don't talk enough about. Historically, software intellectual property has been covered by copyright law, which makes it a speech question, which means software companies are typically not culpable for what the client or the customer does with their products. When you're starting to provide a service, you're absolutely right, it comes under a completely different legal regime regarding who's responsible, culpability, shared culpability, the impacts of contracting, et cetera, and the legal team absolutely has to be involved. So as you go forward, the relationship between data and culpability, data and risk, is really important. Is that itself an opening for Informatica to talk about your services? You know how to provide data about the services you're providing, because you are the data company, as a way of establishing that new compact that leads to more trust. Yep, exactly.
And the conversation is very much about data, and not just, you can trust me because I've said so, but what is the data to prove that? So we use Informatica products internally to help us understand how trustworthy a system is. Is that system deviating from other systems that should be identical, and how can we measure that? In the old days, it used to be enough to do an annual audit or a quarterly audit. The trend now is continuous audits. And we have products that will continuously help you audit your internal on-premises software. But we, as now a cloud service provider, have to do the same thing. We have to continuously measure, monitor, and have data to back up our supposition that the system is safe and secure. Now we have to prove to ourselves, with data, that yes, it really is. And you have to provide the API that allows that information to be passed back and forth, for both recording and reporting as well as control purposes. Is that also one of the things that, as you start to roll this suite of cloud-based products out, is allowing you to accrete more data? Are companies actually saying, their security is actually superior to what I can do; why don't we move more of the data to them? Right, I think that's part of the promise of moving to a cloud ecosystem: a company should be able to focus on their core competency. Whether that's mapping, or sending out letters with precision with the addresses completely verified, whatever your core competency is, you should focus on that. You should let the systems, the cloud services, and your partners worry about the security and the compliance of your data, or the privacy, that's another big issue. So we as a cloud service provider have to have that as one of our core tenets. We have to bake that into our culture. We have to establish a culture of security and privacy by design.
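The continuous-audit idea described above — measuring whether systems that should be identical are drifting apart — can be sketched in a few lines. This is purely an illustrative example, not an Informatica product or API; the baseline settings and fleet data are invented for the demo.

```python
# Illustrative sketch of continuous configuration-drift auditing:
# compare each system's observed settings against a known-good baseline
# and report every deviation, rather than waiting for an annual audit.
BASELINE = {"tls_min_version": "1.2", "disk_encryption": "on", "audit_logging": "on"}

def audit(system_name, observed):
    """Return a list of (setting, expected, actual) deviations from the baseline."""
    return [(key, expected, observed.get(key))
            for key, expected in BASELINE.items()
            if observed.get(key) != expected]

# Hypothetical fleet snapshot, as might arrive from a monitoring agent.
fleet = {
    "web-01": {"tls_min_version": "1.2", "disk_encryption": "on", "audit_logging": "on"},
    "web-02": {"tls_min_version": "1.0", "disk_encryption": "on", "audit_logging": "off"},
}

for name, observed in fleet.items():
    for setting, expected, actual in audit(name, observed):
        print(f"{name}: {setting} expected {expected!r}, found {actual!r}")
```

In practice a loop like this would run on a schedule against live telemetry, and the deviations would feed the dashboards and evidence trail the conversation describes.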
And then a customer who has an unrelated core competency can rely on us to handle all of that for them. The CISO has historically been a support role. Right. Your job is almost a product role. Right, yes. I'm selling the product of security, of trust. So how do you think that, in a generalized way, plays out for your peers in the CISO community as they move away from, I'm the one taking care of security at this level, to, I'm part of the effort to establish a quality brand? Because digital becomes a key piece of the brand, and the trust of the brand is increasingly a function of the trust in the digital assets. Yep, absolutely. It becomes a competitive differentiator. If a company like Informatica can build on-prem products that are trustworthy and have a good reputation, and then move into the cloud services space and, through certifications and other tests, prove that we are trustworthy there too, that changes it from a very tactical, back-office conversation to a competitive, differentiating conversation that says, here's how we're leading the charge. And I have other CISO peers that are also in charge of security for customer-facing, cloud-facing products and services. And so we're constantly comparing notes on what works and what doesn't work. Do you allow a customer to see all of your internal audit report details, or do you go to a third party who does something like a SOC 2, and let an interested, trustworthy third party make those assertions? There are economies of scale either way. Dave Vellante, my co-host and also co-CEO of SiliconANGLE Media, is running a CXO CrowdChat, and he's specifically taking the thesis of trying to tease out what that board-level conversation is with respect to security, and how folks within organizations go to the board. What's the orientation? What's the point of view? What are some of the conversations?
So, as if he were here to ask, I went to his CrowdChat and I want to get your thoughts. One: probably yes, it's a board-level conversation, but what is the orientation and point of view in that conversation? And the second question is, do organizations understand the value of their data? So, first part, the board conversations. What's interesting is that a couple of years ago there was a change in tenor. The National Association of Corporate Directors, NACD, it's a non-profit, I believe it's a non-profit organization, published research. They provide guidance to boards of directors, and one of the things they brought up was cyber risk and how boards need to worry about this in a more focused manner. They came out with some guidance, and it was free guidance, you didn't have to be a member, and it talked specifically about risk metrics, equations or dashboards, and questions for boards to ask their CIO, their CEO, their CISO. And it overnight changed the conversation from a CISO saying, gosh, I really wish I could get time with my board, I wish I could get a seat at the table, to, oh no, I've been called to the table, and they have very specific questions, and they have great sample dashboards and metrics saying, tell us that we're doing at least as well as our peers. So a couple of years ago that changed dramatically, and now you see the increase in breaches, you see privacy being front and center with something like the GDPR from the EU. You now see security and privacy becoming a board issue. Data, as you said, is a board issue; the more the data moves and the more you use it, the riskier it becomes. So now, as companies try to become more interconnected with their partners and their data, they're saying, but what about the security and privacy?
So now that's where the CISO comes into the conversation, and they say, here are the types of things that we're monitoring, both the regulatory, sort of the mandatory stuff, but if you do security well, you get compliance sort of for free. So if you build solid security controls in place, you can check the box for all the audit controls, because you're doing things you should have been doing anyhow. So now that gives you comfort when you go talk to the board and say, here's what we're doing, here's the risk we're managing. Oh, by the way, of course we're compliant, because we're iterating on it. So for folks that want to see the data you just talked about, where's the reference to it? Is it published? For the NACD? Yeah, the NACD is a good one. So if you do a Google or Bing search for NACD cyber risk, they have a couple of white papers on cyber risk, they have a bunch of other things obviously, but they also talk about risk management and breach response. They have a really great set of free materials that I tell my security folks, like, watch out, this is coming. Yeah, yeah, it's a pre-study for the meeting. Exactly. So Salesforce is a great partner of ours, and I think they do an exemplary job of being very transparent. They have a trust website. It's not a security website, not a compliance website, it's a trust website, because ultimately that's the business issue. On their trust website you can pull up their security certifications. Amazon, great partner, they also have a trust website. We're building that out as well. So we have a trust website that will have more of those indicators, and obviously we want to have a conversation with the partners. Okay, so the second part of the question was, do organizations understand the value of their data? Right. If I may, is there a valuation mechanism, or are we still in the Wild West?
Yeah, I would say we're in the very early phases of that right now. I think you can derive value as, what was it, Metcalfe, who was talking about the network effect: the more connections you have to a system, the more valuable it becomes. I think there's the same corollary for data. So if you have a set of open APIs, for instance, and you're connecting partners to those data streams, the data becomes more valuable. You know, multiple devices per person is telling you that data is more interesting, both to the person and to all of the companies that power that information. Just observationally, do you think, on a percentage basis, order of magnitude, what percentage of organizations value the data? Truly value it, like really look at it. I would bet 30%. I would say intrinsically, more and more of them know that it's worth something, but they don't really have a way to ascertain what that worth is. But you know, in 2008, the number of devices exceeded the number of people on the internet. And by 2020, the number of devices is going to double again. So the amount of data that's going to be generated is going to be huge, and I think it's going to be monetized much more than it is today. On the flip side, the attackers also find that extremely valuable. So now we're in this ever-escalating arms race where it's monetizable by both the good guys and the bad guys. So you've been involved in a lot of innovation, companies that go back to Netscape, for instance, Netflix, and companies you're involved with in an advisory role, certainly, and Informatica now, under the new leadership and after going private, has got its groove going on. We're just at the beginning; we think it's going to be a nice run. What is the challenge of innovation? From an innovation strategy standpoint, data will enable more things to happen.
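The Metcalfe-style argument above — more connections to a data stream means more value — has a simple quantitative core: with n connected parties, the number of possible pairwise links grows as n(n-1)/2, i.e. roughly quadratically. A tiny illustration (the numbers are the pure combinatorial count, not any real valuation model):

```python
# Toy illustration of the network effect: the count of possible pairwise
# data-sharing links among n connected parties is n choose 2 = n*(n-1)/2,
# so it grows quadratically rather than linearly with n.
def pairwise_links(n: int) -> int:
    """Number of distinct pairs among n parties."""
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(f"{n:>3} parties -> {pairwise_links(n)} possible links")
```

Going from 10 to 100 parties multiplies participants by 10 but possible links by more than 100, which is the intuition behind "the more connections, the more valuable the data."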
IoT clearly is, you can see that, you know, a long path; there's going to be a lot of action there. Security inherently is, oh, hold on, compliance and all-new regulations coming around the corner. I'm sure there'll be some sort of IoT regulation. Who knows? There'll be something; some compliance things are going to come down the pipe. The challenge is that this can thwart innovation. That's what people like us worry about. It's like, okay, we get that trust is huge, but also innovation is now front and center. What do you see there? What are your thoughts? How do you see this playing out? And how would you advise, or what would you share with, folks watching who want to get the innovation but at the same time not get fired, or lose their job, or just screw up the basic stuff? Right. The way that we've designed software in the past just isn't going to work anymore. Right now it's sort of, design the minimum viable product, try to go as fast as possible, and then include the security guys at the last minute, because you hope they don't screw things up or slow things down or stop your product. You have to design the privacy and security up front. I think we're going to hear that more and more, especially with GDPR; it's just been enacted, and now we have a two-year runway, until May 2018, to be compliant with it if you do business there. What's GDPR, I'm sorry? So it's the General Data Protection Regulation from the EU. It governs EU personal data, including its transfer outside of the EU. Got it, yeah. That's going to set the foundation for everyone who handles personal data. So as companies are trying to work in that world, they're going to have hard obligations on security and privacy. The security teams, the more empowered ones, are going to be working closely with product, compliance, legal, and marketing up front and saying, okay, is this data that interesting? Is it that restricted? If it's not, set wide guardrails.
The more interesting the data, the more personally identifiable or healthcare-related it becomes, the tighter that squeeze is going to be, in the guardrail metaphor. But you want to do that instead of having gates and roadblocks in the development process. Let me build on that. The way I used to describe and test this, because I think it's comparable to what you're saying, is that we used to architect security around restricting access, and now we're designing security around facilitating sharing. Is that kind of what you're saying, that we have to move from architected, restricted access to a design approach where we're facilitating sharing? Right, and to play on that, I agree. It used to be, you had a data center, and that's where your data was. And if you were inside the data center, you were trustworthy, and if you were outside the data center, you were not trustworthy. That model has gone away. Perimeter-less security is what they call it now. Really what that means is the perimeter is not gone, it's just moving closer to the data, so there are a lot more perimeters. So you're designing not for this trustworthy, not-trustworthy model, but for, assume no trust, and you have to earn the trust to get access to the data. So everyone has to authenticate, everyone has to be authorized, and then you start building levels of trust. There is this notion of zero-trust security that my good friend John Kindervag has been promulgating for a while now. Not a lot of products do that yet. But without question, it's going to require a lot of big data approaches to make it work. So talk a little bit about how you envision the role of big data in establishing that notion of zero trust, or perimeter-less security. So back in the day, it was easy to say, this person is inside this network, therefore I trust him, or maybe I trust him more. That was static; things didn't move that fast.
In the new world, assume that my corporate network is essentially Starbucks. My employee could be anywhere. He has to prove who he is to my network, or to my data, or to my applications. Those things change instantaneously, and at a broad scale. I don't have control over all the infrastructure and policy enforcement points. My employees could be everywhere, accessing pretty much anything. I need real-time data. I need big data analytics to tell me, is the behavior this person is exhibiting typical for what he does, or is it very anomalous? Or is he trying to show up in a couple of places at once, implying that maybe his credential was stolen? The tools that we used before relied on static, human-oriented, human-powered processes. That's not gonna keep up. Actually, it's even stationary-device-oriented, right? It's, this person is at this terminal at this time, and if he's not, he shouldn't be here. Right, right. And the penalty for slowing that person down was minimal. But now it's, you know, I got off the plane, I'm on my iPhone, I get off the plane and I travel someplace else. I've got agents working for me, you know, these sort of intelligent agents. And that's just employees. Now we add to it the whole notion of customers and customer experience, and, oh sorry, you wanted to use our service off your phone as you were moving through Las Vegas? We're not gonna let you do that. Exactly. Okay. And you're gonna get into the world where you have intelligent agents doing things on your behalf. And, you know, how do you grant trust to, you know, your Google bot that's doing some work for you? That's not a static model where Bob logs in and sits down at his desk. Yeah. Great, I mean it's a great segue. I was just gonna jump into the whole AI question. Yeah. So I've got to ask you, AI is hot, and Google I/O was just this week. It's all about AI, and that's their way to catch up.
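The "showing up in a couple of places at once" signal mentioned above is often called impossible-travel detection: flag consecutive logins whose implied travel speed is physically implausible. Here's a minimal, self-contained sketch of that idea; the sample coordinates and the 900 km/h speed threshold are illustrative assumptions, not anything from the conversation.

```python
# Hedged sketch of impossible-travel detection: if two consecutive logins
# imply a travel speed no airliner could match, the credential may be stolen.
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def km_between(a, b):
    """Great-circle (haversine) distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km = mean Earth radius

def impossible_travel(events, max_kmh=900):
    """events: (datetime, (lat, lon)) pairs sorted by time; return suspicious pairs."""
    flagged = []
    for (t1, p1), (t2, p2) in zip(events, events[1:]):
        hours = (t2 - t1).total_seconds() / 3600
        if hours > 0 and km_between(p1, p2) / hours > max_kmh:
            flagged.append((t1, t2))
    return flagged

# Hypothetical login trail: San Francisco, then London one hour later.
logins = [
    (datetime(2016, 5, 25, 9, 0), (37.77, -122.42)),
    (datetime(2016, 5, 25, 10, 0), (51.51, -0.13)),
]
print(impossible_travel(logins))  # the SF-to-London-in-one-hour pair is flagged
```

A production system would of course fold in many more behavioral signals than geography, which is exactly the big data analytics point being made here.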
But John Markoff, my good friend John Markoff, just published an article in The New York Times. I want to get your thoughts on this and how it ties in. On Tuesday, there was an event sponsored by the White House Office of Science and Technology Policy, where they explored questions about autonomous systems making decisions without human input, in areas like warfare, transportation, and healthcare. You start to see AI come in, based upon the data. It's been unregulated; now you start to see the White House start to explore, back to our regulation question. Everything gets regulated at some point. Again, data drives a lot of this. What are your thoughts on AI in particular, and some of these emerging areas that are actually pioneering but also dangerous, kind of like the policy questions around them? And I know security's kind of in there, but it's security policy, if you will. Your thoughts? I think people are getting comfortable with the idea of recommendation engines. So if I buy certain things, then Amazon or Netflix will recommend these other things that are adjacent or similar, and I'm fairly comfortable with that. I think for kids growing up these days, that's table stakes; of course every service you use is going to recommend things. They're going to be more comfortable with cars making recommendations on when to stop or how to avoid other traffic. So I think we will find ourselves getting more comfortable over time with things like that, and the recommendations or the decisions they make will become stronger. They will be more impactful, not just, if you buy this shirt, you'll get a deal. What happens when the robot makes a recommendation and ships it to me, and I didn't even ask for it? I'll be delighted when they get it right, and I'll be disappointed when they get it wrong. They'll get it right more often over time. Hopefully they will also recommend stopping the car when the car needs to be stopped. I'm certain that we will have issues like that.
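The recommendation engines discussed above can be sketched, at their very simplest, as item co-occurrence: suggest things that other users who share your purchases also bought. This is a toy illustration only — nothing here reflects how Amazon or Netflix actually do it, and the purchase data is invented.

```python
# Minimal co-occurrence recommender: for a user, rank items they don't own
# by how often those items appear alongside items they do own.
from collections import Counter

# Hypothetical purchase history.
purchases = {
    "ann": {"shirt", "jeans"},
    "bob": {"shirt", "hat"},
    "carol": {"shirt", "jeans", "hat"},
}

def recommend(user):
    """Items the user lacks, ordered by co-occurrence with items they own."""
    owned = purchases[user]
    counts = Counter()
    for other, items in purchases.items():
        if other != user and owned & items:  # overlapping taste
            counts.update(items - owned)     # count only unowned items
    return [item for item, _ in counts.most_common()]

print(recommend("bob"))  # "jeans" co-occurs with bob's items for both ann and carol
```

Real systems add ratings, recency, and scale tricks, but the "adjacent or similar" intuition in the conversation is exactly this co-occurrence signal.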
Like I said, our ability to write flawless software, we're not very good at that. So we need the tools to help us write better software that fails in a safe manner, but also makes better decisions that we're confident in, and maybe there's a self-regulation model where, as the recommendations get better, the system itself starts making more impactful decisions. We need metadata about the metadata, and algorithms to police the algorithms. I mean, we're getting to a point now where the quality of the data really impacts us. A learning machine can only be as smart as the input coming in, right? Right, right. Well, a big part of that question also is, we have to start using real language that actually describes what these things are doing, because doing just a better job of text-based analysis is hardly real intelligence. So part of the challenge needs to be, what is the system actually doing, as a way of describing, therefore, what set of responsibilities it should have. Right, right. And Ray Kurzweil has said we're at the cusp now where we will have a hard time as humans distinguishing whether I'm talking to a human or to an intelligent agent. I mean, obviously not person to person, but on a phone, on an interface, it's going to be really hard to make that distinction, and I think we're getting really, really close. Some would say uncomfortably close; some are more optimistic. Uncomfortable means you're making progress, as long as you don't, you know, fall over. That's right. Final question, I know we're tight on time, I want to get your final thoughts. What are you excited about right now? Informatica World's got a lot of stuff popping out of here, the keys to the kingdom, MDM, all this cloud stuff's great, the stuff you're working on. But in the technology realm, what gets you excited, knowing that you have a security mindset, and we want to get to this Nirvana state at some point of great algorithms, great software, learning machines, et cetera, et cetera?
What are you excited about? So, selfishly speaking, for me and my team, I'm excited that after a year and a half, we finally get to talk about some of the security and the compliance we've baked into the infrastructure and into the products. That's been huge, because that was a lot of time, a lot of effort. But I think more generically, I'm excited about all the technology we've talked about and what's going to empower the new generation. You know, the kids that are just coming into grade school and middle school, they're taking all of this for granted, and it's just amazing to think what they're going to do with all of this stuff and build upon it. So that is what's really exciting to me, to watch my nieces and nephews that are growing up in this space, picking up iPads and having recommendations sent to them, and then thinking, gosh, what could I do with that? So the whole STEM push, I think that's just fantastic. That's what really gets me excited. A whole new generation of expectations as well. Right, yeah. Bill, thanks so much for sharing the insight on security and your thoughts in general. Appreciate the great conversation we're having here, live in San Francisco for Informatica World 2016. I'm John Furrier with Peter Burris. We'll be right back with more live coverage after this short break. You're watching theCUBE. It's always fun to come back to theCUBE.