So, following on from the tone that they've set about where standards have to go and what constitutes a good standard, we've got a very exciting cybersecurity panel on what cybersecurity looks like in 2015. I want to ask our panelists to come up to the stage: Edna Conway, the Chief Security Officer for Global Supply Chain at Cisco; Mary Ann Mezzapelle, who's the Americas CTO for Enterprise Security Services at Hewlett-Packard; Jim Hietala, VP of Security for The Open Group; and Rance DeLong, who works on security and high assurance systems at Santa Clara University. And I'll introduce myself finally: I'm Dave Lounsbury, the Chief Technical Officer here at The Open Group. So, thank you all.

As always, we may have time for questions at the end. I'm conscious that this is the last panel before lunch, and of course lunch will be right outside, but please remember, if you have questions, write them on cards and I'll ask my colleague Maggie to run around and collect them, and we'll ask them from up here.

So, we've heard all morning about the cybersecurity landscape, and of course everyone knows about the many recent breaches that have been mentioned this morning, and obviously the challenge in cybersecurity is growing. So I want to start asking a few questions, and I think what I'll do is start off by directing one to Edna, if I may. We've heard about the Verizon Data Breach Investigations Report, the DBIR, that catalogs the various attacks that have been made over the past year. And one of the interesting findings was that in some of these breaches the attackers were on the networks for months before being discovered. So I'll ask you to start: what do we need to start doing differently to secure our enterprises?

How about now, better? Okay. I apologize for sitting at the edge of the chair, but it's too big for me. So look, I think there are a couple of things. Continuous monitoring, from my perspective, is absolutely essential.
People don't like it because it requires rigor, it requires consistency, it requires process. And the real question is, what do you continuously monitor? I think it's what you monitor that makes a difference. So access control and authentication are absolutely on our radar screen, but I think the real ticket is behavior. The real question is, what kind of behavior do you see authorized personnel engaging in that should send up an alert? That's a trend I think we need to embrace more.

Then I think the second thing that we really need to do differently is drive detection and containment. I think we try to do that; I think we need to become more rigorous about it. And some of that rigor is around things like: are we actually doing advanced malware protection rather than just detection? What are we doing specifically around threat analytics and the feeds that come to us, how we absorb them, how we mine them, and how we consolidate them?

And then the third thing for me is, how do we get the right team, I call them the puzzle solvers, and how do we get them together swiftly? How do you put the right group of experts together when you see a behavior aberration, or you get a threat feed that says, wait a minute, we need to address this now? So when we see threat injection, we're actually acting on the anomaly before it makes its way further along in the cycle.

Any reactions?

Another thing that I'd like to add is making sure you have the executive support and processes in place. If you think about how many plans and tests and other things organizations have gone through for business continuity and recovery, you have to think about incident response in that way too. When we talked earlier about how to get the C-suite involved, we need that executive sponsorship and understanding, and that means it's connected into all of the other parts of the enterprise.
So it might be the communications team, it might be legal, it might be other things, but knowing how to do that and being able to respond quickly is also very important.

Jim, Rance, any reactions?

Yeah, I would agree on the monitoring being very important, and on the question of what to monitor, there are advances being made through research in this area, both in modeling behavior, what the nominal behaviors are and how we can allow for certain variations in behavior and still not have too many false positives or too many false negatives. Also, on a technical level, we can analyze systems for certain invariants, and these can be very subtle and complicated invariant formulas, maybe pages long, that hold on the system during its normal operation. A monitor can be watching for these invariants, the static things, but it can also be watching for changes that are supposed to occur, and whether those are occurring the way they're supposed to.

The only thing I would add is that I think it's about understanding where you really have risk and being able to measure how much risk is present in your given situation. I think in the security industry there's been a shift in mindset away from figuring that we can actually prevent every bad thing from happening, towards really understanding where people may have gotten into the system, where the markers are that something has gone awry, and reacting to that in a more timely way, so detective controls as opposed to purely preventative controls.

Thank you. So Jim, I'm going to leave you with the microphone. We heard from Dawn Meyerriecks earlier about the convergence of virtual and physical and how that changes the risk management game. And we heard from Mary Ann Davidson about how she's definitely not going to connect her house to the internet. So this brings new potential risks and security management concerns.
So what do you see as the big IoT security concerns, and how does the technology industry assess and respond to those?

So, before I get into my comments, that brings to mind that Mary Ann and Tony Carrato are giving a presentation this afternoon that is mislabeled on your agendas. It says they'll be talking about a business transformation manager profile, which was a shock to them. They're in fact going to be talking about securing the Internet of Things, and I helped Tony and Mary Ann get some thoughts together on that.

In terms of the Internet of Things, the things that concern me are that many of the problems we've solved at some level in IT hardware, software, and systems seem to have been forgotten by many of the Internet of Things device manufacturers. We've got pretty well-thought-out processes for how we identify assets, how we patch things, how we deal with security events and vulnerabilities that happen. Particularly in the consumer class of Internet of Things devices, we've got devices out there with IP interfaces on them, and the fact that many of the manufacturers just haven't given a thought to how they're going to patch something in the field should scare us all to some degree. And maybe it is, as Mary Ann mentioned, that there are certain systemic risks out there that we just have to nod our heads at and say, well, that's the way it is. But certainly for the really critical kinds of Internet of Things applications, I think we need to take what we've learned in the last 10 years and apply it to this new class of devices.

I'd like to add to that that we need a new architectural approach for the Internet of Things that will help to mitigate the systemic risks. And echoing concerns expressed by Mary Ann a few minutes ago: in 2014,
Europol, which is an organization that tracks criminal risks of various kinds, predicted that by the end of 2014 (it didn't happen) we would see murder by internet, in the context of the Internet of Things. And I think it's not far-fetched; we may just be in overtime.

Would we really know, actually? Mary Ann, or Edna, do you have any reaction on that one?

The murder by internet? That's the question you're giving me? Thanks. Welcome to being a former prosecutor. The answer is, the real question would be whether you'd have any evidentiary reality, to be able to prove that. I think the challenge is one that's really well taken, which is, and we're probably all in agreement, the convergence of these devices, right? We saw the convergence of IT and OT, and we haven't fixed that yet. We're now moving with IoT into a scale of device types and volumes where, to me, the real challenge will be to come up with new ways of deploying telemetry that allow us to see all the little crevices and corners of the Internet of Things, so that we can identify risks in the way that we have, while not mastered 100%, certainly tackled predominantly across computer networks, the network itself, and IT. We're just not there with the Internet of Things.

I think, Edna, it also brings to mind another thing: we need to take advantage of the technology itself. As the data gets democratized, meaning it's going to be everywhere, with the velocity, volume, and so forth, we need to make sure that those devices can maybe be self-defending, or maybe they can band together and defend themselves against other things. We can't just apply the old-world thinking of being able to know everything and control everything; we have to embed some of those kinds of characteristics in the systems and devices and sensors themselves.
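To make the monitoring ideas from earlier in the panel a bit more concrete, here is a toy sketch combining two of them: Rance's point about continuously checking declared invariants on system state, and Edna's point about baselining authorized users' behavior and alerting on sharp deviations. Everything in it (the state variables, metrics, and thresholds) is invented for illustration, not anything a panelist described implementing.

```python
from statistics import mean, stdev

# Invariants in Rance's sense: predicates that must hold on the system
# state during normal operation (hypothetical example rules).
INVARIANTS = [
    ("pressure_in_range", lambda s: 0 <= s["pressure"] <= 120),
    # Relational invariant: if pressure runs high, the relief valve must be open.
    ("relief_valve_rule", lambda s: s["pressure"] < 100 or s["valve_open"]),
]

def violated(state):
    """Return the names of any invariants the current state breaks."""
    return [name for name, pred in INVARIANTS if not pred(state)]

class BehaviorBaseline:
    """Flag observations far outside a user's own history (simple z-score test)."""
    def __init__(self, threshold=3.0, min_history=5):
        self.history = {}            # user -> list of past observations
        self.threshold = threshold   # z-score that triggers an alert
        self.min_history = min_history

    def anomalous(self, user, value):
        past = self.history.setdefault(user, [])
        flag = False
        if len(past) >= self.min_history:
            mu, sigma = mean(past), stdev(past)
            flag = sigma > 0 and abs(value - mu) / sigma > self.threshold
        past.append(value)
        return flag

print(violated({"pressure": 110, "valve_open": False}))  # ['relief_valve_rule']

baseline = BehaviorBaseline()
for day in range(10):
    baseline.anomalous("alice", 100 + day % 5)   # normal daily record access
print(baseline.anomalous("alice", 5000))          # sudden bulk access -> True
```

Real deployments would model far richer behavior than one scalar per user, but the shape is the same: declared properties checked continuously, plus a learned baseline with an alerting threshold.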
We heard, in fact, when Ron Ross spoke, about the need for increased public-private cooperation to address the cybersecurity threat. And Ron, I would urge you to think about including voluntary consensus standards organizations in that essential partnership you mentioned, to make sure that you get that high level of engagement. But of course, this is a broad concern for everybody. President Obama has made a call for legislation enabling cybersecurity information sharing, and one of the points within that was shaping a cyber-savvy workforce; many other parts of it are about public-private information sharing. So what more can be done to enable effective public-private cooperation on this? And what steps can we as a consensus organization take to actually help make that happen? Mary Ann, you want to tackle that one and see where it goes?

To your point, collaboration's important, and it's not just about the public-private partnership. It also means within an industry sector, or in your supply chain and with third parties. It's not just about the technology, it's also about the processes, and being able to communicate effectively, almost at machine speed, along those lines. So you think about the people, the processes, and the technology. I don't think it's going to be solved by government. I agree with the previous speakers when they were talking about it; it needs to be more hand in hand. And so I think there are some ways that industry can actually lead that. We've got some examples, for instance, of what we're doing with the Healthcare Forum and with the Mining and Minerals Forum. It might seem like a little bit, but it's that little bit that helps, that brings it together to make that connection that much easier. And I think it's also important to think about, especially with the class of products and capabilities that are available as a service, that that's another avenue of collaboration.
If you as a security organization determine that, with your capabilities, you really can't keep up with the bad guys, they have more money, more time, more opportunity to take advantage, either from a financial perspective or maybe even from a competitive perspective on your intellectual property, then you really can't do it yourself. So you need those product vendors, or you might need a services vendor, to fill in the gaps so that you can have that kind of capability on demand. So I would encourage you to think about that kind of collaboration through partnerships in your whole ecosystem.

I know that people in the commercial world don't like a lot of regulation, but I think government can provide certain minimal standards that must be met, and raise the floor. Not that companies won't exceed these and use that as a competitive basis, but if minima are set in regulations, then this will raise the whole level of discourse.

So we could probably debate over a really big bottle of wine whether it's regulation or whether it's collaboration. I probably agree with Mary Ann. I think we need to sit down and say, what are the biggest challenges that we have, and take big, hairy, bold steps to pull together industry, and that includes government and academia as partners. But I'll give you just one example. So ECIDs, right, electronic chip IDs, they're out there on semiconductor devices. There are some semiconductor companies that already use them, and there are some that don't. A simple concept would be, if we could make sure that those were actually published on an access-controlled basis, then we could go and look, as OEMs who actually utilize those devices, and see whether the ECID was actually utilized, number one. Then we could actually verify internally, and granted, here I go on supply chains again, but on our own supply chains, that we're actually deploying the right ECID. From a forensic analysis perspective, that gives us a lot of gain. Sounds really easy to do.
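The mechanics of the check Edna describes really are simple; a miniature sketch might look like this. The chip IDs and the data structures here are entirely made up for illustration: an OEM audits the ECIDs reported from its supply chain against the set the semiconductor vendor published on an access-controlled basis, and also watches for duplicate IDs, which can indicate cloned counterfeit parts.

```python
# Toy ECID audit (hypothetical data): classify each reported chip ID
# as genuine (issued by the vendor), unknown (never issued), or
# duplicated (the same supposedly unique ID seen twice -> likely clone).

def audit_ecids(reported, issued):
    """Return (genuine, unknown, duplicated) lists of ECIDs."""
    seen, genuine, unknown, duplicated = set(), [], [], []
    for ecid in reported:
        if ecid in seen:
            duplicated.append(ecid)   # a unique ID should never repeat
        elif ecid in issued:
            genuine.append(ecid)
        else:
            unknown.append(ecid)      # not in the vendor's published set
        seen.add(ecid)
    return genuine, unknown, duplicated

# The vendor's access-controlled published set, and what arrived on our line.
issued = {"ECID-0001", "ECID-0002", "ECID-0003"}
reported = ["ECID-0001", "ECID-0002", "ECID-9999", "ECID-0001"]
print(audit_ecids(reported, issued))
# (['ECID-0001', 'ECID-0002'], ['ECID-9999'], ['ECID-0001'])
```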
That challenge is bringing an entire industry together. I was talking to Dr. Ross earlier this morning about something that he just read on LinkedIn about me, and we were talking about the challenge of STEM inside New Hampshire. And the challenge is the teachers' unions; it's not STEM education. So I think where you live is exactly where I live, which is, we need to sit down together and say, what are the big things that we can do? Some might not be so big, in the sense that they're tangible, ascertainable, and achievable, but we need political environments which allow us to face that challenge with real, meaty discourse. Until we sit down and do that, we're never going to get over that IoT hurdle that you just articulated.

Okay, thanks. So Jim, I think this next question's about standards evolution, so we're going to send it to someone from a standards organization. The cybersecurity threat evolves quickly, and protection mechanisms evolve along with it; it's the old attacker-defender arms race. And standards take time to develop, particularly if you use a consensus process. So how do we change the dynamic? How do we make sure that the standards are keeping up with the evolving threat picture? And what more can be done to speed that up and keep it fresh?

I'll go back to a series of workshops we did in the fall around the topic of security automation. From The Open Group's perspective, I think standards development works best when you have a strong customer voice expressed around what the pain points and the requirements and issues are. We did a series of workshops on the topic of security automation with customer organizations.
We had maybe a couple hundred inputs over the course of four workshops, three physical events and one that we did on the web, collected that data, and then are bringing it to the vendors and putting some context around a really critical area, which is: how do you automate some of the security capabilities so that you are responding faster to attacks and threats? So I think, just generally, the idea is that we bring customers into the discussion early and make sure that their issues are well understood. That helps motivate the vendor community to get serious about doing things more quickly. And one of the things that we heard pretty clearly in terms of requirements was that multi-vendor interoperability between security components is pretty critical in that world. It's a multi-vendor world that most of the customers are living in, and so building open interfaces where you've got interoperability between vendors is a really key thing.

Reactions? Rance?

Yeah, I think it's a really challenging problem, because in emerging technologies, where you want to encourage and you depend upon innovation, it's hard to establish a standard. The technology is still emerging; you don't know what is going to be a good standard. So you hold off and you wait, and then you get innovation, you get divergence, and then bringing it back together ultimately takes more energy.

Any others? Okay, good. So Rance, since you've got the microphone: how much of the current cybersecurity situation is attributable to poor blocking and tackling in terms of the basics, like doing security architecture, or even having a method to do security architecture? Things like risk management, which is an area Jim and the Security Forum have been looking into. And not only that, but translating that theory into operational practice and making sure that people are doing it on a regular basis.
Well, if I'm to believe a report I read on SANS, there was a US government report issued on January 20th of this year finding that many, or most, or all of our critical weapons systems contain flaws and vulnerabilities. And one of the main conclusions was that, in many cases, this was due to not taking care of the basics: the proper administration of systems, the proper application of repairs, patches, vulnerability fixes, and so on. So we need to be able to do it in critical systems as well as on desktops.

I think we've found, in various research reports and elsewhere, take for instance what you might consider the open source code crisis that happened over the past year with Heartbleed and others, that the benefits of having that open source code are somewhat offset by the disadvantage that not everybody is looking to make it the right kind of quality or security for what your organization needs. So I think that's maybe one of the areas where the basics need to be looked at. It's also because those systems were created in an environment where the threats were at an entirely different level, and that's a reminder that we need to look to that in our own organizations. The other thing is, for instance in mobile applications, there's such a rush to get out features and revs and everything like that, that security isn't entirely embedded in the systems lifecycle, or in a new startup company. So I think those are some of the other areas where we find that the basics, the foundation, still need to be solidified to really help enhance security.

Jim, did you have a thought?

I do. In the world of security, it can be a little bit opaque, when you look at a given breach, as to what really happened, what failed, and so on. But enough information has come out about some of the breaches that you get some visibility into what went wrong. And I think of the two big insider breaches, WikiLeaks and then Snowden.
I mean, in both cases there were fairly fundamental security controls that should have been in place, or maybe were in place but were poorly performed, that contributed to those, right? Access control type things, authorization, and so on. Even in some of the large retailer credit card breaches, I think you can point to certain things they didn't do right in terms of the basic blocking and tackling. So there's a whole lot of security technology out there, a whole lot of security controls that you can look to, but implementing the right ones for your situation, given the risks that you have, and then operating them effectively, I think is an ongoing challenge for most companies.

Can I pose a question? It's one of my premises that sometimes compliance and regulation make companies do things in the wrong areas, to where they end up with a less secure system. So what do you think about that, and how does that impact the blocking and tackling?

I think that has probably been true for, say, the four years preceding this, but there was a study that I saw just recently, I couldn't tell you who it was from, that basically flipped that. For the last five years or so, compliance has always been at the top of the list of drivers for information security spend and projects and so forth, but it's dropped down considerably, I think because of all these high-profile breaches. Senior executive teams are just saying, okay, enough, I don't care what the compliance regulations say, we're going to do the things we need to do to secure our environment; nobody wants to be the next Sony.

Or the Target CEO who had to step down, right? Even though they were compliant, they still had a breach, which unfortunately is probably a possibility at almost every enterprise and agency out there.

And on the subject of open source, it's frequently given as a justification or a benefit of open source that it will be more secure because there are millions of eyeballs looking at it.
It's not millions of eyeballs, it's the right eyeballs looking at it, the ones who can discern that there are security problems. So it is not necessarily the case that open source is going to be more secure because it can be viewed by millions of eyeballs; you can have proprietary software that gets just as much or more attention from the right eyeballs as open source does.

And I think there are also those million eyeballs out there trying to make money on exploiting it before it does get patched.

That's true, I mean, you know.

The new market economy.

Well, I was just going to mention that we are now seeing some large companies paying those millions of eyeballs to go look for vulnerabilities, strangely enough, and they always find them in other people's code, not their own. Our Zero Day Initiative, part of that business model, is to pay people to find things that we could implement into our own products first, but it also made the findings available to other companies and vendors so that they could fix them before they became public knowledge. But some of that economics is changing too, trying to get the white hats, so to speak, to look at other parts that are maybe more critical, like what came up with Heartbleed.

On that point, I'm going to inject a question of my own, if I may. On balance, is the open sharing of information, of things like vulnerability analysis, helping move us forward? Can we do more of it, or do we need to channel it in other ways?

I think we need to do more of it. I think it is beneficial. We still have enclaves of secrecy, where you can give this information to this group of people but not that group of people, and it's very hard. In my organization, which is global, I had to look at every last little detail to say, okay, can I share it with someone who's a foreigner, or someone who's in my organization but not in my part of the organization?
It was really hard to figure out how we could use that information more effectively, and if we can get it more automated, to where it doesn't have to be the good-old-boy network talking to someone, or an email, or something like that, I think it's more beneficial. And it's not just the vulnerabilities; it's also looking more towards threat intelligence. You see a lot of investment, you even saw it, if you look at the details behind some of the investments In-Q-Tel has made, for instance, in looking at data in a whole different way. So we're emphasizing data, both in analytics as well as in threat prediction, being able to perhaps know where something's going to come over the hill so you can secure your enterprise or your applications or systems more effectively against it.

Let's go down the row. Edna, what are your thoughts on more open sharing?

So I think we need to do more of it, but I think we need to do it in a controlled environment. For me, I think we can get ahead of the curve with not just predictive analysis but telemetry to feed the predictive analysis, and that's not going to happen because a government regulation mandates that we report somewhere. If you look at the new DFARS, for example, that came out last year with regard to concerns about counterfeit mitigation and detection in COTS ICT, the reality is that not everybody is a member of GIDEP, and many of us actually share our information faster than it gets into GIDEP, and more comprehensively. So again, I'll go back to: it's rigor in the industry, and sharing in a controlled environment.

Jim, thoughts on open sharing?

Good idea. I think it gets a little murky when you're looking at zero-day vulnerabilities. There's a whole black market that's developed around those things, where nations are to some degree hoarding them, I think, paying a lot of money to get them, to use them in cyber-war type activities.
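Mary Ann's call for sharing at machine speed and Edna's insistence on a controlled environment can be combined in a toy sketch: each indicator carries a handling marking, loosely in the spirit of the Traffic Light Protocol (TLP) used by the incident response community, and an automated filter decides what a given partner is cleared to receive. The indicator records and clearance levels here are invented for illustration.

```python
# Minimal sketch of controlled, machine-speed indicator sharing.
# TLP-style markings order how widely information may travel.
TLP_RANK = {"white": 0, "green": 1, "amber": 2, "red": 3}

def shareable(indicators, partner_clearance):
    """Return only the indicators this partner is allowed to receive."""
    limit = TLP_RANK[partner_clearance]
    return [i for i in indicators if TLP_RANK[i["tlp"]] <= limit]

# Hypothetical threat indicators with handling markings.
indicators = [
    {"type": "ip", "value": "203.0.113.7", "tlp": "green"},
    {"type": "domain", "value": "bad.example", "tlp": "white"},
    {"type": "hash", "value": "9f86d081deadbeef", "tlp": "red"},
]

print([i["value"] for i in shareable(indicators, "green")])
# ['203.0.113.7', 'bad.example']
```

Real exchanges use structured formats like STIX/TAXII rather than ad hoc JSON, but the point stands: once the handling rules are machine-readable, the good-old-boy network and the email chain can drop out of the loop.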
There's a great book out now called Countdown to Zero Day by Kim Zetter, a writer from Wired, that gets into the history of Stuxnet and how it was discovered, by Symantec and, I forget the name of the other security research firm that found it. There were a number of zero-day vulnerabilities there that were used in an offensive cyber-war kind of capacity, so it's definitely a gray area at this point.

What's also in dispute, and I agree with what Edna said, is the parameters of the controlled environment, the controlled way in which it's done. Without naming any names, recently there were some feathers flying over a security research organization establishing practices concerning a 60- or 90-day timeframe within which they would notify a vendor of vulnerabilities, giving them an opportunity to issue a patch. In one instance recently, when that time expired and they released the details, the vendor was rather upset because the patch had not been issued yet. So what are the reasonable parameters of this controlled environment?

Let's move on here. So Edna, one of the great quotes that came out of the early days of the OTTF was that only God creates something from nothing, and everybody else is on somebody's supply chain, which I love. Given that all IT products are built from hardware and software components which are sourced globally, what do we do to mitigate the specific risks resulting from malware and counterfeit parts being inserted in the supply chain? And how do you make sure that the work to do that is reflected in creating preference for vendors who put that effort in?

Yeah, so I think it's probably three-dimensional. I think the first part is understanding what your problem is. Go back to what we heard Mary Ann Davidson talk about earlier today: the reality is, what's the problem you're trying to solve? I'll just use the Open Trusted Technology Provider Standard as an example of that.
Narrowing down what the problem is and where the problem is located actually helps you, number one. And then I think you have to attack it from all dimensions, right? We have a tendency to think about the cyber in isolation from the physical, and the physical in isolation from the cyber, and then there's the logical, for those of us who live in OT or supply chain. We actually have to have processes that drive this, and if those three don't converge and map together, we will fail, because there will be gaps. There will be inevitable gaps. So for me, it's identifying what your true problem is and then taking a three-dimensional approach, making sure that you always have security technology, physical security, and logical processes interlocking to drive a mitigation scheme. It will never reduce you to zero, but it will identify things. Particularly, think about IoT in a manufacturing environment, with the right sensor at the right time and telemetry around human behavior: all of a sudden you're going to know things before they get to a stage in that supply chain or product lifecycle where they can become devastating in scope.

As one data point, there was a lot of concern over chips fabricated in various parts of the world being used in national security systems, and in 2008 DARPA initiated a program called TRUST, which had a very challenging objective: coming up with methods by which these chips could be validated after manufacture. And just as one example of the outcome, under the IRIS program in 2010, SRI unveiled an infrared laser microscope that could actually examine the chips at the nanometer level, for construction, functionality, and their likely lifetime, how long they would last before they failed.

Jim, Mary Ann, reactions?

The only other thing I wanted to add to Edna's comment is a reiteration about the economics of it, and finding where the real problem is.
Especially in information technology security, we tend to get so focused on trying to make it technically pure, avoiding the most extreme risk 100%, that sometimes we forget to put our business ears on and think about what that really means for the business: is it keeping them innovating quickly, adapting to new markets, perhaps getting into a new global environment? We really have to look back at the business imperatives and make sure that we have metrics all along the road that help us make sure we're putting the investments in the right areas, because security is really a risk balance, which I know Jim has a whole lot more to say about.

Actually, the one thing I would add to this conversation is that we've been on a journey to where doing a better job of security is a good thing. The question is, when is it going to become a differentiator for your product and service in the market, right? I know for me personally, a bank that really gets online banking and security right, that's a differentiator to me as a consumer. I saw a study quoted this week at the World Economic Forum that basically said that, by a two-to-one margin, consumers surveyed in 27 countries think that governments and businesses are not paying enough attention to digital security. So maybe that's a mindset shift that's occurring as a result of how bad cybersecurity has been, and maybe we will soon get to the point where it can be a differentiator for companies, in the business-to-business context, the business-to-consumer context, and so forth. So we can hope, I think.

Great point, and just to pivot on that and point out how important it is: what we're seeing now, and it's a trend, though there are some cutting-edge folks who have been doing it for a while, is that many boards of directors are looking at creating a digital advisory board for their company.
They're recognizing the pervasiveness of digital risk as its own risk; sometimes it reports up to the audit committee. I've seen at least 20 or 30 in the last three months come around looking for digital advisory board members to focus on this from multiple disciplines. If we get that right, it might give us the opportunity to actually share information more broadly.

That's a really interesting point, the point about multiple disciplines, and it leads to the next question, which is unfortunately the final question, or fortunately, since I'll get you to lunch. I'm going to start off with Rance. At some point, the difference between a security vulnerability failure and other kinds of failures all flows into the big risk analysis that a digital risk management regime would carry out. And one of the things going on in the Real-Time and Embedded Systems Forum, of course, is looking at how we architect systems for higher levels of assurance, covering not just security vulnerabilities but other kinds of failures as well. So I guess the question I'll ask is this: if a system fails its SLA, for whatever reason, whether it's security or some other kind of vulnerability, is that a result of our approach to system architecture, of software created without provably secure or provably assured components, or of the inability of the system to react to those kinds of failures? And if you believe that, how do we change it? How do we accelerate the adoption of better practices in order to mitigate the whole spectrum of risks of failure of the digital enterprise?

Well, thanks for that complicated question with zero minutes remaining on the clock.

That's okay. We shall add four minutes to the clock.

In summary, then. In high assurance systems, obviously, we still treat detection of problems when they occur, and recovery from problems, as very important, but we put a greater emphasis on prevention, and we try to put a greater effort into prevention.
You mentioned provably secure components, but provable security is only part of the picture. When you do a proof, you prove a theorem, and in a system of reasonable complexity there isn't just one theorem. There are tens, hundreds, or even thousands of theorems that are proved to establish certain properties of the system. And it has to do with proofs of the various parts, and proofs of how the parts combine. What are the claims we want to make for the system? How do the proofs provide evidence that the claims are justified? And what kind of argumentation do we use based on that body of evidence? So we're looking at not just the proofs as little gems, if you will. Think of a proof of a theorem as a gemstone; what matters is how they're all combined in creating the system.

If a movie star walked out on the red carpet with a little burlap sack around her neck full of a handful of gemstones, we wouldn't be as impressed as we are when we see a beautiful necklace made by a real master who's taken tens or hundreds of stones and combined them in a very pleasing and beautiful way. And so we have to put as much attention not just on the individual gemstones, which admittedly are created from very pure materials and under great pressure, but on how they're combined into a work that meets the purpose. So we have assurance cases, we have compositional reasoning, and other things that have to come into play. It's not just about the provable components, and it's a mistake that is sometimes made to focus just on the proof. Remember, proof is really just a degree of demonstration. We always want some demonstration to have confidence in the system, and proof is just an extreme degree of demonstration.

Thanks, Rance. Jim, someone else?

I think I would summarize it as: embed security early and often, and don't depend on 100%. That means you have to make your systems, your processes, and your people resilient.

Jim? You good?
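Rance's gemstones-into-a-necklace metaphor, small proved theorems composed into a system-level claim, can be made concrete with a toy example in a proof assistant. This Lean snippet is purely illustrative and was not part of the panel; the "system" here is just arithmetic, but the shape, part lemmas plus a compositional argument, is the one he describes.

```lean
-- Two small "gemstone" lemmas, each proved on its own.
theorem part1 (n : Nat) : n + 0 = n := Nat.add_zero n
theorem part2 (n : Nat) : 0 + n = n := Nat.zero_add n

-- A composed, system-level claim: its proof is the argument that
-- combines the part lemmas, mirroring an assurance case's structure.
theorem combined (a b : Nat) : (a + 0) + (0 + b) = a + b := by
  rw [part1, part2]
```

In a real assurance case, the composition step carries as much weight as the component proofs, which is exactly the point about the necklace versus the sack of stones.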
Okay, well, it's been a very interesting panel. I want to thank our panelists: Mary Ann Mezzapelle from Hewlett-Packard, Edna Conway from Cisco, Jim Hietala, our VP of Security for The Open Group, and Rance DeLong from Santa Clara University. Thank you very much. Thank you.