Hello everyone, are you ready for the next talk? Awesome, this is Dumby. He is enamored with defense and predictive things. His career is focused on security operations, but he loves understanding the way systems operate. He is passionate about investigating the root cause of incidents, or how things came to be the way they are. Security is a full-stack, cross-discipline field and he loves learning about and digging into all of it. So, over to you, Dumby.

Well, that's a little overwhelming. All right, so yeah: anatomy of a mega breach. This is a talk that I built out of the Equifax report, which was the result of a lot of work. I'd like to start with an interesting quote that they included that I really love: "Failure to maintain an accurate inventory undermines all attempts at securing OPM's information systems." That came directly from the Office of Personnel Management's Inspector General.

So to get started, who am I? As mentioned, I'm Dumby. I'm a blue teamer and just a general security nerd; I love everything about all facets of security. The vast majority of the information for this talk, as I mentioned, is pulled from the U.S. House of Representatives Committee on Oversight and Government Reform report on the Equifax data breach. It was released in December 2018 and it contains references and official sources for almost all of the information in this talk. The report itself is amazing and I strongly encourage everybody to go check it out yourself, because it is just full of details and very relatable information.

The report is laid out in seven sections: one, the consumer reporting agency business model; two, regulations for consumer reporting agencies; three, anatomy of the Equifax data breach; four, Equifax notifies the public; five, specific points of failure; six, Equifax remediation efforts; and seven, recommendations.
The sources for the report were internal emails, Congressional testimony, the Mandiant report, Equifax disclosures and situational updates, investor releases, public speaking appearances, and a government consent order, one of my favorites.

So why this talk? I was reviewing the Equifax report. We're pretty lucky in the security industry to have awesome reports pretty consistently. The Verizon DBIR is amazing, another one just came out, and I'm always reading them and finding nuggets of wisdom in them.

So what do I want you to take away from this? An understanding of the timelines of a major breach; the simplicity of attacks; the scale; and why prior research on breach metrics is so important. A review of why organizational structure matters, and why even nerds need to care about executive politics, or at least understand why it's important, so you can support the people who actually get along with your executives. It's a great example of why forced action really sucks and why everyone should do the work to avoid it. Basic hygiene and IT security matter. As we just heard in the panel before, you can get started with security no matter the scale or scope, and even large companies don't do this stuff. It's not common that you find people doing all the basics; the SANS top controls, the CIS Top 20, all of those go by the wayside. So starting from a small business up is very, very important. And also an appreciation and an understanding of just how much action there is in an enterprise as a result of a breach. Lots and lots of stuff changes, hopefully at least.

So let's go over a brief timeline of the Equifax breach. On March 7th, a Struts vulnerability is publicly disclosed as CVE-2017-5638. On March 8th, the next day, US-CERT notifies Equifax directly of the Struts vulnerability. On March 9th, Equifax's Global Threat and Vulnerability Management team, their GTVM, informs systems administrators of the vulnerability.
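The vulnerability they were being warned about was trivially exploitable: CVE-2017-5638 lets an attacker smuggle an OGNL expression into the HTTP Content-Type header, and the public exploit code did exactly that. As a rough sketch of what detection boils down to (my own illustration, not Equifax's actual Snort rule), you can flag requests whose Content-Type contains OGNL markers:

```python
import re

# CVE-2017-5638 exploits carried an OGNL expression (e.g. "%{...}") in the
# Content-Type header instead of a normal MIME type. This crude detector just
# looks for those markers; the real Snort rules were more precise than this.
OGNL_MARKERS = re.compile(r"%\{|\$\{|ognl", re.IGNORECASE)

def looks_like_struts_exploit(content_type: str) -> bool:
    """Flag a Content-Type header that resembles the public exploit traffic."""
    return bool(OGNL_MARKERS.search(content_type))
```

The point is not the regex; it is that the exploit's signature was obvious enough to detect within days, which is exactly what Equifax's emerging threats team did.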
Mandiant's later forensics showed that on March 10th, attackers actually compromised boxes with the Struts vulnerability, just ran `whoami`, and left them there. Nothing more than that. On March 14th, Equifax's emerging threats team (they have a dedicated emerging threats team) released a Snort signature specifically for the Struts exploits that were public. Equifax's countermeasures team deployed those rules to their IDS and IPS systems. On March 16th, the Struts vulnerability was discussed at a monthly meeting of the GTVM, their vulnerability management team. The team is made aware of in-the-wild exploitation; they know this stuff is happening in the wild. Admins responsible for Struts are simply reminded to update immediately.

So let's do a quick rewind. Within days of the Struts vulnerability being disclosed, they had specifically updated their IDS with signatures for Struts, but it didn't find any Struts exploitation. The admins were reminded to check for vulnerable versions and update. All of this failed. Witness testimony collected indicates that the difficulty of scanning a complex legacy environment was the reason the scans failed to detect the vulnerability.

So let's talk some patching. Unlike Equifax, at least these guys really saw there was a problem and tried to fix something. You're going to see that the common theme throughout this talk is that patching is core; it's critical. And as also mentioned in the panel, sometimes you can't patch, and you've got to be aware of that. Legacy environments suck.

So let's review ACIS, the Automated Consumer Interview System. It is a public-facing system that is used to dispute incorrect information on your credit file. Any time you get a delinquent bill that's wrong, you submit your files to this system. It was built in the late 1970s to address the Fair Credit Reporting Act.
The VP responsible for the environment at the time of the breach was hired in 2014. ACIS was built on Sun servers running Struts, and the inventory had been reduced to fewer than 200 servers, but they were running a variety of OS versions.

Apache Struts, let's go over that real quick. It's a framework for building and deploying Java web applications. Struts 1 was released in 2000 and has been maintained since then. Struts 2 was released in 2006, and there have been 27 vulnerabilities with a CVSS score of 7.5 or higher since 2006. This is a big-risk program, and you should be very aware if it's in your environment. There have also been an additional 12 vulnerabilities since 2016, including six with a CVSS higher than nine. This is very serious stuff.

There was no comprehensive software inventory of ACIS. The patch management policy relied on administrators knowing their systems' software and versions, on tribal knowledge. They relied on manually initiating the patching process; there was no automation. The VP wasn't aware at the time of the breach that Struts was used in the environment. He gained this awareness on July 30th, when he was notified that they needed to shut down the system. When probed about how widely Equifax used Struts, the CSO was not able to answer. They just didn't know. There were conflicting reports on software inventory thoroughness; the CSO was not confident that a single comprehensive inventory was available to employees. This is critical: if you're running a big corporation like this, you've got to have an idea of what your inventory is. There's a possibility that individual lists were maintained by IT and security, but they never cooperated to synchronize them and make sure they were up to date. Equifax had a lot of people engaged specifically in inventory management, but at the end of the day it didn't matter, because they just didn't bring it together. Legacy systems were noted as especially difficult to inventory.
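The inventory failure is striking because a first pass at the "where is Struts?" question is mechanical. As a hedged sketch (my own illustration; the report describes no such tooling at Equifax), a script run on each host can walk the deployment trees and report every Struts core jar and its version, which is exactly the question nobody could answer:

```python
import re
from pathlib import Path

# Struts 2 ships its core as "struts2-core-<version>.jar", so the presence
# and version of the framework is readable straight off the filesystem.
STRUTS_JAR = re.compile(r"struts2?-core-(\d+(?:\.\d+)*)\.jar$")

def find_struts(root: str) -> list:
    """Return (path, version) for every Struts core jar under a deployment root."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        match = STRUTS_JAR.match(jar.name)
        if match:
            hits.append((str(jar), match.group(1)))
    return hits
```

Run across the fleet and diffed against an advisory list, a hundred lines like this would have gone a long way toward answering the "how widely do we use Struts" question.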
Scanners and automated systems for information gathering weren't prepared to fully function with Sun servers. So, on May 13th, attackers exploit Struts in the ACIS environment. On July 29th, Equifax detects the data breach. At 9 o'clock p.m., the countermeasures team is doing standard maintenance and uploads 67 new SSL certificates to their IDS and SSL visibility platform; four were specifically for the ACIS environment. SSL inspection resumes, and almost immediately a review of packet captures detects suspicious requests to a Chinese IP. The server response contained more than 10 megabytes of data. That's just always weird. They also thought it might be image files related to credit investigations. While inspection of SSL traffic wasn't functioning, Equifax had still been collecting PCAP and had historical data. The IT team implemented Moloch, reviewed that historical information, and detected that this activity had been going on since at least July 25th. So they were pretty lucky that they caught it within four days, just by chance. There was also an additional IP, belonging to a German ISP but leased to a Chinese provider, that they discovered as well. The countermeasures team blocked the IPs in an attempt to prevent the attack from going further.

On July 30th, the incident investigation continues. They conduct vulnerability tests of the ACIS environment and discover the system is vulnerable to SQL injection and insecure direct object reference attacks. These were not discovered in a previous vulnerability scan that was run in April of 2017. So, just a couple of months apart, and for some reason the results are different; it is unclear why the scans produced different results. The forensics team discovers that the exfiltrated data likely contained personal information, PII. They continue to observe suspicious traffic originating from that secondary IP belonging to a German ISP but leased to a Chinese provider. At 12:41 p.m.
the ACIS web portal is shut down for emergency maintenance, and that ends the direct cyber attack against Equifax. The CSO is informed of the incident at 1:30 p.m. and joins a conference call with the IT and security groups. An email is sent to the chief legal officer, who was on vacation, as well as his covering employee, but neither responded that same day. At 6:30 p.m. the CSO, the chief security officer, calls the senior manager for the ACIS application. She informs him of the situation and requests information about what is running in the environment. Resources from his development team are requested to assist in the investigation and research of the potential breach. The senior manager informs the CIO of the incident at 7:16 p.m.

On July 31st, Equifax initiates an incident response. It took them up until this point to be confident that there was an issue they needed to handle. They assign a codename of Project Sierra to the IR efforts. At 7 o'clock a.m. they have a call to review the vulnerability assessment findings from the scan on the day before, July 30th. The CIO and the ACIS manager meet in the morning to discuss what is known, but neither is familiar yet with how severe the incident may be. It's suspected that the Struts vulnerability was exploited. The ACIS developers provide a web archive file for forensics, and the security team verifies that the ACIS environment does indeed run a Struts version vulnerable to the recently disclosed vulnerability. The CSO reviews information from forensic analysts and informs the CLO that PII is likely involved. So now the chief legal officer has been engaged and is aware of the issue. The CSO does not inform the CIO of the concern that PII was breached. So again, you have information being shared, but it's not cohesive, and it's not understood why some information goes to some places and not others. On August 1st, the CIO is provided a brief update on the investigation.
While it was progressing, there was no new information known. The CIO leaves on vacation August 2nd and doesn't return for two weeks, on August 16th. On July 31st, the CIO had informed the CEO about the security incident and that limited information was available. On August 2nd, Equifax contacts outside counsel and informs the FBI of the breach. Outside counsel contacts Mandiant, and they're hired by Equifax to complete a forensic investigation and determine the scope of the intrusion. On August 3rd, Mandiant begins a forensic review that concludes on October 2nd, so that's a very long investigation. Mandiant preserves databases, searches for relevant queries used by the attackers, and identifies potential access points based on forensic markers. They recreate the attackers' actions and discover the extent of the information accessed, at least preliminarily. On August 11th, Mandiant identifies potential access to consumer PII by the attackers. On August 15th, IT employees inform the CEO that PII was likely stolen. On August 17th, it's determined that large volumes of PII had been compromised. There is a meeting to review the status of the forensic investigation that includes the alphabet soup of executives: CEO, CIO, CLO, CFO, and an ACIS representative. The CIO had returned from his vacation the previous day and gets up to speed. On September 1st, Equifax convenes a board meeting to discuss the investigation, the scale of the PII compromise, and notification plans. The senior leadership team meets again. The CIO discusses the status of the forensic investigation, the number of affected records, possible causes, and further investigative actions. Let me highlight: that's a pretty big gap of about 15 days where they just kind of have to sit around and wait for the giant engine of corporatism to move around them. On September 4th, Equifax, with forensics information from Mandiant, completes a list of approximately 143 million affected consumers. They initiate Project Sparta, and I consider this Equifax getting ready to go to war.
They know that there's something big happening, and they're trying to get prepared for it. So they initiate a response-related effort to prepare a consumer-facing website. This website was meant for the public to identify whether they were part of the breach and also to register for the wonderful credit monitoring and identity protection services that so many of us have now. There were 50 to 60 IT employees directed toward Sparta, taken from their normal duties and instantly thrown into this random development project: hey, we're going to have a bunch of people hitting a website, can you build this thing for us? They aren't informed exactly what has happened, but are told to prepare a web portal for a mass amount of consumers to hit.

Equifax runs on a business-to-business model. They provide information to other businesses, which those businesses use to decide whether you are worthy of their services or not. They are not prepared for 143 million people to access their services. They engage contractors to ramp up to 1,500 call center agents when they normally have 500. So you can see they didn't really do a whole lot, just because they had a poor perspective on how many people would be interested. They prepare for the September 7th notification to the public. That's three days of development for an entire team of 50 to 60 IT employees. How many people here have developed even a simple web project with that many people in a few days? It doesn't happen. It's impossible. So they initially had the 500 call center employees, they ramp up to 1,500, and then it takes them a few weeks to get to about 3,000.

On September 7th, they announce the data breach. A cybersecurity incident affecting approximately 143 million U.S. consumers is announced. The types of information include Social Security numbers, birth dates, addresses, and driver's licenses. So again, 143 million people need to be informed, and that's just the directly affected.
You have everybody in the U.S. and Canada wondering if they're part of that. The attackers also accessed 209,000 credit card numbers and 182,000 credit dispute documents that included PII. Those were just any random images you uploaded to support your case as to why you needed to get something off of your credit record. Consumers are directed to the infamous EquifaxSecurity2017.com domain. How many people remember the joy that that was? It was really fun that day. The stock drops 35% in the first week; it's nice to know that people actually took this seriously. It was a big deal. Federal regulators announce or confirm investigations, and US-CERT warns consumers of phishing scams leveraging the data breach.

So, issues with the EquifaxSecurity2017.com domain. The website and call centers are immediately overwhelmed with requests, and consumers are left without answers as to whether they are affected or not, despite the site being designed for an intake of at least 143 million people over three weeks. Again, why did they assume a timeline where people would think, "I've got a couple weeks to check if I'm part of this"? That's not realistic. Equifax representatives on Twitter aren't clear which domain to use, and some end up pointing people to the wrong domain: SecurityEquifax2017.com instead of EquifaxSecurity2017.com. A wonderful security researcher set up a phishing awareness page on that wrong domain. Luckily they used it as an awareness opportunity and didn't do anything malicious with it, but it was super hilarious. The website provided consumers with incorrect or incomplete information. Some individuals who went to sign up for protection services were just told: sorry, we don't know how to do that right now. People received conflicting answers about whether they were affected by the breach.
You could check on your phone, or on your computer, or on other computers, and you would get different results checking the exact same website with the exact same information. It was hosted with a large cloud provider, but the bottleneck was in the application, developed, again, by 60-plus IT people in the span of one to three days. The internal systems couldn't keep up with the demand, because even though they went to a cloud provider, they had all these back-end systems doing their own validations and checks that couldn't keep up. A coding issue affected the capability to accurately identify whether a consumer was a victim of the breach or not; the rush on development likely led to the quality issue. And again, this is all documented in the report. It's wonderful.

On September 15th, the CIO and the CSO retire. The CEO retires on September 26th. On October 2nd of 2017, more victims are announced. Throughout the investigation, Mandiant had reported finding some failed database queries, but upon re-examination they discovered those queries were actually successful, so that came with an additional 2.5 million records. The senior manager of the ACIS environment is terminated. He was informed that the reason was a failure to forward an email. I don't know how many people have ever run an environment and been fired because they forgot to forward an email, but that's pretty hilarious to me. And it was a GTVM email that was sent broadly to over 400 people. This wasn't targeted toward a specific group; it was just the vulnerability management team saying, hey, everybody that owns systems: Struts is vulnerable, go deal with it. Because he didn't forward that, he was fired.

On October 3rd, 2017, the former CEO testifies before Congress. He repeatedly cites the manager's failure to act on a security warning as a key failure. The ACIS manager, a man named Payne, later testifies on the allegation that he failed to forward the patch email on March 9th.
He claims, very validly, that there was no established process in the patch management policy that required him to take any specific action. The committee confirms that he was never directed by anyone to forward such emails. On March 1st of 2018, Equifax updates its announcements from September 7th and October 2nd. It confirms the identities of an additional 2.4 million U.S. consumers who had names and partial driver's license information stolen. They were not previously identified in the affected population, so the total number of individuals is brought up to over 148 million. There are 12 standard data elements that were taken from the databases, and I'll cover those in a moment, as well as those images that were uploaded to the dispute portal.

With the investigation starting in August, it took Mandiant approximately 30 days to arrive at a firm number of impacted consumers. With the data spread across many tables and databases, they had to do all sorts of assessment analysis to avoid double counting. They had to meet with individual database owners to track backwards and identify what these random, disparate bits of information actually were, because some of it was tokenized and some of it was just randomly generated. Mandiant had to go sit with database owners and walk through their schemas and all of that. On top of that, Mandiant had to identify who the owners of the databases even were, because this wasn't known information within Equifax itself.

So let's cover real quick the actual data that was taken. We have the name, date of birth, and Social Security number of at least 145.5 million people. The full address of an additional 100 million people. The gender and phone number of an additional 27.3 million people. The full driver's license number for 17.6 million.
Email addresses for 1.8 million people. Payment card information for at least 209,000 records. Full ID numbers for an additional 97,500 people. Driver's license info for 27,000. And again, 182,000 random images, which could be anything, uploaded to that dispute portal.

So that covers the timeline of everything; now let's go over the really interesting bit: organizational structure and the dynamics and issues that resulted from it. This is going to sound like a really long-winded corporate tirade, but I think it's really important to understand that they did have policy in place. They had reasonable people doing reasonable work constantly. It just wasn't enough.

So let's go over a brief history of the corporate ladder, and I like this one because it also reads kind of like a crappy Dr. Seuss rhyme for corporate. Prior to 2005, the CSO reported to the CIO, Robert Webb. An internal restructure resulted in the CSO reporting to the chief legal officer instead. Richard Smith was hired as CEO. Tony Spinelli was hired in 2005 as CSO. Spinelli presented a three-year, $15 million plan to reorganize IT security across the entire enterprise. There were fundamental disagreements and a devolving relationship. Security functions were moved from IT to legal: the CIO was removed from controlling security, and it was placed under the CLO. The CLO was referred to as the head of security; that's the chief legal officer, if I didn't make that clear. In 2010, David Webb was hired as the CIO, the chief information officer, to replace Robert Webb. In 2013, Spinelli left Equifax and Susan Mauldin took over as CSO. Despite the CIO, Webb, discussing reverting the organizational structure, it just never happened. Mauldin testified that she saw no issues with the structure of the CSO reporting directly to the CLO. Webb, the CIO, testified that this arrangement was atypical in his experience. In 2017, the CIO, Webb, retires in mid-September.
Webb testified that two weeks prior to this he had discussed with the CEO moving the security group back under the CIO. On September 15th, Equifax announces that Webb, the CIO, and Mauldin, the CSO, had retired and that interim officials would temporarily fill the positions. The interim CSO reported to the interim CIO. This structure continued until February 2018, when a permanent CSO was hired, and the CSO now reports directly to the CEO. That's a good move.

Testimony proves the disconnect between the CIO and the CSO, which created an accountability gap. I don't know how many people here have worked on a blue team, but you pull your hair out over accountability gaps all the time. Webb attempted to separate IT from security and often referred the committee to the CSO: I'm IT, security is not my gig, go talk to this other person. The CSO saw the responsibility as global, and her function was to set forth the rules and policies for the enterprise; security would then work with the IT team to implement the rules security had established. There was no clear responsibility for enforcing any of this according to the organizational structure. There was no high degree of coordination and communication between the security and IT groups. Collaboration between IT and security only happened when required. This separation also led to issues such as the lack of IT-security coordination regarding asset inventory. Super important. The security group was responsible for the "what" and provided security engineering, but security could not effect changes to infrastructure. In the case of a new control appliance, security would make the request; IT would review and approve, install any necessary hardware and software, and then allow security to get back in and configure it. Security could operate software but not install it or change infrastructure. Security was responsible for ensuring the work was completed properly; IT implemented at the direction of the CSO.
In April 2016, IT had a risk and compliance group, and it was moved under Payne, who is also the ACIS manager we talked about earlier. This added the responsibilities of access management, IT audit coordination, and IT security coordination. Payne initiated a process of monthly meetings between IT and security in which the CLO, CSO, CIO, and he himself participated. They identified 10 to 20 initiatives and started tracking them. The testimony from Payne indicates that patch management and certificate deployment were included in those initiatives; a lack of both directly contributed to the breach.

The CEO did not prioritize cybersecurity. Quarterly senior leadership meetings discussed IT security, but only as one of many topics. The CSO was not considered part of senior leadership; she was an executive, but a couple of tiers down from the CEO. Information was generally presented by the CLO, who had no background in IT or security. Again, that's their lawyer. The CSO was the company's security expert, but she didn't have a place in the executive meetings to have her say.

So, a quick review of standard organizations. A Ponemon Institute survey found 50% of CSOs report directly to a CIO. A 2018 PricewaterhouseCoopers study concluded it is more common for the CSO to report directly to the CEO or a board rather than the CIO: 24% of CSOs report to a CIO, while 40% report to a CEO. Just some interesting stats there.

In September 2017, there was a revamp. Following all these announcements and the retirements of the CIO and the CSO, the CSO was retitled chief information security officer, reporting directly to the CEO. The CIO was retitled chief technology officer, also reporting directly to the CEO. This reorganization recognizes that cybersecurity is a core business function and needs a place at the table.
It is drastically different from the previous organization, and it encourages collaboration between the CTO and the CISO, which is super good.

So, the structure of patch management. Again, this stuff was in place at the time of the breach. The patch management policy defined roles and responsibilities and established guidelines for the patching process. It designated two employees to lead implementation: a policy manager and a senior leadership team owner. The policy manager was to ensure that all the work was tracked, and the leadership team owner was to ensure the organization conformed to the policy. Since 2016 they had an official patch management policy for the enterprise, and it was in place on March 8th when the Struts vulnerability was announced. The CIO was the senior leadership team owner, and the CSO was the policy manager. The business owner is informed of the need to patch and is responsible for approving downtime to apply it. The system owner is responsible for applying the patch, and the application owner is responsible for ensuring it was applied properly. Many roles and responsibilities were defined in the policy, but no employees were officially designated for the roles.

Patches received a criticality classification from vendors; while Equifax could alter this, they usually just adopted the vendor's classification. Struts was identified as critical, hands down, no questions about it, and policy dictated it should be resolved within 48 hours of the patch dissemination on March 9th of 2017. This patch was not properly applied until the discovery of the breach in July of 2017, and it is confirmed that exploitation of this vulnerability was the source of the initial intrusion. In testimony, Payne, the ACIS manager, was asked to identify employees by the roles listed within the patch management policy, specifically business owner, system owner, and application owner. There were no employees designated for those roles.
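That 48-hour rule is trivial to check by machine, which makes the four-month gap remarkable. A minimal sketch of the escalation logic (my own illustration, not Equifax's tooling; the function and names are hypothetical):

```python
from datetime import datetime, timedelta
from typing import Optional

CRITICAL_SLA = timedelta(hours=48)  # policy window for critical patches

def sla_breached(disseminated: datetime, remediated: Optional[datetime],
                 now: datetime) -> bool:
    """True once a critical patch has been outstanding longer than the SLA."""
    return ((remediated or now) - disseminated) > CRITICAL_SLA
```

Fed by the GTVM bulletin on March 9th, a check like this would have been flagging the Struts patch by March 11th; instead the clock quietly ran until the end of July.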
Payne testified, and this is one of my favorite things in the entire report, that he had no specific role or responsibility to patch the ACIS system, stating that he was a manager of managers who managed teams that would fulfill the roles laid out in the policy. So you can see here I have this nice red box: responsibility? There is none. Everybody can point fingers and say, well, these people were supposed to do this, but they were supposed to get information from these other people, and nobody has responsibility at the end of the day. Payne just slides by and gets fired because he didn't forward an email.

The patch management policy required system and application owners to subscribe to the vulnerability distribution bulletins. There was no official designation of who was supposed to be subscribing to those bulletins and receiving the information, and there was no mechanism for ensuring anyone followed the subscription requirement. The CIO testified that the patch management policy did not work in the case of this vulnerability: a process was in place, but no people to conform to it, and there was a failure in the technology to ensure the policy was enforced.

Executive leaders had noticed many issues in the patching process as far back as 2015, when they did a full patch management policy audit. A number of significant deficiencies were identified within the patching process, and eight detailed findings with recommended actions were provided. Not many of these identified issues were ever remediated, even over a very long time scale; you're talking years. No automated patching tools were implemented to establish redundancies in the patching process. Asset management controls were in need of improvement. As of July 2017, the company did not have a comprehensive software or IT asset inventory. They did not. Employees had previously identified the presence of Struts in the ACIS environment, because they remediated another Struts vulnerability in January of 2017.
Can you see how crazy that is? They're doing things, but there's all this stumbling: from January to March they somehow forgot that Struts was there. The information wasn't reliably tracked, and they failed to identify the presence of Struts in July of 2017.

So, a quick review. I really do appreciate the patch management audit that they went through. I'm just going to breeze through these eight findings real quick: the finding, the suggested recommendation, and the complete-by date. One: vulnerabilities were not remediated in a timely manner. Recommendation: implement automated patching tools and retire legacy systems as quickly as possible, to be done by end of year 2016. Two: Equifax lacked adequate asset management procedures. Recommendation: improve IT asset management controls, completed by mid-year 2017. Three: systems were not patched in a timely manner. Recommendation: implement and enforce a proactive patching process, to be completed by end of year 2016. Four: vulnerabilities were not adequately tracked, prioritized, and monitored to ensure timely remediation. An honor system was used to ensure patches were installed, with no controls in place such as a patching exception tracker to escalate vulnerabilities not remediated in a timely manner. Recommendation: create a controlled patch and exception process to assess, prioritize, and so on; long-term solution targeted for 2017. Five: new systems and changes to existing systems were not required to be scanned. You should probably scan stuff; they wanted to start doing that, to be completed by end of year 2015. They never started scanning new systems as they were set up. Six: server hardening standards had not been developed for Windows systems. Recommendation: document and publish Windows server hardening standards, completed by March 31st of 2016. Seven: patches were inadequately and inconsistently tested prior to deployment.
Recommendation: test your patches, completed by mid-year 2016. Eight: the patch management policy did not consider the criticality of an IT asset when determining the time frame for patch installation. Recommendation: review all IT assets and classify risk; enhance the patch management policy to include more stringent patching requirements for high-risk systems, completed by end of year 2015. So how many of those did they get done? Again, this is their own stuff. This is an executive who got a whole team together in 2015 and found all this great information. They just couldn't act on it. So, the certificate management process. As we discussed earlier, Equifax was aware of a disconnect between policy development and implementation of certificate management. It had no process for updating SSL certs. Security employees uploading the Struts signature rule into their IPS noted issues, and they had two recommendations of what to fix, coming directly from the IT employees themselves. Number one: define who owns SSL certificates. Number two: create and validate an SSL certificate update process. So this reconfirms that process really matters. I don't understand how they had all this overhead and all this proper planning and preparation, yet they just refused to get process in place and responsibility in line. So an internal vulnerability assessment tracker entry from January 20th of 2017 stated: SSL visibility devices are missing certificates, limiting visibility. At the time of the breach, 324 SSL certificates had expired, 79 of which were for monitoring critical domains. This failure directly led to a loss of visibility into the intrusion on May 13th. So, more on ACIS? Yes. Equifax was fully aware a legacy system from the '80s was being used by millions of customers in a public-facing manner. It still failed to prioritize security and modernization.
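A lapse like those 324 expired certificates is cheap to detect, which is what makes it so painful. As a minimal sketch of the idea (this is my illustration, not anything Equifax ran; the hostnames, the 30-day threshold, and the helper names are all made up), a daily script can pull each monitoring device's certificate and flag anything expired or close to it:

```python
import socket
import ssl
from datetime import datetime, timezone
from typing import Optional

def days_remaining(not_after: str, now: datetime) -> int:
    """Parse an OpenSSL-style notAfter string, e.g. 'May 20 00:00:00 2017 GMT',
    and return whole days until expiry relative to `now` (negative = expired)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - now).days

def check_host(host: str, port: int = 443, warn_days: int = 30) -> Optional[int]:
    """Connect to a host, read its certificate, and return days remaining
    if the cert is expired or inside the warning window (otherwise None)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    days = days_remaining(cert["notAfter"], datetime.now(timezone.utc))
    return days if days < warn_days else None
```

Run against the full list of SSL visibility devices and critical domains, anything this flags is a pending loss of monitoring, exactly the gap that hid the May 13th intrusion.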
The CSO requested an assessment of the ACIS security concerns in preparation for the August 17th data breach investigation meeting. Six concerns were cataloged. Number one: no segmentation between the Sun servers and the rest of the Equifax environment. Access to the ACIS network granted access to any other database, device, or server within the Equifax network globally. A 2015 audit indicated the lack of segmentation; again, we reviewed that. They knew they only needed access to three databases for the application, but far more were available. Mandiant confirmed that segmentation would have restricted the severity of the breach. The CSO and the CIO testified that before the breach, they were unaware that ACIS lacked segmentation. Surprise. Two: file integrity monitoring, or FIM, was not in place for application or web servers. There were no alarms generated when over 30 web shells were placed within the Equifax network. The CIO testified that before the breach, she was unaware that file integrity monitoring was not implemented in the ACIS environment. Three: the Sun systems used an NFS. Access to one machine granted access to the entire NFS, including administrator and configuration files. An Equifax vulnerability tracker found a Sun server accepting NFS requests from any source. The NFS contained stored application credentials that granted access to sensitive systems outside of the ACIS environment. Four: logs were only retained for 14 days on system and 30 days at a collector. NIST recommends that high-impact systems retain logs for between three and twelve months. Studies continue to show that the average dwell time, or time to detect a security breach, is 98 days. The CIO dismissed log retention periods for internet-facing systems. Five: a complete inventory of resources within the application was not maintained. A 2015 audit found that an IT inventory did not exist, nor did accurate network documentation. A global view of IT infrastructure did not exist.
The CIO dismissed the importance of an inventory for the security team. Six: consistent and timely patching of the Sun systems was, as a general observation, a concern. Equifax admitted patch management was ineffective, and the 2015 patch management audit corroborated this. They recognized that the patching process was not being properly implemented and failed to take timely corrective action. So here are some of my favorite things. Again, these are all pulled directly from the report, and this is what I'd like to call my hall of "what?". I'm going to review the quote from the beginning: it is critical for an organization to know what assets are present within its IT environment and to make accurate and informed risk determinations, such as when and how to patch a vulnerable system. As the Office of Personnel Management's Inspector General warned as part of the 2015 OPM data breach, failure to maintain an accurate inventory undermines all attempts at securing OPM's information systems. So let's go over some of these quotes. Answer: "Well, it's not necessarily too short. I think that logs, and the retention of them, is always an it-depends kind of answer. It depends on what they're used for, how much space they take, and those kinds of things. So there are various strategies with logs and it's really, in my opinion, dependent on that environment." Question: "Well, with this ACIS environment being external facing, is 14 days on disk and 30 days collected sufficient in terms of..." and they get cut off. Answer: "I think it certainly could be sufficient." Question: "Are you surprised that there's not a complete inventory in this type of environment?" Answer: "I wouldn't say that I'm surprised. No, not necessarily, but that would not, from a security perspective, keep us from doing our job properly." Question: "Wouldn't you have to know, though, that the Apache Struts software was operating in this environment, and if you didn't have an inventory you wouldn't know?" Answer: "Well, we might not know."
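Concern two above, the missing file integrity monitoring that let 30-plus web shells land without a single alarm, is also simple in principle. Here is a toy sketch of the core idea (mine, not Equifax's tooling; any real deployment would use a proper FIM product): hash a baseline of the web root, re-hash on a schedule, and alarm on anything added, removed, or changed.

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Walk a directory tree and record a SHA-256 hash for every file."""
    hashes = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def diff(baseline: dict, current: dict) -> dict:
    """Compare two snapshots: report files added, removed, or modified.
    A newly dropped web shell shows up under 'added'."""
    return {
        "added": sorted(set(current) - set(baseline)),
        "removed": sorted(set(baseline) - set(current)),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }
```

Anything in the "added" or "modified" buckets on an application server that isn't tied to a known deployment is worth an alert, which is exactly the signal Equifax never had.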
"But again, I don't think that not knowing that would prevent us from doing the right things from a security point of view." It's all in the report. It's amazing. It's glorious. So the CIO indicated that Equifax was in the process of making ACIS compliant with, guess what we heard about in the last panel, PCI DSS. This is pretty hard. This isn't something that you just suddenly go and do, but it's pretty good because it has five main things that you want to worry about. One: use of file integrity monitoring. Two: strong access control measures. Three: retaining logs for at least a year, with immediate availability of three months. Four: installing and updating patches. Five: maintenance of an updated inventory system. They started the compliance process in August of 2016, but it was not complete a year later, by August of 2017. There was a modernization process underway at the time of the breach. This was essentially a rebuild of everything, instead of attempting to maintain that legacy ACIS environment built on Sun servers. Their next-generation environment was a private cloud. They were going to implement best practices, orchestration, all this good stuff. The project was started in 2015, but by 2017 the ACIS application had not yet been integrated into the replacement environment. The biggest delay was a lack of domain expertise in the legacy environment. Additionally, they were building a consumer care management system to replace ACIS. Development started in 2014 but fell behind as a priority. While risk was a major motivator for the new system, they were primarily concerned with the difficulty and cost of maintaining a legacy environment as well. So let's see. Again, the attackers' dwell time ran from May 13th of 2017 to July 30th of 2017. The recommendations of Mandiant, and there's a lot on these slides coming up, so I'm just going to kind of run through them. Number one: enhance vulnerability scanning and patch management processes.
Two: reduce the scope of sensitive data. Three: increase restrictions and controls for accessing data housed within critical databases. Four: enhance network segmentation. Five: deploy additional web application firewalls. Six: accelerate the deployment of file integrity monitoring. Seven: enforce additional network, application, and database-level logging. Eight: accelerate deployment of a privileged account management solution. Nine: enhance visibility into encrypted traffic by deploying additional inline network traffic decryption capabilities. Ten: deploy additional endpoint detection and response agent technologies. Eleven: deploy additional email protection and monitoring technologies. So how much of that was directly related to the breach that just happened? All that stuff is really good, but it's useless in the context of stopping this specific breach. So the former CEO testified to the progress of recommendation implementation on October 3rd of 2017. The progress was: vulnerability scanning and patch management processes and procedures were enhanced; the scope of sensitive data retained had been minimized; restrictions and controls on sensitive data were strengthened; network segmentation was increased for internet-facing systems; the WAF was deployed and signatures tuned; file integrity monitoring was being deployed; more thorough application, database, and system logging was being implemented; there was a 30-, 60-, and 90-day implementation plan for additional improvements; and an independent firm would conduct a top-to-bottom assessment of Equifax's information security posture. Mandiant and Equifax confirmed that all 11 remedial recommendations had been implemented in August of 2018. So September 19th of 2017 to August of 2018 is the time from recommendation by Mandiant to implementation by Equifax. So here's another bit of testimony. Answer: "So yeah, several of those were underway and were things that we were already working on with the security program."
"Some of these got accelerated and were able to, it looks like, get a boost as a result of having Mandiant and additional resources to get those implemented." Question: "When you say accelerated, is that as of July 2017 or prior to that?" Answer: "What I was referring to is, after Mandiant came in to assist with the investigation, they were able to add resources to help us get some of these things finished more quickly than we would have done on our own natural timeline." So, forced action. There is a consent order from the government placed on Equifax as a result of their investigation into them. There's so much here, and I'm just going to blow through it real quick, because you do not want this. Overall it addresses the oversight deemed necessary for Equifax, down to who reviews what, how often, and in what detail. You want to avoid this. Do not get the government into your business and telling you what to do with your IT, because government-mandated cloud definitions, that's pretty much my nightmare. Well, maybe except for a governmental order to get an IT asset inventory. Actually, you probably don't want that either. So they had 90 days for the following: a risk assessment that addresses foreseeable threats to the PII the company holds and potential damage to the company's business operations, with safeguards or mitigating controls to address each threat and vulnerability. Yeah. Then board and management oversight: approve a consolidated written information security program and policy that is updated annually; review an annual report from management on the adequacy of the information security program; enhance the level of detail within board minutes; review and approve standard IS policies and ensure they are up to date; ensure incident response procedure guides are up to date; and clarify the roles and responsibilities of groups involved in incident response.
Vendor management: monitor management's documentation of efforts to comply with PCI DSS; review and approve policies and procedures for outsourcing management; oversee management's development of a definition of cloud services and of policies that provide guidance for when the use of cloud-based services is permissible. Patch management: maintain a comprehensive IT asset inventory that includes the hardware, software, and location of assets; formalize an identification process for patches that need to be installed; develop an action plan for decommissioning legacy systems, including compensating controls until those systems are removed; formalize the patch management policy. IT operations: ensure the key process continuity plans are independently reviewed at least annually; formalize emergency change standards. Thirty days to improve the oversight of auditing: a formal risk analysis process used to set the scope and frequency of IT audits; an audit schedule prepared on a multi-year basis; audits of critical and high-risk areas at least annually; issue tracking reports and issue aging reports submitted quarterly to an audit committee; validation via internal audit that severe issues are resolved in a timely manner; guidelines for ensuring that internal audit is not involved in daily operations. By July 31st, approximately one month from the consent order being issued: submit to the multi-state regulatory agencies a list of all remediation efforts planned and in process in response to the breach, plus a written report outlining progress toward complying with each provision of the consent order. These then must be submitted every quarter. There's still more. Six months to formalize a process to proactively identify what patches need to be installed.
Populate current metrics; prioritize and address outstanding critical, high, and medium risk patch management audit findings; provide programmers job-specific training covering secure coding; conclude the process of removing development staff's system access to production environments. Equifax's board will require management to have an independent party test controls relating to all remediation efforts, and report to the multi-state agencies, blah, blah, blah. You don't want that. It's the worst. It is the most heartbreaking thing to read through that entire government consent order, and I don't recommend you do it to yourself. So, tallying up the damage: credit card numbers for 209,000 US and Canadian consumers; PII of 146.6 million consumers; 221.5 million dollars in costs in 2018 alone, expected to reach 350 million dollars by the end of the year, covering the incremental costs to transform the IT infrastructure, professional services, and the costs of services provided to consumers; 61.4 million dollars in legal and investigative fees. A total of 74 million dollars was spent on the incident response and breach work alone, close to two times the yearly security budget; the CSO claimed the budget for IT security prior to the breach was 38 million dollars. They maintained 125 million dollars in cybersecurity insurance coverage with a 7.5 million dollar deductible, and they recovered about 95 million dollars of that. So overall, from September of 2017 to December of 2018, a total of 430.5 million dollars in security costs and response to the breach. We also just recently heard about all the class action lawsuits and the settlement money, so you can get your pennies out of it. Who would take 125 dollars when you could have credit monitoring? I mean, come on, guys. So what can we learn? Don't ignore the past while you're focused on the future. Equifax had invested in a replacement infrastructure for the Automated Consumer Interview System but hadn't properly prioritized moving to it.
This is despite an acknowledgement of the difficulty of maintaining legacy environments. Sophisticated attackers do utilize technologies such as encrypted C2. Now again, it's hard to call this attack sophisticated, but this was a very specific group doing this, and they were able to use the information in very malicious ways. So I think that knowing that they do use encrypted C2 is important. Forced action really sucks: the Mandiant recommendations and the consent order. So invest proactively so you aren't operating reactively. Don't let an IR be your motivation to complete IT projects; it's expensive and painful to work on their time and dime. Mandiant is a very expensive IT company to hire. If Equifax had properly maintained policy, procedure, and organizational structure, it's likely that the government wouldn't have gotten involved in verifying the remediation was consistent and thorough. Also, control failures cascade and result in extreme severity from a very simple breach. So properly notify stakeholders of vulnerabilities and maintain your security investments: verify they are working correctly, patch things on time, keep an asset inventory list. Organizational structure matters. Cooperation and interoperability between business groups is critical to healthy security. Anything providing redundancy in security monitoring or patching notification would have likely restricted the breach severity as well. Cybersecurity executives do matter, and organizations are really adapting to that. A breach is inevitable, so be prepared. If you handle information of millions of people, probably also be prepared to notify them in the case of a breach. Don't be stuck training thousands of call center employees and building random web apps last minute. Have a plan to quickly triage the scope of a breach and methods for communicating that information.
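The "keep an asset inventory list" lesson can be made concrete with almost no machinery. Here's a toy sketch (the fields, hostnames, and package names are invented for illustration, not from the report) of an inventory that can answer the two questions Equifax couldn't: which hosts run a given piece of software, and who owns each box so the patch bulletin actually reaches someone.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    hostname: str
    owner: str                      # who is responsible for patching it
    internet_facing: bool
    software: set = field(default_factory=set)

def hosts_running(inventory, package: str):
    """'Where does Apache Struts run?' -- the question an inventory answers."""
    return sorted(a.hostname for a in inventory if package in a.software)

def unowned(inventory):
    """Assets with no designated owner: nobody to forward the bulletin to."""
    return sorted(a.hostname for a in inventory if not a.owner)
```

Even a spreadsheet with these four columns, kept current, would have let Equifax turn the March 9th GTVM bulletin into a targeted to-do list instead of an honor system.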
Probably use a page on your real domain for notification, not a new one like equifax2017.com, or was that equifaxsecurity2017.com? They relied on capabilities such as SSL decryption and seemingly did no backup monitoring: no NetFlow, no other IDS functionality. Again, they had all the data; they just never implemented Moloch or any other external system to review the information they had. They relied on one technology alone. So that's it. As you can see here, this is one of the best things that came from the entire breach. Equifax was trying to play some crowd control, and you can see here Tim never got the memo: "Hi, for more information about the product enrollment please visit securityequifax2017.com." Tim, that was the wrong domain. That was a phishing page. They did that for weeks, even after the page was identified. So, lots of fun stuff. And that's really it. So, thank you.