I have the unique pleasure of introducing another long-time friend of mine and a long-time friend of all of us here at Toor. We started the day with a long-time friend and we're closing out the day with a long-time friend, so it makes a wonderful set of bookends, which makes me really happy. We've got Matrix here talking about data security and how to avoid an embarrassing breach, so please welcome him to the final talk of the day. Alright, so I have worked in information security for over 15 years. I've done quite a few interesting things in regard to incident response. I've presented at ToorCon, ToorCon Seattle, DerbyCon, and LayerOne, all the good ones, so I'm glad to have been selected as the alternate. This talk is going to cover data, and a lot of people don't actually think about data: how we handle it, how expansive it is, and things like that. So we're going to cover the history of data security, the way we used to do things and why we used to do them, a data security life cycle, data classification, and the types of data that actually exist. So questions usually get brought up. I actually get paid quite a lot to solve these issues, so how do I solve them? There are elements of data security, as in all of information security, which are people, process, and technology. I will also cover the industry standards, we will do some threat modeling and tabletop exercises, and we'll talk about architecture. So why would we need data security? A lot of people have now started thinking about data security due to privacy laws like GDPR. If you haven't been paying attention, those are all the emails you've been spammed with. The thing is, it actually does have some meat behind it, in that it requires certain data types to be categorized, cataloged, and able to be dealt with.
So if I'm a European citizen and I want my data deleted, I can have that data deleted. If I'm a European citizen and you gave my data to a third party, I can ask who you gave my data to. So in general, knowing what type of data you have and how to catalog it is pretty good, but this is not that talk. It's kind of bad because we all use acronyms and nobody knows what the hell is going on. Here are some real fun ones: DAR, DIM, DLP, UBA, PHI, PII, IP, and there are more. DAR is data at rest; DIM is data in motion; DLP is data loss prevention. UBA is supposedly the new hotness when you implement it: user behavior analytics, which uses AI and machine learning, which we know doesn't really work that well. But essentially what it's supposed to do is work in tandem with user intent. So if I'm stealing all your documents, is it part of my normal business process, or is it a business process that's broken? If I stick a USB drive in and start copying out the secret formula for Coke, that should never happen. So it's going to ask: what is this document? What is this user doing with this document? Are they actually supposed to have access to it? If not, it's going to kick off flags and somebody should be able to respond. PHI is protected health information. PII is personally identifiable information. And then we have intellectual property, IP. So if we go way, way back: the internet, ARPANET, was only a concept in 1969. Nobody thought that we were going to have the cloud, the internet, and all that stuff. ARPANET changed to DARPANET and back again, so we could spread data around with interconnectivity to increase research. We kind of also shot ourselves in the foot with that, because now everything, as far as data goes, moves around.
When we ended World War II, we actually had a Cold War, which meant a lot of research and development. And since we could actually transfer this data back and forth, we needed to figure out how to prevent the leakage of the secrets to our missiles, our space program, and everything else. If you are a CISSP and you passed the exam, you should already know what I'm about to show. If you've got the cert and you still don't get it, I feel sorry, because then it really is a mill. So there are two models that existed. The Biba model was developed in 1975 specifically to prevent corruption of data. Corruption of data is about the integrity of it, and the model works in a read-up, write-down manner. So essentially you can only create content at or below your integrity level. Think about it as a hierarchy of monks: only the top priest can write the scriptures, and the interns can only read them. They can't write them. So you are protecting the integrity of data by making sure only the highest authority writes the data. The next methodology is the Bell-LaPadula model from 1972, and these two actually mirror and contradict one another. This one is called write up, read down. So you take your average, ordinary office worker and they are going to build something: I have office clerks who produce news feeds, and they write them at their own level. The way the data is protected is that it is only readable at or above your level. Think of it as: you are a scientist creating something for the space program. You can create the top secret files, but you can't write them down into the public files. A person who has higher clearance than you can read down. A general can definitely read down so that they actually have the specifics of the data.
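The two models just described can be sketched as simple lattice checks. This is a minimal illustration, not any particular product's implementation; the level names are invented, and real MLS systems add categories and compartments on top of this.

```python
from enum import IntEnum

class Level(IntEnum):
    PUBLIC = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3

def blp_can_read(subject: Level, obj: Level) -> bool:
    """Bell-LaPadula (confidentiality): no read up -- you read at or below your clearance."""
    return subject >= obj

def blp_can_write(subject: Level, obj: Level) -> bool:
    """Bell-LaPadula: no write down -- you write at or above your clearance."""
    return subject <= obj

def biba_can_read(subject: Level, obj: Level) -> bool:
    """Biba (integrity): no read down -- you read at or above your integrity level."""
    return subject <= obj

def biba_can_write(subject: Level, obj: Level) -> bool:
    """Biba: no write up -- you write at or below your integrity level."""
    return subject >= obj

# The scientist example: SECRET clearance may write up into TOP_SECRET but not read it back
assert blp_can_write(Level.SECRET, Level.TOP_SECRET)
assert not blp_can_read(Level.SECRET, Level.TOP_SECRET)
```

Note how the two models are exact mirrors: Bell-LaPadula protects confidentiality (the general reads down), while Biba protects integrity (only the top priest writes down).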
But if I am not in a classified area or don't have the clearance to access that data, I cannot actually access the data. So we built machines back in the '80s specifically to have MLS, which is multi-level security. We don't really build machines with that capability built in anymore. What that means is that my particular box, a mainframe or an AS/400 or something that was built during the Cold War, essentially has different security levels. The IBM AS/400, which is now the iSeries, still has those Cold War controls in it. A lot of people don't turn them on, unfortunately. So if I turn an iSeries AS/400 platform up to security level 40, it is going to strictly enforce the database so that if you do not have the proper clearance, you actually cannot access the data. So it follows that model, the write-up, read-down one: whatever level you are, you write at it, you have access to it, and people higher than your level can read it. The thing of it is, these were designed back in the day, once again during the Cold War, and memory is protected, memory space is protected, and databases were segmented. You could have multi-tenant things on there and be assured that unless you had the proper clearance level, access wasn't going to happen. And Spectre and Meltdown would not be a thing in this environment. So it's kind of funny, because C-Dragon, awesome guy, awesome friend, his Twitter says he's given up on sanity and decided, "I'm going to retire to a nice AS/400 somewhere in New Zealand when it's time to unplug." In other words, he's saying: I want to go to a very, very secure environment, I want to be segmented, and only people who need to have access to me can have that access. We also had things back in the day like data custodian roles. We have pretty much gotten rid of those because they're a cost center. So who here remembers the large green bar paper? Yeah, that was the way you used to get things.
You didn't get an actual database extract, you didn't get anything. If you were lucky and it was approved, a data custodian would print out all your data and you would get a big green bar sheet, and that's the way it was. And gee, there were no breaches back then. The other thing is, backups were a part of this role. So with ransomware, if you lost all your stuff, this was the guy whose job it was. He basically protects the data. He validates that backups are done, he validates that people with the proper access levels to the backups get the data back, and he essentially ensured that any of the data that was stored never went bad and stayed correct. We don't do this anymore. We have fast business. Let's throw it in the cloud. Give it to everyone. Let's set up Hadoop and throw our data everywhere, even though it's unsanitized and we have no idea what that data is. So given the fact that we've moved to the fast business model, it requires that we have all the things: all the data, all the time, everywhere, and we don't care what it is. With the GDPR thing I mentioned, that's going to have to change for people, because you can't put that type of data just anywhere and not know where it is. So hey, why don't we go ahead and give API rights to the entire database? Yeah, that's happened. That's a breach. Let's take a mainframe environment or an AS/400 environment, which is pretty locked down and you don't actually get access to it, and just throw the data everywhere. Or even worse: hey, that public S3 bucket? It's just data, let's move it there. That's happened. We now interrupt this talk to ask: why the hell would you do that? Why? So what changed? Why did we have the advent of breaches? Yeah, we got the internet. We got email. Let's take the formula for Coke and email it to the marketing team. Great.
We got cloud instances like S3 buckets. We've got laptop computers that could actually be encrypted, but I find that very highly unlikely, because most people don't have the maturity to even have a proper image on a laptop. We've got people who store data on their phone, and then that goes to iCloud. Then we got Dropbox, mass storage the size of your smallest fingernail, and of course the Internet of Things. All right. So going back to the way that data should be protected: all the CISSPs already know that they changed it to the AIC triad; it's more politically correct. So you have this triad, and it's a triangle. What you should be thinking about as far as your data goes is, one, availability: is it always accessible to me as a business or an individual, because I need that data? I need confidentiality: when I have this data, are the right people accessing it? Am I giving it away? Is it in the public cloud? And then you need the last leg of the triad, integrity: is my data correct? Going back to the two models I talked about earlier, you have the confidentiality part, which is the write-up, read-down Bell-LaPadula model, and then you have the Biba model for integrity. So what is the attack vector for confidentiality? Exfiltration, and people who should not be accessing that data. What is it for integrity? Well, you can't change your FICO score just because you want to. And what is it for availability? I've got to be able to access my data every time, all the time. So this is a data security life cycle. There are a couple of things that I feel are missing from it. When you work with data, it starts at the create phase, and then it's stored, then it's used, then it needs to be shared, then archived, and most importantly, destroyed. So when I go into a client, the first thing I usually ask is: okay, I need to see what your standards and policies are.
And they usually show me these, and they're so poorly written that there isn't really a chance in hell of actually fixing things. So the first thing I want to do is look at what I can destroy, and that's going to be retention. Once again: you can't exfil what ain't around. If you delete it, yeah, I can't take it. The thing of it is, we need to destroy data and not be afraid to delete it. There are regulatory things: if you're in a bank, you have to keep data for seven years. I'm stuck with that seven years; it's not changeable whatsoever. If I'm in certain government places, I have to keep data for up to 14 years. But you should make sure that when you have this conversation, you do have a retention policy that will ensure you don't have any issues with retention. I would say the most dangerous people in your organization are data hoarders. Ask Sony how those unencrypted PST files helped them, or rather embarrassed them. That was really, really bad, and it didn't even have anything to do with actual business; personal emails leaked. So data hoarders are dangerous, and people can't steal what's deleted. Now let's say I'm an R&D house, or I'm designing things and I have intellectual property. I need the old designs to design things better, so I can't actually destroy them. This is where we talk about the archive phase, and what do we see people making mistakes on every time they archive? They don't know what they were actually backing up to begin with. So it is important that you think about making it harder for your attacker. You can solve this one of two ways: you can encrypt all the archives or backups, or you can use rights management. That way, even if that data is exfiltrated, it's pretty useless.
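The retention idea above ("you can't exfil what ain't around") can be sketched as a simple sweep. This is a toy illustration keyed only on file age; a real retention policy would key off record type, regulatory class, and legal holds, and the seven-year window is just the banking example from the talk.

```python
import time
from pathlib import Path

SEVEN_YEARS = 7 * 365 * 24 * 3600  # bank-style retention window, in seconds

def retention_sweep(root, max_age=SEVEN_YEARS, dry_run=True):
    """Flag (or delete) files whose last-modified time is past the retention window."""
    cutoff = time.time() - max_age
    expired = [p for p in Path(root).rglob("*")
               if p.is_file() and p.stat().st_mtime < cutoff]
    if not dry_run:
        for p in expired:
            p.unlink()  # "people can't steal what's deleted"
    return expired
```

Running it with `dry_run=True` first gives you the list to review with legal before anything is actually destroyed.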
So backups aren't even a part of that little circle diagram I showed you. And now, more than ever, it's going to be much more important, because the whole city of Atlanta essentially lost their whole infrastructure and didn't have any backups. The backups that they did have were about five years too old, so that was pretty much disastrous. So once again, we need to ensure that backups actually work. And if they go missing on a truck headed to the something-mountain place, they should be encrypted, or all the good stuff should have rights management. So once again, even if I'm hit with ransomware, I'm still operational; at least I can come back within a day or 48 hours' worth of loss. I don't know of too many organizations that haven't lost more than two weeks' worth of work, so that's something you want to actually look at in your organization. On average, when I've done incident response and we've had to take machines offline, it's usually about a month's worth of data that was behind, because we can't actually take something off production. So things you can do to make sure: run systems in parallel for high availability and at least have those warm transfers. Databases are the absolute most horrible ones to try to do this with, because there are just too many transactions. The only thing I suggest there is to run two databases in hot-swappable HA so the databases stay correct, and you encrypt the hell out of them and monitor the actual tables that you understand to be very sensitive. So, data classification. We go back to that little circle diagram, and it's the create mode. Which came first, the chicken or the egg? Well, you created the file. What are you going to do to deal with it? I suggest that by the time you create a file, you should have already classified the data.
The reason is, you then don't have the excuse of "I didn't know what data I was dealing with, so I didn't know I wasn't supposed to send the stuff out." Other technologies actually rely on this. I was talking about UBA, user behavior analytics, as well as data loss prevention. They suck. You cannot actually get any of the data loss prevention or UBA things to work out of the box. It's not going to happen. Once again, magical AI and machine learning just don't do it. The other thing is, I can't turn DLP on in an active block mode, because there are too many critical things that may need to go out. If I block an executive, or a sales guy, from doing the million-dollar deal because he had to send out stuff that we approved, yeah, I'm probably going to be fired for blocking that. But if we actually have classification with meta tags, that AI and machine learning can take the real meta tag and go: this is classified at this person's level (once again, the write-up, read-down idea), they are authorized to send it to XYZ, and this is a normal part of their business process. I don't care that it's three o'clock in the morning; it's an email, it's going out, they're cleared to do this type of thing. That's a proper way to have a monitor mode and have audit. And it's better to actually block than to attempt to just do monitor mode: your SOC will never catch it, and all that data will be exfiltrated. We've done all kinds of interesting things to solve these problems. We've done: hey, if you see 500 credit cards, block it. Credit cards are super easy to do because we have the Luhn algorithm. But what about all the different states' driver's licenses? A New York driver's license cannot be detected via Luhn. It doesn't have a pattern; it's all over the damn place. So if I have a thousand New York driver's licenses go out, they will go out. And I'm going to let them go out.
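For reference, the Luhn check that makes card numbers easy for DLP patterns is only a few lines; the minimum-length cutoff here is an arbitrary illustration.

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any result over 9, and the total must be divisible by 10."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:  # too short to be a payment card number
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:    # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A DLP rule can count strings that pass this check with high confidence, which is exactly what a free-form identifier like a New York driver's license number does not give you.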
Or else I'm going to be blocking everything. It's just not very elegant. So I highly recommend that you look into some type of data classification to tag the document with metadata upon creation. There are very, very good ones out there. Somebody asked me once to explain how this all works. So, data classification: we have a single file. That little drop, a single file, not a problem. Petabytes? What the fuck, I can't even. So from drop to flood, where the hell does it go? I walk into an organization, they have EMCs, they have everything, they have petabytes, hundreds of terabytes of data. They ask me, "Hey, I need you to find all the PII. Can you tell me where the PII is?" I'm like, ah, no. You have to start somewhere. So to solve these problems, what I actually do is implement data classification and start meta-tagging things. And since I know data is like water, the drops are going to follow the rest of the flood. Once I've tagged it, I start looking for what I tagged. And if I've found some good stuff, I'm guaranteed to find where the rest of it is, instead of trying to scan the entire petabyte, because I can't do enough distributed computing. Especially if it's a database: scanning a database would take it down. I can't do that, because they're going to say, oh, you need to scan the database, you can only do it in this window, but that will affect the backups, and then you can't do it at all. So as I said: make sure you tag it, and then just start following it, and you're definitely guaranteed to find it. So the benefits of doing data classification, as an exercise and by implementing a technology, are: one, I know what to back up, and I know how recent these things are.
So if I start doing the metadata tags, I can say: show me all the things that were tagged as having confidential information, and start backing that up. I know I need to back that up. It also shows me the date it was last accessed and last touched. Once again, that discovery thing I was talking about: find me all my data, and I just follow the rest of the data. You can then consider smarter security architecture, given the fact that nobody does defense in depth right because they're trying to defend all the things. As I was saying, to help along the machine learning and AI, I have to rely on those meta tags. If I want to do rights management correctly and make sure that I properly lock down particular files that contain the things I need protected, hey, I can prove it was done because of data classification, and I get general situational awareness. Now, I will say, if you do choose a vendor who does data classification, you need a vendor that will actually watch the state of the file from cradle to grave. So I create a document. It may be an Excel spreadsheet. It may have first names and last names. Nobody cares about that; that's cool. But then I give it to Joe, and he adds telephone numbers. Then I give it to another guy, and oh, we have social security numbers in that Excel spreadsheet now. Your data classification software should notice that this document has been changed: it wasn't sensitive, it wasn't confidential, and oh, crap, now we have SSNs, first names, and last names. It needs to re-tag that with a sensitive classification. All right. So we have types of data. There's structured data; this is the easiest to deal with. Structured data is essentially all your databases. All your AS/400 systems run databases and actually control the tables. A lot of customers ask me to do the wrong thing: "All my good stuff is in the database. I need to protect this."
Yes, we know you need to protect it. But databases were inherently built to be pretty secure in most situations, as they've been around forever, and most designs came from the earlier designs I was talking about, unless people actually run everything as SA. The one that's the hardest is unstructured data: anything that you can create on one of these personal machines. So we've got Hadoop databases, we've got pictures, we've got source code. Source code is damn near impossible to detect, and I would never even try to do any data loss prevention on it at all without using tagging. Because when you use data loss prevention and you try to use its algorithm for source code, which is crap, it's usually done on an endpoint, and it causes your developers too much grief and they can't get their work done. But if I have a tag that I can look for on the network, the classified tag, I can find that source code real quick. So how do I solve this? I talked about the people, the processes, and the technology. The first thing I usually have to do when I go to a customer is figure out who's in charge of all this stuff. Usually it's some IT security manager who's been tasked with, "Well, they told me I have to do this stuff." And I usually go: alright, so who's cutting the check for me to be here? Okay, let's go talk to those people. Alright, so your boss wanted you to do this. We're going to need more than your boss. We need legal. We need HR. We need everybody. So we start with: how much of a budget do you actually have to do this with? And I'm going to have to figure out how to phase this. Usually it's going to be: you are not mature enough to even monitor things in the SOC, so let's try data classification. It actually usually works pretty well, and then the organization itself becomes aware of the fact that it has more sensitive data than it thought.
Then I've got legal and HR to talk about the policies and the standards, and then of course enforcement, if somebody actually is stealing from the company. So once again, it's all the people in an organization who use any of these data protection products, like data classification; you have to get them to actively participate. If you get the people who are creating these things to tag them really accurately, the classification software can start noticing the patterns of what is truly classified and start auto-correcting and auto-tagging based on the content you're actually putting in. Processes: well, I've got to educate all the users on data classification, and I've also introduced a brand new tool into their environment to do the tagging. The effective products use plugins, and when you're creating a document, literally the first thing you have to do before you can even save is answer: what type of document is this? How do I classify this? I would highly recommend that the product you get forces the person to actually think about what it is, so that the document won't actually save without it. You then have to match all the technology with your processes, and those processes are controls. The standards and policies I was talking about should be published on your company intranet. Everybody should know about them, and there is no excuse not to know what they are. If there is an excuse, then people need to be referred to them and taught. And one of the other things is the actual enforcement, which is where I need HR and legal: if somebody is purposely sticking things on USB drives, and I have proof that they are and that they're not supposed to handle that data, you need to go; you can't work for me anymore. So: Brian Krebs is not a DLP solution. Don't use him as one, which is essentially what happens if you just leave things in monitor mode.
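The plugin behavior described above, refuse to save untagged and re-evaluate the tag as content changes, might be sketched like this. The tag names, the metadata header format, and the two detection patterns are all invented for illustration; a real product ships far more detectors and stores the tag in proper document metadata.

```python
import re

LEVELS = ["public", "internal", "confidential", "restricted"]
PHONE_RE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")  # toy detector
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # toy detector

def detect_level(body):
    """Propose a tag from the document's content."""
    level = "public"
    if PHONE_RE.search(body):
        level = "confidential"
    if SSN_RE.search(body):
        level = "restricted"
    return level

def save_document(path, body, classification=None):
    """Refuse to save until the author picks a tag; escalate if content warrants it."""
    if classification not in LEVELS:
        raise ValueError("pick a classification first: " + ", ".join(LEVELS))
    # cradle-to-grave: re-check on every save and keep the higher of the two tags
    final = max(classification, detect_level(body), key=LEVELS.index)
    with open(path, "w") as f:
        f.write("X-Classification: " + final + "\n\n" + body)
    return final
```

This mirrors the Excel example from earlier: a names-only document saves at the author's tag, but once SSNs land in it, the save path escalates the tag rather than letting the old one ride.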
So for technology, there are things in the stack that you can throw in there: data classification and metadata tagging, e-discovery tools, data loss prevention, CASB, rights management, public key infrastructure, encryption, database activity monitoring, and firewalls. I am not going to name vendors, because I'm not pitching anybody and I really don't care to do that in a talk. But no matter what you buy, none of this shit is going to work if you leave these tools in monitor mode and you don't invest the time to actually tune them. So don't waste your money, don't waste your time, don't buy any of this crap if you're not going to invest the time to make it work. All right. If you want to research this more, there is something called the CMMI Data Management Maturity model. I do not recommend it to organizations, because you have to be able to crawl before you can walk. But supposedly this data maturity model is what good looks like, so if you want to see what good looks like, it's definitely a thing to aspire to. I don't know anybody who, much like with PCI QSAs, actually goes through the correct controls; they just check boxes. It's sad and it doesn't work. But it looks nice. It's pretty. So, what could go wrong? Let's switch to other things. I've got to love the new stuff; this happened during the last presentation too, PowerPoint essentially changed itself to not focus, so I'm using PDFs for the other material. All right. So I had to do this exercise with a place that actually had intellectual property, and they didn't understand how data security worked or why it would be of any value to them. In fact, some of them were developers and absolutely wanted me to do this as a waterfall thing, and I'm like, what the hell are you talking about? So I did a waterfall diagram. In any case, what I had to do was break it down into simple things.
So we were doing the data maturity model, and I had to cover each thing. You look at the top: it's risk. Most of us, being the people we are in infosec, just go straight to the threat model and vector. So the threat model for cloud risk reduction is: I'm unable to provide legal hold materials because I don't know what data I have. Guess what: if I don't know where the data is and somebody puts a legal hold on me, they can say that whole server is on legal hold. I'm so screwed if I have things there that shouldn't be there. I'm also unable to comply with regulatory requirements such as GDPR, which asks: what did you do with my data? Did you delete it? Where did it go? So to tell people why this is important and how I'm going to reduce risk, other than pointing at the data maturity model: essentially, if I put in some type of e-discovery based on what I put together with data classification, I should be able to have an accurate data inventory of what is on that machine. I will know whether it is in scope, and I can talk to my legal department and say: absolutely not, you may not put this on legal hold, because it has nothing to do with what we were doing. Data exfiltration: once again, what I don't have on my servers can't be stolen. But GDPR is absolutely funny, because they want you to tell them exactly what was stolen within 72 hours in the event of a breach. I can't. There are no companies that I know of that even know what assets were hit within 72 hours, let alone can tell you: oh, this, this, and this over here, it contained SSNs and pictures and IP addresses, and yeah, it was breached. I've never seen anybody who could do that. Ever. So once again, a threat model: backups that have not been verified. How do I reduce the risk, and why do I want to? Well, if I have good backups, essentially I reduce risk by making sure that when ransomware hits, I'm not affected beyond 48 hours of loss.
rm -rf, somebody deleted everything by accident? We're good. That's happened, where people just accidentally deleted everything that they were supposed to have left. All right, so let's go back to the slideshow. Those were a few tabletop exercises; you should be thinking about ways you can explain this when you threat model. You should have in mind three different audiences. There are going to be the execs, so I need to get my crayons out. There are going to be the technical people who will actually implement this, but who need the reasons why it's being implemented and how it should be implemented. So if you're doing a good job, you should actually have the controls defined, and then you put technology in place to abide by those controls. So: hackers want crown jewels. How do I protect them? Well, I need to know where they are. You need to essentially have perimeters. If I know that my databases have all the good stuff, and I know which file servers have what the attackers want, I need to define my defenses around them. Back when I was talking about the way computers were designed earlier, there was a thing called MAC, mandatory access control. Essentially, that data never, ever left, ever, and you can go do some more research about mandatory access control. Those particular computers are still around. And when SELinux was put together a long time ago, you could actually turn that on; MAC hadn't been around for quite some time before that. Another thing you can do: for anybody holding sensitive data, you can apply risk-based monitoring of the user. If this guy has access to my sensitive data all the time, and he's failing the internal phishing tests, then yeah: straight to VDI for you. You don't get to access anything with important data. But once again, that goes back to the classification.
I need to know where my good stuff is. Don't be afraid to find leaks by using honeydocs. I purposely want to know where things are being egressed to; that honeydoc essentially gives me a callback and gives me an IP address. Another company out there was very controversial, because not only did they do the beacon, they actually did a netcat callback. You can buy those things, but I'm not going to go into that. Encrypt your data, use PKI, and make sure all the things you put in your stack are not left in monitor mode. And if I have to transport data from one country to another, I need to know what type of risk that data carries with it. If my data has to leave the environment and move from one place to another, I need that stuff classified, and I need that stuff under rights management or encrypted. So, going back to the sensitive data people: yeah, you fail the phishing attempt, you get to use VDI, and that's all you use. That's basically it, and there is no bar here, but if you have questions, go ahead. Have you worked with any companies on limiting their creation of data? Have I worked with any companies on limiting their creation of data? No, I have not. In most cases it's going to be a retention issue where I actually have to start deleting things. I would like to do that, but once again, that's just not how things work. I would love to say "if you don't need to make it, don't make it," but we also run things at that fast business pace where it's first to market, and stupidity, and "yeah, I'm going to save everything or hoard everything." So once again, data hoarders are the most dangerous adversary to my client or company. Yes. So that's another thing that I blew by, and supposedly there's only one pure way to deal with it.
So, the question asked was: how do I deal with data that has now become sensitive and could reveal something? Let's say a health detail that I don't want revealed got into that spreadsheet, and I can't divulge it. How do I make that anonymous? The only proper way to do it is with a technology called tokenization. You have to find some type of technology that will assign some other value that cannot be associated with the first value; that's tokenization, and a lot of it is being done for GDPR right now. But you have to be very careful, because once again, keys are created, and a legend is created for that tokenization. So if you have the master list of all that stuff, then even though I've pseudonymized it by tokenization, it's not anonymous. People make that mistake too. All right, I think I'm out of time, and yeah, come and see me anytime if you want to. And just to be clear, there is actually at least one bar and a speakeasy sitting somewhere on camp, but Matrix can take questions wherever he wants. Thank you for filling in for us. Again, the talk that we had here originally will be Saturday at three on the maker stage, so if you want to catch that, it's been moved; it should already be updated on the schedules that are on the monitors. And if you know someone, or as an icebreaker tonight, introduce yourself and say: hey, do you own a Subaru from Oregon or a grey BMW from Washington? They're in the long-term parking and we need them moved. Don't set things on fire; if you have cigarettes, be respectful and put them out in a proper place, not on the ground. We have buckets sitting around; put them in there. No amplified music after 10 p.m., and have a great time tonight, everyone. Thank you for showing up. That's the end of the first day's worth of talks. Thank you.
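The tokenization answer from that last question can be sketched as follows. This is a toy illustration (real systems use vaulted or format-preserving schemes); the point about the legend is visible in `_legend`: whoever holds it can reverse every token, so the data is pseudonymized, not anonymous.

```python
import secrets

class Tokenizer:
    """Toy tokenization: each sensitive value maps to a random token with no
    mathematical relationship to the original value."""

    def __init__(self):
        self._legend = {}   # token -> original value; this is the master list to guard
        self._reverse = {}  # original value -> token, so repeats tokenize consistently

    def tokenize(self, value):
        if value not in self._reverse:
            token = "tok_" + secrets.token_hex(8)
            self._legend[token] = value
            self._reverse[value] = token
        return self._reverse[value]

    def detokenize(self, token):
        return self._legend[token]
```

Because the token is random rather than derived from the value, nothing about the original leaks through the token itself; all of the risk concentrates in the legend, which is why it has to be protected like key material.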