Good morning, and hello. This is Brian Rowe with LSNTAP. Thank you and welcome to a webinar that we haven't done in the past: Privacy, Encryption and Anonymity in the Civil Legal Aid Context. This is the first webinar we've done in conjunction with the Florida Justice Technology Center, and it's also the first webinar we've done on GoToWebinar. We're very excited to be hosting this today, and we've got some great speakers. I'm turning it over now; if you've got any feedback for us, please feel free to email me directly. We will also have a survey at the end. This is a new series for us and we look forward to hearing your thoughts from the community. Thank you.

Hi everyone, I'm Wilnita Negran. I'm from the Data and Society Research Institute, and I also work with the Florida Justice Technology Center. The training today is a collaborative effort between the ACLU, Access Now, and Just Tech, and we have speakers from each of these organizations who are going to be talking about some element of this. To begin, we have Amy Stepanovich from Access Now, Jay Stanley from the ACLU Speech, Privacy, and Technology Project, Joseph Mello and Mike Hernandez from Just Tech, and myself, again, Wilnita from the Data and Society Research Institute and the Florida Justice Technology Center.

As a general question: is cybersecurity the next digital divide? This is certainly an issue being discussed at many levels, federally as well. If you look at the history of the digital divide very generally, the first digital divide, at least the one that concerned the civil justice community, was very much about computing and access. And when we look at tech-driven access to justice interventions, it became very much about this: as our clients and our staff use more technology, how can we use that as an opportunity to deal with the access to justice gap through technology?
So it's very much an issue of who has access and who doesn't, and how we can increase access. The next digital divide concerns what happens once we've become accustomed to technology. As the statistics going around show, more and more people have access to mobile phones; even low-income people have access to tablets and other kinds of technologies. The next digital divide concerns our understanding and application of these technologies in our lives, and getting a wider understanding of the consequences, the unintended consequences, of the greater integration of technology, both in our clients' lives and in our staff and our offices. Two things about that. When you look at this wider cybersecurity issue and how technologies are applied, it involves a wide ecosystem of stakeholders and responsible parties: users, developers, your staff, your clients, everybody. There's a large ecosystem that's responsible for shaping our understanding and application of these technologies, and so everyone plays a role in this next stage.

I just wanted to talk a little bit about the rise of what I call the "it depends" view. The Pew Research Center has done a few studies where they found that across age, income, education level, and gender, most people in America see privacy issues as very contingent and context-dependent. As far as their views of their privacy, they feel like it depends: "I'm willing to give my information away, but it depends on very particular kinds of situations." A second theme is that there's a sense of uncertainty and resignation, a kind of powerlessness, that people have. It's something they see as inevitable: everybody's going to ask about my data.
Everybody is going to have access to my data; someone eventually is going to get access to my data and do something wrong with it. People increasingly see it as really annoying, but part of modern life. And I think when we develop technologies for our clients, that's a really important thing to always keep in the back of our minds. As someone who works with data, I sometimes have to tone down my excitement, because I want to immerse myself in data-driven strategies and innovations, but I have to be cognizant of the fact that at the same time there's a lot of anxiety, again across income and education levels, that people are generally feeling. They're aware that things are changing. They're aware that their data is being collected. They're pretty savvy about what could happen. There are a lot of fear stories; obviously some people take it down very dark paths, and then there are people who are more realistic. So it's important to keep in mind that there is anxiety overall about increasing privacy issues and data and requests for data use.

Real quickly, one of the most unsettling privacy issues they noticed, and I think a lot of us have heard this, is how hard it is to get information about what's collected. I'm very familiar with the A2J tools and a lot of my own technology that I use through the Florida Justice Technology Center. We always have a disclaimer, but we have to look at how inaccessible those disclaimers are, and how much people feel that even if the information is there, it's hard to figure out exactly what it all means. So providing that information in layman's terms is something to keep in the back of our minds. There's also an awareness of a trend toward surveillance and data capture that people see as inevitable. And then there are people who are hopeful that technology and legal solutions can be found.
As privacy issues become more and more prevalent, people are hopeful that the key stakeholders will come together and put the proper protective measures in place. Looking at our own community, some of the challenges are obviously that we deal with populations that are just getting into technology. There might be digital literacy issues. For LEP communities especially, there's not a lot of information; data security material is not always readily available in multiple languages. And there are sometimes elderly clients who are increasingly using mobile phones without being as savvy about looking out for key issues. The other thing, which I'll talk about a little more, is that the experience of all these stakeholders is very siloed. The people in charge of federal policy, the people in charge of third-party policies, you all on the ground at the grassroots: everybody has very different anxieties and feelings about privacy issues, and all of this needs to be bridged together. The grassroots stories need to come up to the federal level.

Data security issues creep into our everyday technology issues. Things like the mobile phone usage of our clients and staff: a lot of them don't realize that the device is not always the problem; it's the network you connect to. Issues of mobile malware: it recently came out that Android phones are more susceptible to mobile malware than iPhones. Your staff and your clients are using these technologies without always being aware of what they could be susceptible to. Things like website analytics: I know I'm guilty of using Google Analytics for civil justice websites. A big question a lot of us have not asked is why Google provides this analytics software to anyone for free: it gives them access to a lot of the backend user habits of websites all over the world. And that brings its own set of issues.
Those are security issues that not all of us may be aware of as we use Google Analytics. The checklist, by the way, does list an alternative to Google Analytics, called Piwik, and it will be available as an infographic. A lot of us also find ourselves increasingly working with third-party vendors for technology and development: things like expert systems, triage portals, apps, SMS text messaging. A lot of that talent is not in-house in the civil justice community, so we're increasingly looking outside, and there might be ethical conflicts of interest as the civil justice community increasingly deals with for-profit third-party vendors for various technology development. Another one people may not realize: libraries are big access to justice partners in helping increase our clients' access to information, but when we send people to public computers at libraries, we may not prepare them or make them aware of the ways they can protect themselves when they're using a public library computer to access information about their case. And then there's sharing of documents among your staff; there might be security gaps there. There are so many others. These are just examples of the daily security gaps that happen in our work.

Looking at the ecosystem, and understanding all these siloed experiences that exist, means looking at all the key stakeholders and trying to bridge all of them into one powerful advocacy voice. Ultimately, privacy and cybersecurity lie at the intersection of legal aid policies (your program's policies), third-party policies, local, state, and federal laws, and the social norms and habits of your staff and your clients. That is all related; that's the privacy and security ecosystem. And that framework comes from the Data and Society Research Institute.
They had a really interesting way of dividing up all of the stakeholders, and every stakeholder has a role in advocacy related to privacy and security. Looking at third-party policies, you can pressure companies about their terms of service to protect clients and your staff. On technology, developers can be pushed to incorporate more privacy-protective software and protocols. We can all participate in regulatory debates. Legal aid programs can create their own in-house policies that take into account the various server and technology issues in their office and in their staff's use of technology in their day-to-day work. And finally, we can empower our clients to change their expectations through education and awareness of what they should expect as they increasingly use these technologies, raising their expectations and their awareness of what their rights are. So this is an ecosystem, and anything we can do to keep this larger framework in mind and always look at how to bridge these groups would be really powerful, because this privacy and security issue will only intensify in the years to come.

As a side note, working with the Data and Society Research Institute, a lot of what we're trying to do is partner with grassroots groups, a lot of people in the civil justice and criminal justice communities, to better understand how these technologies are affecting clients and staff at the grassroots level. So if there's anything I can say, it's please reach out and share your story.
If there are things you're seeing, if there are third-party policies in conflict with your clients' rights, if there are certain laws you want to have a voice in at the federal level, if you want to shine a light on the access to justice element of some of these privacy laws, or if there are security plans you're developing with your staff and you're not quite sure you're capturing everything, please reach out to folks working in the broader sector, because there's definitely a very big interest in partnering with grassroots groups and bridging all of those experiences into one voice. And now we're going to pass it on to Amy from Access Now.

Hi, everybody. Thanks for joining us today and for listening to me talk for a little while. My name is Amy Stepanovich. I am a lawyer, but the necessary caveat is that I'm not giving legal advice. I work at Access Now as a U.S. policy manager, which means I help with the development, implementation, and communication of policies within our mission. Access Now is a global organization that seeks to extend and defend the digital rights of users at risk, with a really significant focus on the user at risk. We work from six different offices and several satellite presences around the world, and we have a three-pronged approach that combines policy, advocacy, and technology. As part of that work, we operate a 24/7 digital security helpline for activists and journalists under threat, including users at risk within the United States. So that is a tool that might be made available to certain clients you work with, and feel free to contact me if you have more questions about that. A lot of the advice we give involves how to properly use encryption and the different tools and implementations that protect users.
I like to start with this because I'm somebody who works in privacy, and I get blamed a lot for being a bit of a Luddite, but I actually think technology is super cool. We carry around supercomputers in our pockets that are capable of accessing the wealth of human information with just a few quick taps. We also have new devices, Internet of Things devices, that measure our physical activity, track our sleep patterns, and tell us when we need to buy milk, and it's really a pretty awesome world that we live in. But all of the technology that we're carrying around and using creates huge risks. Basically, we are bleeding information with every step we take. Every new device that we build, buy, and carry creates new means by which a person can have their most personal information compromised and exploited.

Most of this information is what we would call metadata, a word you may have heard quite frequently over the last few years: a fairly innocuous term for a lot of really revealing information about an individual. Metadata created and collected about you includes where you go and when, everything you buy, the websites you visit, the people you talk to and for how long, the organizations you pay dues to, the prescriptions you fill, and a lot more. This information may be kept on a user's device or fed into a central database to be stored or analyzed. Some of it may eventually have positive returns, such as showing us where diseases start, for example. But these databases of information are what we would call attack surfaces for unauthorized actors looking to access them, either to make a profit, or to make a point, or simply to entertain themselves. For the average user, this creates risk. For certain sensitive users, these risks are greatly compounded, and it is often these sensitive users about whom the most data is sought. The reasons for seeking out data on these people aren't typically malicious.
In fact, it's mostly good-hearted people who want to evaluate weaknesses in the parole system, cases of abuse of foster children, or incidents of threats to civil rights activists. I've personally listened as people have talked about how they want to collect and use data to address these big societal problems, as well as small ones; we heard a little bit about analytics and making sure that we're communicating properly and collecting data in order to do that. What this all fails to take into account is how the collection impacts the subject population, and the threat of creating big databases of the names of people in a population that is already under threat.

To deal with the threats around data that is being collected, we have to start by asking ourselves some questions right up front and analyzing whether we should be collecting the data in the first place: questions like how to adequately inform users about the collection of data, and various ways to provide redress to those same users. To think through the ways to do this, the National Institute of Standards and Technology, or NIST (because in Washington, D.C., we love acronyms and everything becomes an acronym), is developing a privacy risk assessment process. The framework is currently only in draft form, but the draft is available publicly and in your materials, and when it's finished it's expected to provide a meaningful way to evaluate the risks that arise from the processing of data, including collection. The assessment currently has four phases: framing the risk, assessing the risk, responding to the risk, and monitoring the risk on an ongoing basis.

The other question we have to consider is that of security post-collection. These notions of privacy and security, despite what you may have heard, actually work hand in hand. In fact, NIST's process for analyzing cybersecurity risk directly fed into its privacy risk assessment process.
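The four phases Amy describes (frame, assess, respond, monitor) can be made concrete with a toy risk register. This is only an illustrative sketch, not NIST's actual methodology; the `Risk` fields, the sample entries, and the likelihood-times-impact scoring are assumptions for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    # Framing: what data is involved, whose data, and what harm could occur.
    description: str
    # Assessment: 1 (rare / minor) through 5 (near-certain / severe).
    likelihood: int
    impact: int
    # Response: accept, mitigate, avoid, or transfer the risk.
    response: str = "accept"
    # Monitoring: an ongoing log of reviews and incidents.
    notes: list = field(default_factory=list)

    def score(self) -> int:
        # A common shorthand: risk = likelihood x impact.
        return self.likelihood * self.impact

register = [
    Risk("Client intake data stored unencrypted", likelihood=3, impact=5,
         response="mitigate"),
    Risk("Analytics script exposes visitor IPs to a third party",
         likelihood=4, impact=3, response="mitigate"),
]

# Respond to the highest-scoring risks first, then revisit on a schedule.
for risk in sorted(register, key=Risk.score, reverse=True):
    print(f"{risk.score():>2}  {risk.description} -> {risk.response}")
```

Sorting by score is one simple way to prioritize responses; a real assessment would also weigh the risk to the data subject, not just to the organization.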
Most companies and organizations have had some false starts in proper cybersecurity investment. The costs of security can be very high, and the benefits are hard to pinpoint, at least until the efforts have failed and all of a sudden you're dealing with a data breach that can be very expensive. Until that failure point, it's really hard to see what benefit you're getting for the money you're putting into security. I want to note that there is currently no federal law mandating any certain level of cybersecurity that organizations have to provide to their clients, though there are state laws setting up mandatory responses to data incidents, which I'll discuss later. However, the Federal Trade Commission has stepped in as an enforcement mechanism when a company fails to provide minimum security. In the Wyndham case, which was heatedly argued, the FTC brought charges against the company for allegedly unfairly exposing the financial information of users to hackers. The company eventually stipulated to a settlement that requires a comprehensive information security program for financial data and annual information security audits. The case demonstrates that even in the absence of a law, companies cannot totally forsake their cybersecurity obligations to users. For people doing business with other companies, it also provides a means for redress: filing complaints with the Federal Trade Commission about companies that you believe are not living up to their security obligations.

In terms of both privacy and cybersecurity, we believe encryption is a core way to protect sensitive data. It promotes integrity and confidentiality, two of the three core tenets of security, and, according to the UN Special Rapporteur on Freedom of Expression, it is key to the exercise of human rights, something that's very important to Access Now.
Encryption happens at several different levels of the internet's infrastructure, and you should take all of them into account. Encryption protects your devices, where you draft a communication; protects the communication in transit to its recipient; and, when it's stored on a company server, protects that communication at rest. Access Now encourages organizations to think through all of these different aspects of security through our Encrypt All the Things project at EncryptAllTheThings.net. There we have what we call the Data Security Action Plan, where we give seven items, from authentication to applying security patches, that an organization should incorporate into its security thought process to ensure that it's adequately responding to security threats. While no state nor the federal government has laws limiting the development, sale, or use of encryption, I do want to highlight that there are some threats to its use in other jurisdictions, including an as-yet-unenforced ban on encryption in Colombia, a limited ban in China, and mandatory user-side backdoors in Kazakhstan. The reason I bring this up is that in a digital world unencumbered by geographic borders, advocates should be aware of these legal provisions and their application, because many apply extraterritorially. I want to go back to something Wilnita said at the beginning about the digital divide.
This fight happening in governments around the world over encryption is particularly important, because if we start down a path where default strong encryption is prohibited or banned, then what we're going to see is a class system emerge on the internet, where certain people who have the knowledge to seek out the secure tools that are available, or to build their own tools, will be able to protect themselves from bad actors and hackers, while the people who rely on the off-the-shelf security defaults available to them will be very much at risk. We're trying to prevent that by making sure that companies can provide that default security and don't have legal loopholes they have to jump through.

I want to talk about the problems that this entire approach comes with, specifically three problems. The first is the "who" of the risk assessment process. For both the privacy assessment and the cybersecurity assessment, the risk is assessed as against the organization. But the person whom the data concerns actually bears the brunt of the loss in most cases. Right now we have no adequate calculation to determine the risk to the user in different cases, which would be necessary to incorporate into the overall calculus. For example, a company doesn't necessarily have any huge obligations if a database full of personal photographs becomes compromised, but for the user those photographs could have really significant implications. Even more troubling than the fact that there's no way to measure that risk to the user is that there's really little incentive to develop the calculation to measure it. And this means, by necessity, that all of these assessments are by their very nature very incomplete pictures of the risks that are faced.

The second and related problem is that of data breach notification.
I mentioned before that there's no federal law on data breach notification, but there is some provision in all 50 U.S. states on the issue. Overwhelmingly, however, these laws only relate to the breach of financial data. In some states health data are now included, and in even fewer states certain types of biometric information are included. What isn't included are those personal photos I talked about, or other information that doesn't link directly to monetary harm, though arguably that loss is much more acutely felt by the user, who doesn't necessarily have a known means for redress.

The final problem I want to bring up is the risk caused by relying on third-party systems, which is an increasingly common business model and something most of the people in this webinar are aware of. Even in a company that is totally on top of security, exploits can be found in the holes left by other parties and exploited to the same ends. When deciding to enter into business with a company, either by directly contracting with them or just by picking up an off-the-shelf product from the internet, it's more important than ever to understand their data and security practices. And not only those that are written down: not only reading their privacy policy, but actually looking at their day-to-day practices and trying to figure out, for example, how often their systems are audited, the frequency and depth of their security fire drills, and the training provided to employees on how to detect and respond to social engineering attacks. These are the things that will give you the whole picture of the emphasis a company places on security and the care with which it will handle your data and the data of your users.

So, a quick question here. On data breach notification, how often are civil legal aid organizations covered entities under things like HIPAA? I can't say I'm an expert on things like HIPAA.
The data breach notification laws, though, are typically separate. They're not under the federal provisions; they're separate state laws, and they will very likely apply to all of the legal aid organizations. Okay. The medical-legal partnership organizations in civil legal aid are particularly interested in that question, but... There is a wonderful expert I would point people to: Dr. Deborah Peel at Patient Privacy Rights, who has worked extensively on issues surrounding HIPAA and is a wonderful resource for people interested in that specific topic, and I would point you to her research and her website. Excellent. I'll get her information from you, and we'll add it to the blog post after this. Wonderful.

So the final slide is really just a wrap-up, with four key takeaways. First, start by conducting a privacy assessment before deciding to collect and process data, particularly the data of users at risk. Second, you can't protect all data perfectly; perfect security doesn't exist, and you probably can't even protect some of the data perfectly. You need to apportion your resources to respond to threats, and when dealing with sensitive data you really should be conducting a cybersecurity assessment and taking action to protect your users. Do your best to take their risk into account, and not only your own. Third, you will use third parties to assist in your processing of data. Don't just read their policies: do your due diligence and find out whether they protect their data the way you want them to protect your data. And finally, federal laws are lagging on many of these issues, but you need to stay up to date on state laws and international laws, as well as agency enforcement actions, because there are a lot of different levels at which this type of action is happening, and they might impact you as you work on the internet, which, as I said before, has really few geographic boundaries. And that's what I have for you today.
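To make the "at rest" layer Amy describes concrete, here is a minimal sketch of symmetric encryption using a one-time pad built only from the Python standard library. This is illustrative only: it shows why a ciphertext on a stolen laptop or breached server reveals nothing without the key. Real systems should use a vetted library (for example, AES-GCM from the third-party `cryptography` package), never hand-rolled crypto.

```python
import secrets

# ILLUSTRATION ONLY: a one-time pad, the simplest construction that shows
# what encryption at rest buys you. Do not deploy hand-rolled crypto;
# use a vetted library in practice.

def encrypt(plaintext: bytes):
    # The pad must be random, as long as the message, and used only once.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same pad recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, stored = encrypt(b"client intake notes")
# On a breached server, `stored` alone reveals nothing about the notes;
# only a holder of `key` can recover them.
assert decrypt(key, stored) == b"client intake notes"
```

The same separation of ciphertext and key underlies full-disk encryption and server-side encryption at rest; what varies in practice is the cipher used and how the key is stored and protected.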
I've got a quick question that's come through here, targeted at Amy, from Terry Ross: "I don't understand who a user at risk is. The phrase is being used a lot. Are all users at risk? Are there particular populations that are more likely to be at risk? How should we be looking at users as a demographic and their different risks?"

Sure. It's not necessarily a defined term the way a lot of people in the legal profession want terms defined. We look at the risk that people face from government. So, for example, for us, an international organization, a user at risk could be a member of the LGBT community who works in the MENA region, the Middle East and North Africa, where it's illegal for them to be homosexual. That would be a user at risk. In the United States, a member of the civil rights movement could be a user at risk, someone who is particularly under threat from law enforcement and may be a target for law enforcement. So it is a term that depends on where you are geographically and what the specific threats are that users face in an area. I mentioned, for example, right up front, foster children or parolees: populations that we consider users at risk, users who face particular stigma or other social problems.

Yeah, and to add to that, I work a lot with consumer rights organizations, and seniors are often targeted in scams and other attempts to get their personal data. Being aware of the digital literacy of particular groups, and how to tailor your services so that they're not unduly exposed, is important. I wouldn't use "all users" as a broad term. I would divide out several different typical stakeholder groups and see what you need to do to craft policies for each of them. This is Wilnita.
I wanted to add that there was actually a conversation I had with a program about how to define the user at risk. If you're developing a technology or a resource for an LGBT community in a conservative state, they felt that the particular state context, mixed with a vulnerable population, created in their minds a very specific user at risk. That gives you an example of how the state context, the local context, along with the broader federal one, lets you identify the particular risks that different populations would face. Are you in an anti-immigration state? Are you in an anti-LGBT state? If you are, and you want to develop technologies for these vulnerable groups, you definitely want to look very carefully at the kind of data you're collecting from immigrants, undocumented immigrants, and the LGBT community for your technologies, and make sure you take added precautions if your state context is one where these groups would face a lot of backlash.

We've got a really good comment here from Ken, which is that "user at risk" is a contextual universe; all of this ties into the risk assessment that you put together. And there was a big question here, which is: how do we find an expert who is familiar with our particular state laws on privacy? What is the best next step for a legal services organization that wants to look into what its own laws are and figure out where to go practically?

The Data and Society Research Institute is actually a network of technologists and security experts, nationally and internationally. So they may have somebody in their network in a given state who could work with you specifically on identifying people in your state.
So maybe when Brian does the write-up blog post, you can put that down as an organization to follow up with, because it's such a networked group of security experts throughout the country, a real resource for finding assistance with your particular state issue. It's a really good question. I would also look to your local information schools and law schools. For example, in Washington, the University of Washington has a tech policy center that will help connect you with privacy researchers or other individuals who may be willing to do some pro bono work to survey your state laws and give you an overview of what would apply to you. I know that several of the information schools, the information management schools, have either an adjunct or a full-time professor who specializes in that particular area, and they're happy to have students do an assessment with you as part of a capstone or similar project.

Hi everyone, my name is Michael Hernandez, and I work for Just Tech. We're a technology firm that specializes in IT needs for the legal services community. On the phone as well is my colleague Joseph Mello. Joseph and I previously worked at Legal Services NYC and know a lot of people from our many, many years working there. We managed the IT systems, network, and support desk while we were there, just to name a few of the hats we wore.

Thanks, Mike. Hi everyone, my name is Joseph Mello, and to start I'm going to be talking about sharing documents and data. Email is used every day to communicate with people. It's easy to use, it's accessible, and it's probably something you can't live without at this point. However, when dealing with sensitive information, there are some issues, namely that emails are easily forwarded.
In lengthy emails, the subject line can change, the subject matter can change as the email grows and new people get added into the thread, and an email that started out with one person can end up on a distribution list. In other words, there's very little control once that email goes out. The data now lives in a few places as well: your mailbox, their mailbox, possibly other mailboxes. Hopefully your organization has a backup policy in place, and if so, the email containing the sensitive data is now living in there too. So if emails are passed around, there may be issues with things like e-discovery, or with complying with your organization's retention and destruction policies. Then there's, of course, the issue that if your password policy is not very strong and your password is very weak, your email account and the recipient's email account can be broken into. And finally, email isn't encrypted as it travels through the internet to its destination, so that's a danger as well.

What can you do? That's the next slide. There are a few options to choose from. There are solutions out there like FTP and Secure FTP; hopefully you're using Secure FTP as opposed to regular FTP. There are also in-house solutions like Globalscape that can be used to manage file transfers with collaboration, security, and compliance features. And there are third-party solutions like Dropbox Enterprise, and several others, that contain features that can be managed in the cloud, which is helpful if you have limited IT infrastructure or lack the skill set needed to manage the in-house solutions.

Let me backtrack here for a moment, actually, and focus attention on the word "enterprise" that I mentioned earlier. There are a lot of tech-savvy users out there, which is great; it certainly makes my life easier. But one thing you have to look out for is the cloud-based personal account.
Oftentimes there are certain IT needs that staff have, and if the service isn't available in your organization, it's available for free in the cloud in less than five minutes. The issue comes with who has control over that data. A staff member can create a personal account, upload sensitive data to it, and no one in the organization may be aware of it. There's a potential for data leakage. So speaking at a manager level, you may want to consider a policy that limits where data can be stored, and look for a solution that can be centrally managed by the organization. The products I mentioned earlier have a lot of bells and whistles, so if you're looking for something more straightforward, IronBox is a good solution. It's a cloud-based file transfer service for sending and receiving files. I think it's handy to have, and you can use it with your clients as well, in case they need to send you any confidential data and documents. Another way to work with files is a document management system, also called a content management system. If you're already using something like Office 365, then you may have access to SharePoint Online, and Box is another example of a document management system. The data lives in a controlled environment, and you share out links and invite people to access your data, where you can limit the control that they have. In the example of SharePoint Online with Office 365, you can further limit the control people outside your organization have with information rights management. You can do things like disable copying of text, prevent saving a local copy, prevent printing, and even expire access to the file after a set number of days. Okay, so let's move on to staff-member-owned devices. Just to be clear here, I'm not really talking about BYOD, bring your own device. I'm talking about your staff working from home.
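As an aside, the expire-after-a-set-number-of-days control Joseph just described is typically built on signed, time-limited tokens. The sketch below is a generic illustration of that idea, not SharePoint's or IronBox's actual mechanism; the secret key and token format are invented for the example:

```python
import hashlib
import hmac
import time

# Hypothetical server-side signing key; in practice this lives in a key store,
# never in source code.
SECRET = b"example-signing-key"

def make_link(file_id, ttl_seconds, now=None):
    """Build a share token that embeds an expiry time and an HMAC signature."""
    now = time.time() if now is None else now
    expires = int(now) + ttl_seconds
    payload = f"{file_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_link(token, now=None):
    """Accept only tokens with a valid signature and an unexpired timestamp."""
    now = time.time() if now is None else now
    try:
        payload, sig = token.rsplit(":", 1)
        _file_id, expires = payload.rsplit(":", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < int(expires)
```

Because the expiry is inside the signed payload, a recipient can't extend their own access by editing the link; after the window passes, the same token is simply refused.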
Throughout my years working in the legal services community, I've learned that staff are dedicated to helping their clients. I used to see support tickets at two in the morning on a Saturday night from a user needing assistance, and they'd be even more shocked when my colleague and I would be responding five minutes later. So if staff are working from home, are they using a personal device or a work-purchased device? A device purchased by the organization, if configured properly, can be managed remotely, have a centrally managed antivirus solution installed, and be locked down to prevent a user from making drastic changes to the machine. A personally owned device is a different story. Your IT department has no idea what's installed on that computer or who has access to it, whether it's your kids, a friend, or a spouse with admin rights on the machine. IT doesn't even know the computer exists. A staff member can start a document on his or her own personal laptop, with client information in it, and just save it to their desktop or My Documents folder. That laptop could be lost or stolen. It may not even have encryption enabled, or perhaps the device is being retired and simply thrown out or recycled. If so, what about the hard drive? So there are a couple of ways to address this. One, of course, as you guessed, is to have a policy in place that dictates where work-related data can be stored. Does that stop your employee from saving the file to a flash drive and taking it home? No, it doesn't. But it's the first step. You can provide your users with company-owned computers and smartphones. Having your IT department manage and maintain the company-owned devices can help prevent users from working on a neglected PC with no Windows updates, outdated antivirus software, or no antivirus software installed at all. If you have the budget for it, I think you should consider purchasing computers or devices for your staff.
There's also mobile device management, which can be used for mobile phones, tablets, PCs, and laptops. It comes with a lot of features and, like most of the topics we're discussing today, is a whole separate presentation on its own. So here's something to consider: does your staff have a secure method or connection to work from home? I mentioned flash drives earlier. Although that gets the data from A to B, it may not be encrypted, and it could be lost. Depending on your organization, its size, and its IT infrastructure, there are connection methods like VPN services, which require some expertise but are a good secure solution. Remote Desktop Services is built into Microsoft Windows Server, and as a non-profit you benefit from non-profit pricing from Microsoft to buy the licensing needed to run a remote desktop environment. There are also third-party solutions like LogMeIn that can be used to connect directly to a workstation. But like I said a few slides back, be sure to pick a solution that can be centrally managed. LogMeIn Central can be used to create managed accounts for your staff. There are solutions out there like Chrome Remote Desktop, which is free and allows for a secure connection to a desktop, but there are no management capabilities. You may dismiss an employee and disable their Active Directory user account, but they may still have a connection to their former workstation through Chrome Remote Desktop, for instance. And I don't need to tell you how often users don't log off their workstations at the end of the day. Which, by the way, is another policy: users should save their work and log off at the end of the day. Concerning the destruction of data on hard drives, there are some free utilities that staff can use to wipe a hard drive, assuming the drive is still functional. Your organization's IT department can recommend software and write instructions on where to download it and how to use it. You can call it a tech tip.
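To make the wipe idea concrete, here is a minimal overwrite-before-delete sketch in Python. It illustrates the principle for a single file only; it is not a secure-erase implementation, since wiping a whole drive has to contend with SSD wear-leveling, filesystem journaling, and bad sectors, which is exactly why dedicated wipe utilities exist:

```python
import os
import secrets

def overwrite_and_delete(path, passes=3):
    """Overwrite a file's contents with random bytes several times, then remove it.

    Illustrative only: copies of the data may survive in backups, snapshots,
    or SSD spare blocks that this function cannot reach.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to disk, not just the page cache
    os.remove(path)
```

The fsync call matters: without it the random data may sit in the operating system's write cache while the original bytes remain on the platter.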
Tech tips are something that Michael and I used to do back at Legal Services NYC, and people would love them. If the hard drive isn't functioning properly, there's still a risk that the data could be retrieved, and there are companies that destroy hard drives for you as a service. Your organization more than likely already has a paper shredding service; they may do hard drive destruction as well. Perhaps it's something that could be offered to staff. Just be sure to get a certificate from the vendor certifying that the hard drive has in fact been destroyed. So one last thing to touch on before we move on is paper files. Sometimes your staff has a court date early the next morning, and they take files home with them the night before. Paper documents can get misplaced, left on a train, et cetera. Which goes, again, back to policy: have a procedure in place that lays out the steps that should be taken in that event. Who should be contacted? Is it going to be the supervisor, the client, certain upper management personnel? It's better to have this all figured out before it actually happens. Next slide, please. All right, I'd like to focus on clients at this point. There are situations where clients don't have their own computer and don't have home internet service. I've seen reports that low-income individuals tend to have smartphones, which act as their computers, making the phone their only line to the internet. If so, they could be using 4G service or Wi-Fi. There are free Wi-Fi connections available just about everywhere now: coffee shops, libraries, airports, subways. Is it safe? Well, if your client is visiting, say, lawhelp.org to better understand a legal issue and isn't inputting any confidential information, the answer may be yes.
And I say may be, because there are exploits and vulnerabilities in various operating systems discovered every day, and methods of connecting to devices via Bluetooth, that go well beyond the scope of this presentation. Generally speaking, your client isn't intentionally sharing any private information when reading a news article or checking the weather. But what if they do need to send you a sensitive document? Some clients will visit your office and hand the document directly to staff. But if your client is tech-savvy enough, they may email you from their computer. And that's where solutions like IronBox come into play. If you remember, IronBox is a secure file transfer service. If you had an account with IronBox, you could send your client a link via the IronBox service that takes them to a secure page where they can upload the document. That's better than your client emailing you directly, or accidentally typing in the wrong email address and having the document sent to someone else with a similar address. Concerning computers, though, your client may not have their own. They may visit a friend's house or a public library. From an IT perspective, I'm concerned about the security of those computers. Personally speaking, since I'm not managing those computers, I wouldn't do anything confidential or sensitive on them. For example, a hardware keylogger that records all the keystrokes on the keyboard could be attached to that computer, and whatever antivirus software is installed will never know about it. However, I shouldn't be so negative here. From a practical perspective, we can't tell clients just not to use public libraries. In fact, for a lot of them, that's their only way to access the internet and even communicate with us. So what do we do on the practical side? Well, I was about to go into that. Essentially, your clients may not have a choice. The reality is that's what they have. They have the internet.
They can only use third-party computers. In which case, you may want to consider creating something for your clients, sort of a know-your-rights piece but focused on good security practices. Some things that might be included are private browsing, not saving passwords in the browser, not reusing passwords across the multiple websites where they have accounts, and logging off their session when they're finished. And the truth is that a lot of the places that have public computers, like internet cafes and community centers, want you to use their computers. They have a huge interest in patrolling, monitoring, and stopping security threats in their space. If clients are using their own smartphones, they can download an antivirus app from Norton or ESET, which can help monitor their phones for suspicious activity. There are also security implications with jailbroken phones and unsigned apps, essentially apps that may ask for way too much in terms of privileges on your phone. So there are things to consider, and maybe things that clients should be educated on, about how to use their computers and their phones. Now, I've been hinting at it throughout my entire presentation, and that's right, it's policies. Working at Just Tech, I've gotten a chance to work with a lot of nonprofit organizations, both large and small, and not many of them have policies in place. And if they do, the policies are 20 years out of date, no one knows what's in them, staff don't even know they exist, and no one's checking for compliance. Policies you should have in your environment are an acceptable use policy, a data retention policy, a destruction policy, and a data storage policy, just to name a few. Just Tech is in the process of testing an online service that helps create IT policies. The solution has several templates and acts as document assembly software.
The hope, of course, is that we make it easy enough to create a policy that it actually gets implemented; at the very least there will be one less hurdle to deal with. I have a link at the end of the slides that will take you to that website if you're curious about learning more. So there's a question here about model policies. The website is kind of a walkthrough, document-assembly-type service. Do you have any stock policies that you'd recommend, or do you think this really needs to be customized more, since using stock policies has some issues? What are your thoughts there? Well, I think if you don't have any policies whatsoever, then you should at the very least look at the stock policies and see if they cover what you have. Every organization is going to be slightly different, and policies change every day, so it's hard to name particular software and services; you should try to be as generalized as possible and cover all the information, so that the policy can live on for more than a year or two or five. My suggestion is to take a look at the website, see what policies they have in place, read them over, and see if they make sense for you. I myself haven't fully used those policies; I want to go through them before I recommend them to everyone, but I want you to know that they exist. Yeah, one other good resource there is the LSNTAP email list. If you ask, individuals usually are willing to share current policies that they do have, and these policies evolve, so if there's a policy that you think is really good and want to share with the community, we're also willing to post it to our website publicly and share it. Great, that's actually an excellent resource. I'm also a member of the LSNTAP listserv, so it's certainly been talked about in there, and if you aren't a member of the listserv, you should definitely join it.
So the last thing I want to talk about, and I'm just throwing this out here, is caller identity. This is meant as food for thought; it may not make sense in your organization, and you may not think it's necessary, but here's what I'm thinking. I just recently bought a house, and maybe there are some callers here today who can share my pain and know the struggles. If I could give you a hug, I would. When working with my mortgage company, every time I got a phone call from them, I went through the same process: I had to provide my full name, my home address, and the last four digits of my Social Security number, and it was the same process whether I called in or was receiving the call. And that made me think about calls with legal services clients. Is there a danger, when calling a client, that you may have accidentally called the wrong number, or that you've called the right number but aren't actually speaking to your client? Is there a chance for fraud, or for releasing confidential information generally? I'm not an attorney and I don't deal with legal cases, so I don't know for sure whether there is a real need for caller identity verification, but if there is, perhaps you can use information that you already have for verification purposes, like the Social Security number if you have it recorded in your case management system, the client's date of birth, or, if possible, a secret question and answer that you and your client agree on. It's an added step, but I think it's worth the extra layer of security. Next slide, please. And the last thing is just me and Michael, our contact information, in case you need to ask us any questions about the topics we've discussed. Feel free to give us a call or shoot us an email, or if you'd like to learn more about us, you can visit our website, justtech.com. So, does anyone have any questions for me? I've got a longer comment here, which is just a comment.
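Before the comment, one way to picture Joseph's secret question-and-answer idea in software: the case management system stores only a random salt and a salted hash of the agreed answer, never the answer itself, and compares in constant time when the client calls. This is a generic sketch under those assumptions, not a feature of any particular case management product:

```python
import hashlib
import hmac
import secrets

def enroll_answer(answer):
    """Return (salt, digest) for storage; the plaintext answer is never stored."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", answer.strip().lower().encode(), salt, 100_000
    )
    return salt, digest

def verify_answer(salt, stored_digest, attempt):
    """Recompute the salted hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", attempt.strip().lower().encode(), salt, 100_000
    )
    return hmac.compare_digest(candidate, stored_digest)
```

Normalizing case and whitespace before hashing keeps a phone conversation forgiving ("blue heron" matches "Blue Heron") while still rejecting a wrong answer outright.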
If you take internal security measures, implementing geofencing for mobile devices and encryption for email, what happens when an advanced hacker using something like Wireshark captures packets in a public environment? At what point would your compliance with policies incorporate multi-factor authentication and be off the hook when a data breach notification must happen? There's a lot of different stuff going on there. I don't know if there's a piece or two of that that you would like to take. Well, in situations where you're dealing with a client who has to submit data remotely, if you provide them with a service like, say, IronBox, that data will be encrypted on its way out, and if someone is monitoring with, say, Wireshark, packet-sniffing the information, all they're going to see is encrypted data. So your client essentially will be protected from that. As long as the communication is encrypted, you'll be fine. So that's one component. What was the other piece of the comment? I mean, when are you off the hook, or how does that interact with the data breach notifications that must happen? Well, I'm not an expert on data breaches and the policies and the legal aspects of it. I think that may be a question for Amy to answer. Yeah, and those data breach notification requirements are often state by state, so answering generally is very difficult, outside of maybe the medical-legal partnership context, where partners that you're working with may be covered entities under HIPAA. I don't have an easy answer for that one, Darren. It's a great question, though. All right, thank you very much. Again, it's Jay Stanley from the ACLU. So just by way of a thousand-foot introduction: we're living in an age of data now. Just as in the age of machinery, in the industrial age, everybody began to see everything as a machine, the human body, organizations, et cetera, now we're seeing everything in terms of data.
And we are looking especially at big data, and everybody's exploring how it can be used, what it can be used for, what the limits of its applicability are, what the pitfalls are, et cetera. I think this is something that's going to be interacting with all of our lives in the coming years. Big data is a term that's tossed around all the time. It's got a broad definition and a more precise one. Often it's used just to mean, well, there's a lot of data about a lot of things floating around. But the more precise definition is the fact that you can use large amounts of data to find statistical correlations that the human mind can't perceive. It's sometimes called the macroscope, as opposed to a microscope or a telescope. And so we have the fields of predictive analytics and machine learning. One example that's become sort of canonical is the reporting by the New York Times about how Target is using analytics on its data. Basically, what happened is that a data scientist found he was able to identify about 20 products that, analyzed together, indicate when a customer is pregnant, and also her due date. And there was a guy who was mad because his daughter was getting pregnancy coupons, et cetera. It turned out that she was, in fact, pregnant, and Target knew it before her father did. Data analytics is also being experimented with in the national security context. This is a slide from one of the slide decks released by Edward Snowden about a program called SKYNET, which has reportedly been used to target people who were going to be the subjects of military strikes. Not all uses, of course, are spooky. There are many good uses of data analytics that are beginning to permeate our lives.
Everything from computer vision, which is getting amazingly good, and automated cars, et cetera, to healthcare, where there's the potential that very important discoveries can be made using large amounts of data. One of the areas in which we're beginning to see it explored is the delivery of government services. This is a story, for example, about a program that tried to use big data to identify households that are most at risk of being evicted and to target those households with services. And I think we're going to see big data analytics explored in more and more areas like this, not only in areas that raise privacy questions, but in areas where government agencies are trying to provide services to people. Some of those agencies may try to use analytics themselves, and in other cases they may be helping people who are the victims of analytics. I'm going to introduce nine questions that I think everybody needs to ask when data analytics are used in this way. The first question is: do the judgments that data analytics are being used for actually lend themselves to analytics or machine learning? Machine learning and analytics can work very well, amazingly well, in areas where there's a large amount of training data that can be used to form models, or to allow the machine to create its own models via machine learning. But there are other areas where they don't. For example, credit card companies have become very good at spotting potential fraud because of patterns: people who steal credit cards sometimes go to a gas station to see if the credit card is good, and if it is, they go and buy something very expensive, so that's become a flag for the credit card companies, because they have billions and billions of transactions.
Other areas, such as trying to identify potential terrorists, which is something that the National Security Agency has experimented with, are much, much dodgier, because there just isn't that much data about terrorists and their patterns. So that's the foundational question that needs to be asked. The next question is: is the analytics discriminatory? There's a lot of potential for data analytics and algorithms to be discriminatory. Here's a study done by Professor Latanya Sweeney at Harvard. She found that searches for names that are more typically used by African Americans, such as Trayvon, DeShawn, Darnell, and Jermaine, would throw up ads on Google's website for public records about arrest records, whereas primarily white names like Jeffrey, Jill, and Emma did not generate those ads. Nobody thinks that Google was sitting in Mountain View programming in this logic. The logic was probably developed by machine learning; it incorporated and reflected the racism that's out there in society, and that became absorbed into the algorithm. Another example of this is the TSA, which in the Bush administration was pushing a program called Secure Flight, which we now have. It's actually called... And part of the original vision of that was that they would do background checks on every flyer to measure how, quote-unquote, rooted in their community they were. Well, it turns out that African Americans move more often than white people, so even though that was not the intent, any measure that tried to capture rootedness would be racially discriminatory. The racially discriminatory aspects of our society creep into computer algorithms even when nobody intends that to happen. And this was taken from the news: the Tay tweets. Tay was a bot created by Microsoft that was supposed to learn from its interlocutors and quickly began spewing racist material because people were trolling it.
There's a quote at the bottom from the Microsoft developers, who were very abashed after this happened: AI systems feed off both the positive and negative interactions with people, and that's the heart of the issue here. Next question: is the analytics fair, or does it incorporate guilt by association? Racism is in some sense a category of guilt by association, but analytics can be unfair in other contexts too. This is a story that was on Good Morning America about a guy who found that his credit rating had been lowered. When he called to find out why, he was told it was because other customers who had used their cards at places where he'd shopped had poor repayment histories. In other words, he'd shopped at stores where other customers had bad credit, so his credit rating was lowered. Now, I have no reason to believe that the analytics American Express used for that were invalid; they were probably accurate. But from his point of view, it was still grossly unfair. So analytics can be grossly unfair even when they're accurate, and sometimes especially when they're accurate. Next question. Next slide, thank you. I mean, this is a key question: how accurate are the analytics? Analytics that are inaccurate obviously can be damaging in ways that go beyond fairness. What are the consequences of error? If your algorithm for identifying people who need more aid, or who are at greater risk of this, that, or the other thing, is inaccurate, what are the consequences of that? That's a fundamental question that needs to be asked. Here's an area where this came up: the Chicago police used a heat list, using analytics to try to predict who would be involved in crime. Some people called it the list of the most dangerous people in Chicago. We were asked what we thought of this, and how do you evaluate something like this? Is it a good thing or a bad thing? Well, everything depends on what is done with it. What are the consequences if it's wrong?
Take the people who are identified. Do they get extra aid and benefits and support that they might not otherwise get? Or are they getting police surveillance, intimidating visits, et cetera? So that is one of the key questions that has to be asked whenever analytics are deployed. Next question. The basic message here is that incentives matter. If you have a bureaucracy that is a security agency interested in looking into people's lives and putting them on watch lists, that's one thing. If you have an agency whose bureaucratic mission is fundamentally to help people, that matters, and it's important. So that's an important question: does the agency have an interest in the well-being of the people that it's trying to identify via the algorithm, or is it going to hurt the people that it identifies? What does the analytics replace? You're trying to identify people who are at higher risk of something; what are you replacing? I mean, are you using monkeys throwing darts at a map? Or, more seriously, are you using highly subjective measures, guesses by poorly trained staff, in which case analytics that are somewhat inaccurate might still not be so bad compared to what they're replacing. On the other hand, if you're replacing highly expert and accurate human decisions with something much more inconsistent and quirky, that's not so good. And a really good question: if you have a group of people who have already been identified, and you don't have enough resources to help them all, then the algorithm is merely triaging within a group, making a guess at who needs the most help. If not everybody is going to get help anyway, the consequences of error, again, are much less serious. So those are three variations on the same question. Does the program create big data stores? The program might do a lot of good.
It might be helping people, but it might involve incentivizing and creating huge data stores in a bureaucracy, and those data stores might hurt people in other ways. Obviously, as Amy and others have talked about, large data stores create serious security questions, for one. And even if you're helping people in one context, if you're compiling a lot of data about them in order to figure out who needs help and who doesn't, it may come back: the judgments and the data could be used in other ways that could be harmful to them. Which is really just another question that is important and needs to be asked here. So those are the nine questions that I had about these kinds of uses of big data. The bottom line is that we really are in a new era. Everybody is exploring how machine learning and data analytics and algorithms can be used in government and aid processes to figure out whether things can be improved and whether services can be delivered more helpfully, and so forth. But there are a lot of pitfalls. Experts need to be looking at these things. There are questions about the sources of the data, the accuracy of the data, and the accuracy of the predictions made by the algorithms. It's really a wild world out there, and we're just beginning to figure it out. So everybody needs to proceed with caution and ask questions like the ones that I've offered. That's my contact information, and that's what I had for today. Thank you all very much. Great. This is Wilnita again. I just want to thank everyone again and encourage people to email me any questions that they have. I can help connect you with Amy, Jay, Joe, or Mike if anything comes to mind. One thing that we struggled with was the scope of how to present this information, so if you have any ideas on how we can present it in an easier-to-digest format for the civil justice community, I'd love to hear any feedback you have. And thank you, LSNTAP and Brian, for all your help.
Well, then, thank you so much for putting this on. We greatly appreciate it. Interesting topic, a lot of stuff to think about here.