Good afternoon and welcome to our webinar, where we are going to be discussing the General Data Protection Regulation from the EU and the Australian Notifiable Data Breaches Scheme. I'm Kate LeMay and I'm from the Australian Research Data Commons. We are an amalgamation of the Australian National Data Service, Nectar and RDS, which are three NCRIS facilities, and we are funded through NCRIS. If you are interested in information about where we've come from and where we're going, please sign up to our newsletter and check out our websites. I just wanted to quickly give a disclaimer that all the information we're going to be speaking about today is of a general nature; if you have any specific questions about your situation, you need to seek legal advice. We have a great range of speakers today for our webinar, and I'm very pleased that we've got all of these experts to speak to you. In the interest of time I'm going to hand over straight to Anna Johnston from Salinger Privacy, who is going to start us off today. Thank you, Kate. Good afternoon, or good morning for those of you on the west coast. I am the Director of Salinger Privacy; we're a specialist privacy consulting, training and publications firm. I come from a legal background but I'm no longer practising as a lawyer, so bear that in mind: as Kate said, we're not giving legal advice here today. What I am going to be talking about is the privacy law landscape that applies to researchers, or people working around research, and in particular what issues tend to come up most frequently for people working in the research sector. 
On the consulting side, some of our clients are organisations conducting research projects, and quite often now we're seeing lots of big data, data analytics and detailed program evaluations, these sorts of projects, that we get asked to advise on in terms of their privacy implications. On the training side, we also run some workshops for members of Human Research Ethics Committees. So on both sides of the business we have a fair bit to do with people working in the analytics and research space, and in this session I'm just going to give a tiny taster of the scope of privacy issues we see coming up often in this context. What I'm going to be talking about is, as I mentioned, the regulatory landscape, and then the most common privacy issues; the two hot topics we see over and over again are consent and the de-identification of data. Then a little bit about the new legal privacy developments: the GDPR, the European privacy law that has reached into Australia, and notifiable data breaches; and then what's coming next, which our other speakers will speak more about. So, if you weren't already aware, we have a patchwork system of privacy laws in Australia. It can get quite confusing for a researcher who might be working at an institution covered by state privacy laws, working or supervised at a public university, for example, or within the public hospital system, so typically covered by state privacy laws, but who then might want to access data from an organisation covered by a different privacy law, say the Federal Privacy Act, which covers federal government agencies and most private sector businesses, including, in particular, private health sector operators no matter how big or small they are. 
Because of the limited time frame we've got today, I'm obviously not going to cover that entire patchwork, but just recognise that it exists, and that in the research space you are often having to navigate across a mixture of state and federal privacy laws. So today I'm only going to take as an example the rule from the Federal Privacy Act about using and disclosing personal information. Australian Privacy Principle 6 (APP 6) regulates how personal information can be used or disclosed. First of all, if you didn't already know, the definition of personal information is incredibly broad: it's not just what you might consider private or sensitive, it's any information or opinion about an individual who is either identified or reasonably identifiable. That is the scope of what's covered by personal information, and the Australian Privacy Principles regulate how organisations handle the personal information they are collecting or holding. If an organisation wants to use or disclose that personal information, it has to follow APP 6, and this would include, let's say, an organisation being asked to disclose personal information about its patients, its students or its customers to a researcher, who might be from within the organisation or somewhere else entirely. The organisation can use or disclose that personal information under a number of different grounds. The first is if that use or disclosure is for the primary purpose of collection. 
So let's say that you are a company that sells jeans. Your primary purpose of collecting information about your customers is to sell them jeans, maybe online: you're going to ask what their jean size is and the kind of jeans they want to buy, you take some money, you get a shipping address and you send the jeans out to them. That's your primary purpose; conducting research into what shoppers like about jeans is not part of your primary purpose. The next test is that they can use or disclose the personal information for a directly related secondary purpose within the customer's reasonable expectations. This might include, for example, processing refunds for jeans that have been returned because they're the wrong size or whatever. The next ground on which you can use or disclose personal information is if it is required or authorised by another law, so some other law says you have to, or can, use or disclose the information in some other way. If you can't meet one of those first three tests, then you either get the person's consent, and we'll talk in a second about what that means, or you need to look for one of a number of public interest exemptions. There's a whole bunch of other exemptions: national security, law enforcement, finding a missing person. There are also some research exemptions. So in the context of this rule, the common privacy issues I get asked about are, first of all: is the data we're talking about even personal information? Because this is a real threshold issue; if it's not personal information, you don't need to apply the privacy rules at all. So, is it personal information? Has it been de-identified? That's one common question. The next is: what does consent mean in practice? How do I get it? What does it look like? And the third one is: what does the research exemption say? How do I meet the tests? On that first question, it won't be personal information if the data has been de-identified, but what does that actually mean in practice? 
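The sequence of grounds just described can be sketched as a simple decision check. This is an illustrative sketch only, not legal logic: the function and parameter names are invented for the example, and a real assessment of each ground involves judgment that no boolean flag can capture.

```python
# Illustrative sketch of the APP 6 decision flow described above.
# Each flag stands for a ground that would itself require careful
# legal assessment; the names here are invented for the example.

def app6_permitted(primary_purpose: bool,
                   related_secondary_purpose_expected: bool,
                   required_or_authorised_by_law: bool,
                   has_valid_consent: bool,
                   public_interest_exemption: bool) -> bool:
    """Return True if a use or disclosure of personal information
    could proceed under one of the grounds outlined in the talk."""
    return (primary_purpose
            or related_secondary_purpose_expected
            or required_or_authorised_by_law
            or has_valid_consent
            or public_interest_exemption)

# Selling jeans to a customer: that's the primary purpose of collection.
print(app6_permitted(True, False, False, False, False))   # True
# Research into shopping habits with no consent and no exemption:
print(app6_permitted(False, False, False, False, False))  # False
```

The point of the sketch is the order of the questions: primary purpose first, then the related-secondary-purpose test, then other laws, and only then consent or an exemption.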
I'll mention shortly the GDPR, the new European privacy law. It actually sets a test which is quite tough: it says the privacy law stops applying only if you have rendered the data to a point where no one is identifiable at all from it. Whereas the test under the Australian Federal Privacy Act says personal information has been de-identified, and therefore is no longer covered by the privacy rules, if the information is no longer about an identifiable individual or an individual who's reasonably identifiable. So under the Australian test, de-identified data carries a low risk, but not zero risk, of re-identification. It's not necessarily meeting the same zero-risk test as under European privacy law, and that's something to bear in mind. I think there's a lot of confusion around what de-identification means. Sometimes it's treated as a noun, sometimes as a verb. Our approach at Salinger Privacy is to say de-identification is a process, not an end state. So you might use the language 'to de-identify' or 'to anonymise' or 'to confidentialise': it's to do something to the data to try and break that identifiable link back to an individual. It's a set of processes; it's not necessarily a promise that the data is perfectly anonymous or that there is zero risk of re-identification. So de-identified data means, in my view, data to which a de-identification process has been applied. It's not necessarily a statement that the data is anonymous or free of privacy risk. De-identification's utility arises at a number of points. One is if an organisation wants to make its data perfectly anonymous, which is obviously hard to do, but the objective might be to say, well, now we don't need to worry about privacy, or what the Europeans call data protection law, at all. But that's, as I said, very difficult to achieve. 
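To make the "process, not end state" point concrete, here is one minimal de-identification pass: drop direct identifiers and replace the record key with a salted hash (pseudonymisation). The field names and the salt are assumptions invented for the example, and this is deliberately incomplete: a real process also has to consider quasi-identifiers, linkage risk and the release context, which is exactly why applying a process like this does not by itself make the data anonymous.

```python
import hashlib

# Hypothetical field names and a hypothetical per-project salt.
DIRECT_IDENTIFIERS = {"name", "email", "address", "phone"}
SALT = "example-project-salt"

def pseudonymise(record: dict) -> dict:
    """One illustrative de-identification step: strip direct
    identifiers and swap the record id for a salted hash."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    out["pid"] = hashlib.sha256((SALT + record["id"]).encode()).hexdigest()[:12]
    del out["id"]
    return out

patient = {"id": "12345", "name": "Jo Citizen", "email": "jo@example.com",
           "postcode": "2000", "diagnosis": "asthma"}
print(pseudonymise(patient))  # postcode and diagnosis remain as quasi-identifiers
```

Note that postcode plus diagnosis may still be reasonably identifiable in a small population, which is the low-risk-but-not-zero-risk point under the Australian test.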
Other reasons why organisations might want to look at de-identification include minimising data security risks, for example when data is in transit between organisations. Sometimes it's built into the design of a new system or process or database or technology, whatever it is. And sometimes it's to allow the secondary use or disclosure of information, which is the most likely scenario in research. Under the GDPR there's a rule about legitimate interests: it might be easier to meet that test, to enable you to use or disclose personal information, if you've at least tried to de-identify it. And under some research exemptions, and again this differs between the states, the territories and federally, often the Human Research Ethics Committee's approval for a project will require the organisation that holds the data to at least attempt to de-identify it before even giving it to the researcher. And almost certainly there'll be a requirement on the researcher to de-identify the results of their research before they publish. Now on to the question of consent. Remembering back to that rule under APP 6, an organisation can use or disclose personal information if they have the consent of the subject, the person the information is about. But if you're going to rely on that ground to let you use or disclose the personal information, the law sets quite a high bar. To be valid under privacy law, consent must be voluntary, meaning the person was free to say no and not suffer any repercussions. It must be informed and specific, so they need to be told what kind of research is going on here. It needs to be current; you can't rely on something that's too old. And it obviously has to be given by a person with capacity, so it's difficult for younger children, or adults with an acquired brain injury, an intellectual disability, etc. In short, consent must be proactive. Sometimes this is described as opt-in. 
Consent will only be valid if, as I said, the person was free to say no and they still chose to say yes. They have to proactively tick a box to say yes, if you like. It must be as easy to withdraw consent as it was to give it: if they've said yes and then change their mind and want to say no, you can only still claim valid consent if it was easy for them to turn around and change their mind later. And it can't be a condition of doing business with an organisation. So back to my example of the website that sells jeans online: you cannot say it is a condition of buying jeans from us that you 'consent', in inverted commas, to your data being used for this research project, or being shared with Facebook, for example. Consent can only be relied on if the person had the ability to say no and still receive the goods or services they were after, buying the jeans in the first place, for example. So as a researcher, you can't infer a customer's or a person's consent to something simply because it was included in the terms and conditions of a website they used or an app they downloaded. It can't be buried in a collection notice. It can't be buried in an organisation's privacy policy. It has to be a separate, proactive opt-in process that the person has quite actively participated in. However, of course, lots of the time research won't be on the basis of consent, and that's where the research exemptions come in. Again taking the Federal Privacy Act as the example, there's often a rule saying, well, if you can't get consent, which is the gold standard, here's what you need to do next. So a research exemption might say something along the lines of: the researcher needs to demonstrate to the Human Research Ethics Committee that it is impracticable to seek consent. And again, that standard is set quite high, so the fact that it's just expensive or inconvenient or a hassle or takes some time is usually not enough. 
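The validity criteria above are conjunctive: every one must hold, and bundling consent into terms and conditions defeats it on its own. A minimal sketch of that checklist, with parameter names invented for the example:

```python
# Illustrative checklist of the consent criteria described in the talk.
# Every criterion must hold, and burying consent in T&Cs is fatal by itself.

def consent_is_valid(voluntary: bool,
                     informed_and_specific: bool,
                     current: bool,
                     given_with_capacity: bool,
                     easy_to_withdraw: bool,
                     bundled_in_terms_and_conditions: bool) -> bool:
    return (voluntary and informed_and_specific and current
            and given_with_capacity and easy_to_withdraw
            and not bundled_in_terms_and_conditions)

# A separate, proactive opt-in that meets every criterion:
print(consent_is_valid(True, True, True, True, True, False))  # True
# The same boxes ticked, but the "consent" sat inside the website's T&Cs:
print(consent_is_valid(True, True, True, True, True, True))   # False
```

The contrast in the two calls is the point: no amount of formal box-ticking rescues consent that wasn't a genuine, standalone choice.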
The researcher needs to be able to demonstrate that it is going to be at least, in inverted commas, 'very difficult' to find the individuals. So, on to new developments in privacy law. Under the Federal Privacy Act in Australia, since February this year we've had an amendment to the law that introduces mandatory notification of data breaches; I'll cover that in a sec. The other one is, as I've mentioned, the GDPR, the General Data Protection Regulation, the European privacy law which has some reach into Australia. So, data breach notification first. In terms of scope, there are three types of organisations covered by this new law, meaning if you have a data breach, you need to follow these new rules. The first is all organisations that hold tax file numbers. Now, this is actually almost every organisation you can think of in Australia, because as organisations employ staff, they collect tax file numbers about the staff, if nothing else. There's also a whole bunch of organisations that hold tax file numbers for other reasons, like banks, superannuation funds and obviously the tax office. But at the very least, lots of organisations hold tax file numbers, and they may be caught by this rule in the Federal Privacy Act even if the normal privacy principles under that Act don't apply to them. So state public hospitals and public universities, as I mentioned before, are covered by state privacy law, but to the extent that they hold tax file numbers, they're also covered by this particular part of the Federal Privacy Act. That's the first category. The second is credit providers and credit reporting bodies; I'm not going to talk about them. And the third is any organisation known as an APP entity, and I'll say what they are in a second, to the extent that they have a data breach involving personal information. 
So if you are at an organisation that has a data breach involving tax file numbers, you'll be covered by this. If you are an organisation already covered by the APPs in the Privacy Act and you have a data breach involving any kind of personal information, you'll be covered by these new rules. APP entities means all Australian federal government agencies, and almost all businesses and non-profits with a turnover of more than $3 million a year. Then we've also got all private sector health service providers, even if they're under that $3 million turnover threshold; any organisation that is a contracted service provider to the Commonwealth, again even if their turnover is less than $3 million; and then some specific organisations covered by anti-money laundering rules, no need to go into that. So what is required? A data breach means loss, unauthorised access or unauthorised disclosure of tax file numbers, credit information or personal information, those three categories. What makes a data breach notifiable is if it is likely to result in serious harm to one or more individuals. And what's required is notification, as soon as possible, to the Office of the Australian Information Commissioner, which is where the Privacy Commissioner sits, and to any affected individuals. There are some hefty fines for non-compliance. The GDPR has had a lot of hype. This is the new European privacy law. I'm going to go very quickly through this because I can see the time's already being used up. There's been a lot of hype; my suggestion is don't believe most of it. Lots of people claim it's revolutionary, that we have to treat European citizens differently. Some people think it requires you to get consent for everything; this is not true. Some people believe it's really easy to get consent, you just put it in your terms and conditions and make people click yes; that's also untrue. 
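The two-part test just described, a breach of one of the three data categories, plus likely serious harm to one or more individuals, can be sketched as follows. This is an illustrative sketch only; assessing "likely serious harm" in practice is a judgment call, not a boolean input.

```python
# Illustrative sketch of the notifiability test described in the talk.
# The category names are paraphrased; "likely_serious_harm" stands in
# for an assessment that is anything but a simple flag in real life.

COVERED_CATEGORIES = {"tax_file_numbers", "credit_information",
                      "personal_information"}

def is_notifiable(data_category: str, likely_serious_harm: bool) -> bool:
    """A breach is notifiable when covered data is involved AND the
    breach is likely to result in serious harm to one or more people."""
    return data_category in COVERED_CATEGORIES and likely_serious_harm

print(is_notifiable("personal_information", True))   # True: notify OAIC + individuals
print(is_notifiable("personal_information", False))  # False: contained, low harm
print(is_notifiable("public_marketing_data", True))  # False: not a covered category
```

When the first call's conditions hold, the obligations that follow are the ones in the talk: notify the OAIC and the affected individuals as soon as possible.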
And there's a belief that this new right to erasure is going to ruin everything, including making research impossible. So, just briefly, the GDPR as an overview. It's an update of existing privacy laws; it hasn't come completely out of the blue. But the big changes, and the reason it gets lots of attention and lots of hype, are the really significant penalties: 20 million euros or 4% of global annual turnover. This is aimed squarely at the big tech companies. And this is the other novel part of it: it has extended reach outside of Europe. But this doesn't relate to all data about European citizens. What it actually says is: if you are an organisation in Australia of any size, so small businesses included, and you offer goods or services to people in the EU, say you sell jeans online to them, or you monitor the behaviour of people in the EU, and you're doing that actively, proactively, then you might be covered by the GDPR. I'm going to skip over what the GDPR says in detail and go straight to research under the GDPR. It's not expressed in the same way as our Australian Privacy Principles, but basically it has rules about what we would call the primary purpose for which an organisation might collect and use personal data, which is the phrase used in Europe. As long as you've collected it and are using it under one of these grounds, one of which is with consent, but there are five other grounds as well, then it says, well, we've got this idea of compatible purposes that will also be allowed, and this includes research in the public interest. It makes very clear that the default position for protecting privacy during a research activity or project will be to aim for anonymisation, or at least pseudonymisation. And there's been, as I mentioned, a lot of focus on the right to erasure, but it does not apply to research data. 
People do have the right to object to their data being used in research, and in that case the organisation needs to demonstrate that the public interest in the research project outweighs the individual's right to privacy. So that's pretty much it from me, except to mention the next big things coming in terms of privacy law in Australia. The federal government, through the Department of the Prime Minister and Cabinet, is currently drafting a Data Sharing and Release Bill; the whole idea is to open up more government-held data sets for research. That bill will establish a National Data Commissioner, though the original version talked about a National Data Custodian. I don't know why there's been a change in that title, but it'll be interesting to see how the government sets up the functions and responsibilities of that role. And then we've also got this idea of a consumer data right, so having greater data portability. Just finally from me, from our company, we've got a few different resources that might be of interest or benefit to you, and you can have a look at the slides later, no doubt. So thank you for listening to my little spiel, and I'll be hanging around and answering questions later. Thanks for having me. I'm going to very briefly take you through some of our experiences in dealing with the notifiable data breach regime in the context of the PageUp breach. So, from Macquarie University's perspective for a second: why do universities need to comply, given we're not governed by the Commonwealth Privacy Act? The notifiable data breach scheme, yes, it is under the Commonwealth Privacy Act, and while we are primarily governed by the New South Wales Privacy and Personal Information Protection Act, the notifiable data breach scheme is applicable to our controlled entities, and consequently we decided to proactively adopt the scheme. 
This is in part because of the significant crossover in systems and people between the university and the controlled entities, and it's very hard for external stakeholders to differentiate between the two, so this allows for a consistent approach. So what have we done? The notifiable data breach response plan is a sub-plan of our Incident and Crisis Management framework, and it follows the same escalation process. We largely based the actual plan on the one that's available from the OAIC, which is actually very helpful. We have four different levels of crisis management in terms of our escalation processes. Level one: the breach is minor and it might already be contained, and then we just do a notification, usually business as usual, or a single incident. Level two: the breach is significant, but it's contained. We will notify our crisis management team coordinator, who will operationalise our crisis and incident response team as required. Level three: that's an uncontained major incident where the extent is not yet known or the breach is still occurring, so if we've seen some malicious activity occurring on our network, for example, and we're not sure of the extent to which it's actually proceeded. We'll inform our crisis management team coordinator as soon as possible, and they'll definitely operationalise our crisis and incident response team. Level four: we call that a critical incident, and we'll obviously inform the coordinator and then operationalise our response team, who will then respond to LBC as well. An incident investigation team will be appointed through each of these phases as well, with the necessary advisers and expertise; usually, in the case of a data breach, we will have the information security manager involved and the privacy officer, so myself. 
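The four escalation levels just described can be sketched as a simple triage function. This is an illustrative sketch of the mapping in the talk, not Macquarie's actual plan: the labels are paraphrased, and anything that doesn't match a lower level is escalated conservatively.

```python
# Illustrative triage of the four escalation levels described in the talk.
# severity is one of "minor", "significant", "major", "critical";
# anything that doesn't fit a lower level escalates conservatively to 4.

def escalation_level(contained: bool, extent_known: bool, severity: str) -> int:
    if severity == "minor" and contained:
        return 1  # handled as business as usual, single incident
    if severity == "significant" and contained:
        return 2  # notify the crisis management team coordinator
    if severity == "major" and (not contained or not extent_known):
        return 3  # coordinator informed ASAP, response team operationalised
    return 4      # critical (or unmatched) incident: full crisis response

print(escalation_level(True, True, "minor"))     # 1
print(escalation_level(True, True, "significant"))  # 2
print(escalation_level(False, False, "major"))   # 3
print(escalation_level(False, False, "critical"))   # 4
```

The conservative fall-through to level four reflects the spirit of the plan: when an incident doesn't clearly fit a lower level, it is treated as more serious, not less.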
The levels that I've just talked you through also assist in determining the extent of the investigation and the senior management involvement that's required when responding to an incident. Within our crisis management plan, they also ensure that our communications are consistent and streamlined, and that responsibilities and accountability are very clearly defined, including whether we're going to notify external stakeholders or keep things internal. So, a bit of background about what happened with PageUp. On the 23rd of May, malicious activity was detected by PageUp and they launched a forensic investigation quite quickly. PageUp went public with the breach on the 5th of June, but they couldn't say at that point whether client data had actually been compromised. Once the initial forensic investigation was performed, it was determined that some personal information was impacted. This included contact details, including name, email address, physical address and telephone number; biographical details, so gender, date of birth, middle name if you have one, nationality and whether you were a local resident at the time of the application; and then employment details as well, so things like current employment status, company and title. And if your application had gone through to a referee check, then some additional details would have been included as well, such as your technical skills, special skills, the size of the team that you were working in, the length of tenure at your company, the reason for leaving that position and the length of the relationship between the applicant and the referee. 
Some of the more critical data, such as resumes, financial information, tax file numbers, and employment reports and contracts, wasn't affected in this instance. PageUp has several different modules where information is stored, and the attackers weren't able to get to these other modules, so no data included in the new starter forms, onboarding, or performance and learning modules was actually affected. Then on the 18th of June, once they had confirmation of what was actually potentially compromised, they released a joint statement through the OAIC. On the 21st of June, we determined that the breach was also notifiable for us. We did call the OAIC to determine whether we had to go through the notification process ourselves, and they were very helpful in walking us through this. There is some guidance on the OAIC website that states that if more than one entity has been involved in a data breach, only one entity needs to notify. However, in this instance, it was a little bit confusing for our stakeholders, the applicants: many of the users in our recruitment system might not have even been aware that they were affected by the PageUp breach. So in that context, we decided that we were going to notify individuals as well, just to make sure that they were aware of what was going on. This included notifying via email around 86,000 affected individuals, quite a few people, the following day. We then formed a response team to deal with any queries or concerns that came through from our prior applicants. This included myself, the cyber security manager, and also somebody from HR, one of our HR business partners, to make sure we had a consistent approach to responding to all of these queries. This was in part because we thought there were three possible avenues that they could have gone down if they were concerned. 
Two email addresses were included in the actual communication to the affected individuals, cyber security's and my own, and we needed to make sure we were responding consistently, and that if someone had emailed multiple stakeholders, they had one contact they were going through. We only had about 70 people actually contact us with concerns. Most of them were actually requesting that their information be deleted from PageUp and our additional databases as well. So it wasn't as bad as we had actually thought, but making sure things were consistent was something we were really keen on in how we approached it. So, lessons learned. Once the initial period of response started to slow down a little bit, we conducted a lessons-learned meeting to understand what we could have done a little bit better. First of all, communication: communication is absolutely key. Starting with PageUp, PageUp were really forthcoming with their communications to us, which assisted in the notification process. In saying that, though, it's very handy to be aware of the guidance that the other affected organisations were providing. PageUp being a third party, there were quite a few organisations that also notified, and each of them had a somewhat varied response in how they were advising their individuals to deal with the breach. In this instance, some companies actually notified individuals that they could alter their profile and delete the information in the profile themselves. But this wasn't something that we had enabled on our profiles, and it may have caused frustration for some of our users. Internally, we have quite a collaborative working relationship with cyber security and HR, so we were very easily able to form a team to respond to this situation and have a unified response to our queries. 
Also, by using our incident and crisis management response plan, we were really clear on who was responsible for communication, both to our applicants and to the regulator: we had only one person speaking to the OAIC, to make sure we had the same approach the entire way through. Secondly, flexibility in response. You really need to understand the interplay of the other various legislation; in particular, this time round, it was the State Records Act. We're required to retain applicant information for two years after a job has been filled. However, this legislation isn't widely understood by the public, so you really need to ensure you understand the retention requirements and have a flexible response for those who do want their data deleted. In this instance, where people were really concerned, we archived their information on some of our internal systems, where the information couldn't simply be hard deleted at the applicant's request, and most people were quite happy with that. Thirdly, use the available resources. The OAIC actually has some really good tools on their website. We called them on numerous occasions to get some guidance on the notification process, as this was the first time we had actually had to do it. They also assisted us in making the call on whether to notify or not, and helped walk us through how individual stakeholders would actually perceive this data breach and what was going to be required in the notification process. The notification tool on the OAIC's website was also really easy to use and really helpful. Then lastly, make sure that you examine your contractual arrangements. One key thing that we probably could have done a bit better is having the data retention elements in our service agreement more tightly defined, as many of the affected individuals on our notification list had applied to the university many years ago. 
So we probably could have reduced the number of notifications that we actually had to make. Ultimately, the more information that's unnecessarily retained, the greater the risk if you lose it, and the greater the administrative burden, as we learned. One of the biggest learning points overall was how interlinked privacy is with many of our processes across the university. It's really key to have privacy by design as an approach, to ensure situations like this are responded to in a timely manner and that staff across the university are comfortable with escalation and communication of potential privacy breaches. And I think having our notifiable data breach plan as a sub-plan of our crisis management framework also indicates how seriously we take these issues. This also ensured that our communications to external regulators like the OAIC were consistent and in line with our reporting requirements as well. And just picking up where we've left off, a great presentation there on PageUp and Macquarie University. We're going to be talking in this session about the rights of people to control their own data, and whether that is an individual consumer, a small to medium enterprise, a large business or actually a government entity, it's all about data rights. Now, where does that data reside is a very important question. Who stores it, how do they store it, and what kind of agreement is used to transfer that data between, say, the consumer and the business entity, and potentially one business entity and another, and even a business entity and a government? That kind of third-party transference is what we're talking about in this era of open banking, which our next speaker will be talking about. We'll be looking at data portability as stipulated, for instance, in the GDPR and also in the Australian consumer data right law that's being proposed. 
How does consent work between a consumer and a third party, and what rights does the consumer have to know about what data is stored on them? And we're not just talking about personal information; we're also talking about data relating to consumers. So we looked at, for instance, the Facebook and Cambridge Analytica scandal. Sure, consumers can actually download all the data that they've offered up freely on the Facebook platform, but how that data is proactively profiled, and how it's matched up with third-party information, wasn't actually made known to most of the Facebook subscribers on the platform. And when individuals started making requests about their data and its proactive profiling, or its relationship to advertising and, some would say, manipulation, they found a lot more: for instance, some people were identified as having 5,000 data points related to their personal information. The other questions are how we treat sensitive information in the consumer data right, who the accredited companies are that are allowed to handle that transference of information, how they become accredited, and what consent actually means. If a company, for example, has 250 or more employees and is doing business with the EU, is it keeping adequate documentation on the actual databases of personal information or other information it is storing: for example, the type of attributes, how long the data is going to be maintained for and why it's being kept, and how that information may be shared with other third parties? All this information now has to be documented, and that's what good security practices are all about: the better our security practices are, the less harm we can expect for individual consumers. Now, on this slide I have identified business data rights and also government data rights. In the US, for instance, government data rights are as pronounced as consumer data rights, although most people don't believe there is actually adequate privacy in the States. 
So a government data right usually takes the form of licensing. Governments don't actually own or have title to data, but what they do is offer licensing schemes, particularly for technical data and computer software, and they contract out to a small organisation and say, we are going to license this out, although it's an exclusive relationship. But back on the consumer data right: increasingly we're going to see utilities wanting to have data portability, to ensure perhaps the best price on offer for that individual subscriber and to be able to compare prices between one provider and another. Under the consumer data right, companies will have to abide by three things: the privacy safeguards that are stipulated in the bill, the Australian Privacy Principles and, if they do business with the EU, the GDPR. And how to segregate that data will become important, to prove and have evidence that the company is actually abiding by the consumer data right. So I'll probably leave it at that, to give our last speaker some time to talk about the movement towards open banking, and just to say that one of the things occurring is that the blockchain may well be one way to facilitate the accreditation and the transference of data between third parties and consumers, so they have the right to know what is actually being stored about them. Thank you. What I'm going to cover is looking particularly at the issues arising out of quote unquote open data, particularly questions about re-identification that Anna's already touched on, and questions about informed consent that go to that, but I'll probably be presenting a slightly more critical view than some of the others. I start off as someone who's had some exposure to health research ethics committees; I've sat on one of those briefly. 
I've also done reports for the New South Wales and federal governments about open data and those risks, and helped with the foundation of the Centre for Health Informatics and the Data to Decisions Cooperative Research Centre. So I'm not that much of an outsider, in the sense of not being exposed to this, but I do probably come from the consumer, citizen or civil rights advocate perspective. And I think it's important for people on the inside of the data-using community, particularly in research, to be aware of their strict legal compliance obligations, but also that this is an area where trust is extremely important, and trust depends on being trustworthy. The real issue is that if you do something that ultimately ends up hurting or compromising or damaging the interests of the data subjects that you're working for, then that trust disappears very quickly. So there's an element of this that's about compliance and reasonable business and research behaviour, but there's also an element about being aware of the potential for the loss of trust to have quite serious consequences, both for individual research projects and for the capacity to continue after a particularly large disaster. The next question I'd like to touch on, a preliminary one, is about terminology and the use of framing words to guide or focus how people think about what they're dealing with. There's a very brilliant short work called Don't Think of an Elephant! by Lakoff, L-A-K-O-F-F, where he talks about the use of words to essentially win the debate before you even start, by framing what the mental image or the narrative is going to be all about. So the particular words that I'm concerned about here are 'open', as in open data, and 'sharing', which is a concept that's been popularised by, say, Facebook, and also, to a lesser extent, the idea of rights. 
Now, I think essentially the problem is that we're not in a fair fight. We're not in a reasonable, open discussion about this. We're in a public domain where what's known as behavioural economics, or the nudge theory of government, which to some extent came out of some British developments, thinks that it's okay to push people in the direction of doing something that you claim to be beneficial, without necessarily having an open, conscious, rational argument about it. If you can just make them more likely to do what you want, then that's fine. What this ends up in is the fiasco that we see now with My Health Record, which is not actually a clinical record. There the concept of consent is abused, because it's not informed and there's no consent involved, but there's a lot of messaging behind it, or rather a reluctance to have messaging that says what it is and what it isn't, and to discuss any risks, the sorts of things that you would normally expect with informed consent in a medical or a research context. The justification for quite manipulative use of language and of messaging is on the basis of the behavioural economics units trying to use nudge tactics just to get people to sign up. So my caution, I suppose, is that that is not an isolated incident. When we hear something described as open data, that's a brand that's designed to discourage critical thinking about what it is, and in the same way 'sharing', in this sense, is use and disclosure without necessarily having the consent of the individual. Open data is often personal information that's been weakly de-identified and republished without the host taking responsibility. Those sorts of much longer and less snazzy titles are more likely to point attention to the risks and problems you're dealing with, but they're anathema to those who want to essentially use it for PR purposes. 
Anyway, I'll stop the rant on that topic. What I do want to say, though, is that the use of terms like 'open data' and 'sharing' is not neutral and not necessarily accurate; they are part of an attempt to normalise some activities which, when you look closely, may be problematic. When we're looking at open data, there are many types of data, as in the report that I did for the Commonwealth. There's lots of material that is not sensitive information and not personal information, where it's absolutely fantastic to use the ideas from the open source software movement and the open content movements, of encouraging more relaxed publication and use of that sort of information rather than insistence on traditional strict proprietary rights to lock things down. The problem comes when you look at the sorts of information that should not be published to the world as quote unquote open data, and the obvious one would be personal information. Recently we had a visit from the UN Special Rapporteur on privacy, and at the forum on big data and open data held during that visit, it was generally conceded that there was just absolutely no basis for publishing personal information. The real area of dispute was basically to what degree you can ever justify publishing unit-level data derived originally from personal information, from individual medical records. Obviously this is the sort of thing that a lot of the proponents want to do, and there are beneficiaries of this sort of publication. The problem from my perspective is that the re-identification risk, which we'll get to in just a second, is actually a very serious, profound and long-lasting problem that's only getting worse. And so at that forum with the UN Special Rapporteur I mentioned, there was a consensus starting to form around the need for great caution: in a sense deprecating, or perhaps starting off with a presumption against, publishing unit-level data derived from personal information. 
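To make the unit-level data concern concrete, here is a minimal audit sketch in Python. The records, field names and quasi-identifier choices are entirely made up for illustration; the point is simply that a name-stripped dataset can still contain attribute combinations that single people out. (Latanya Sweeney famously estimated that ZIP code, date of birth and sex alone uniquely identify the large majority of the US population.)

```python
from collections import Counter

# Hypothetical "de-identified" unit-level records: direct identifiers
# removed, but quasi-identifiers (postcode, year of birth, sex) retained.
records = [
    {"postcode": "2000", "yob": 1961, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "2000", "yob": 1961, "sex": "F", "diagnosis": "diabetes"},
    {"postcode": "2042", "yob": 1985, "sex": "M", "diagnosis": "asthma"},
    {"postcode": "2010", "yob": 1973, "sex": "F", "diagnosis": "flu"},
    {"postcode": "2010", "yob": 1990, "sex": "M", "diagnosis": "flu"},
]

QUASI_IDENTIFIERS = ("postcode", "yob", "sex")

def k_anonymity_audit(rows, quasi_ids):
    """Return (k, unique_count): the smallest group size sharing a
    quasi-identifier combination, and how many rows are unique on it."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    k = min(groups.values())
    unique_count = sum(1 for size in groups.values() if size == 1)
    return k, unique_count

k, uniques = k_anonymity_audit(records, QUASI_IDENTIFIERS)
print(f"k-anonymity: {k}; rows unique on {QUASI_IDENTIFIERS}: {uniques}")
```

Here k is the smallest number of records sharing any one quasi-identifier combination; k = 1 means at least one record is unique on those attributes and, if an adversary holds an auxiliary dataset carrying the same attributes alongside names, potentially re-identifiable. A check like this is only a starting point for the kind of triage discussed next, not a guarantee of safety.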
I know this is a realm of continuing discussion and controversy, but probably the message you might take about open data is essentially to do an audit and an analysis of the risk profiles of the different sorts of information, particularly focusing on the possibility of re-identification. Essentially you're doing triage. You're saying some of these things are pretty well safe, they don't need much attention, there's no sensitivity risk to that data; there's other information that should be left right out and just not touched; and there's the category in the middle, potentially sensitive information that's been through a de-identification process, but where there remain question marks about the effectiveness. That's what I'll talk about again in a moment. Just before I get off open data as a general concept: I would suggest, in your thinking about it, rather than using the nudge and framing term 'open data', think of it as poorly de-identified personal information. If you're lucky it might be okay; if not, not. That raises the question of risk, and so another point, before I look at the detail of the question of de-identification, is the nature of risk and risk management. The concern that I have here is that by publication of information as open data, you end up with risk projected onto the data subject. That risk is often intangible, it's often unclear what it is, and it's often a very complex set of circumstances that might manifest it. The person may never know about it; they may never appreciate the harm or discrimination against them, or the other consequences that may come from it. So if you are saying, can I get away with this, that's the Facebook or Google model of move fast and break things and disruptive innovation, and of not wanting to be responsible for stuff. The bad answer is, yes, you can probably get away with it, because they won't know. That, to me, suggests that you're not trustworthy: if 
you're trying to take advantage of your greater control, power and knowledge, and of the ignorance and lack of technical capacity on the side of the data subject, then you're someone who's dangerous. I mean, you will get away with it; Facebook for a long time got away with it, until one day they didn't. And so the danger is an attitude to risk that says: because it's ambiguous and uncertain and complicated to see how the harm would manifest, we can probably escape; we're not planning to do an audit; we're not going to check for years and years whether, for instance, the de-identification is broken; we think we can get away with it, and I'll personally be in another job by then. Those are some of the things that seemed to come out of some of the reviews that I did, and that's really quite dangerous. On the other hand, if you recognise that those are the things that drive the risk and can make it worse, that may stop you from going much further. Just to mention re-identification: the big problem is that it's not a one-off thing. Techniques that were reasonably effective in the past at making it difficult to re-identify the person afterwards are likely to fall one by one, particularly with the advent of big data, advanced analytics, machine learning, neural networks, artificial intelligence. All of those techniques mean that what was once accepted as probably good enough in terms of de-identification is no longer good enough. Anna mentioned that in Australian terms the test is not absolute. What I would suggest is that yes, that's true, but the likely risk of future re-identification is only growing, and unless you have an engagement with the global debate about this sort of thing, and unless you're monitoring and auditing and projecting into the future, then it's quite likely you have an under-appreciation of that. The final thing I might just mention in passing is the Data Sharing 
and Release Bill and the consumer data right. The issue there is that there is no right for individuals to sue for a breach of privacy, and so I see these as very hostile attacks on what should be remedying that great hole in our privacy law, the fact that you can't pursue that. As for the consumer data right, although it's presented as a right, it's likely to result in pressure to do that sharing. None of those things look to me like they're starting off from a respectful or trustworthy position; it sounds like they're quite comfortable not to have any of the remedies, particularly the right to sue for a breach, which would be the platform giving people rights to use the law that already exists. They are, in a sense, a way around that, ignoring individual data subjects' current weakness. Anyway, look, forgive me for taking up so much of the time. Thank you for that. One person has asked: is there a source of truth for which countries are GDPR-affected, one that we can rely on to be updated if countries leave or join? This is Anna; I might answer that. Everyone is potentially GDPR-affected. The GDPR is all about the European Union, and there are 28 member states in the European Union; you can just google what the countries in the European Union are. It also directly applies to the three countries that are in the European Economic Area but not in the European Union, which are Iceland, Norway and Liechtenstein, just to be confusing. But the whole point of the GDPR is that it is supposed to have extraterritorial reach, anywhere in the globe, to any organisation that is actively trying to capture data about people who are in one of those countries. So it's not about the citizenship or the residency of your customers; it's where they physically are. So in terms of privacy rights, any of us Australians who go to Italy on holidays, when we are in Italy we are in the EU and we have privacy rights under the GDPR, and if an 
Australian business is actively trying to target us while we are on holidays in Italy, it will have to comply with the rules under the GDPR. So there's no definitive list of countries where the GDPR applies, other than to say every country in the world, if that organisation is actively trying to collect or use data about people who are physically in one of the EU countries. And it's right, Anna, as well, that it also depends on where the equipment storing the data is located: is it in the EU, is it outside the EU? We did see a few companies try to flee Ireland very recently, just to escape the GDPR, by taking their servers out. So we are seeing very interesting manoeuvres by large transnational corporations, taking pieces of equipment, hardware that stores consumer information, out of the area. The only thing I'd add to that is that there's also the pragmatic end of it. You're finding that a lot of the large data giants in the US, and some of the smaller cloud businesses, are recognising that in effect the GDPR has set the global standard, and that the US has sort of vacated the field, in that it has not attempted to produce comprehensive rights that apply to other people. Many industries and businesses are looking at this and saying, well, we'd better try and comply with the GDPR as best we can, because basically we could be touching on Europeans somewhere, so we could be technically subject to that legal jurisdiction; but in any case everybody's heading that way, and it makes it simpler and we'll have less trouble if we do. We've got to do it for someone, so we might as well do it for all. So in practice there's a larger effect than just the narrow, specific compliance jurisdiction. We've also had a question, I guess related to Brexit really: does the GDPR apply to the UK, and will it in the future, when Brexit is completed? That's a really good question. So 
right now it applies to the UK, because the UK is one of those 28 member states. The UK has flagged its intention to keep complying with the GDPR and act as if it is one of those countries even after Brexit, but what they haven't yet negotiated is how the UK will be treated by the remaining countries under the GDPR, because one of the rules under the GDPR is all about limiting the cross-border transfer of data. So, for a transfer from, say, Germany to England: will England become a third-party country, like Australia is for a transfer of data from Germany to Australia, and have to start jumping through hoops to allow that transfer to take place? The UK Information Commissioner's Office is actively trying to negotiate that with the European Commission at the moment, but at least their intention in the UK is to keep applying the GDPR as its form of domestic privacy law even after Brexit. Okay, one other question was about the notifiable data breaches scheme: someone asked if we know of any other universities that have proactively adopted the scheme like Macquarie has. I know that there were quite a few universities affected by the PageUp breach, and Macquarie was certainly not the only university that notified as part of that breach. Are you aware of any other universities that have implemented that? Or if anyone is aware of any, could you pop it in the question box and we'll be able to read that out as well. No, I'm not aware of any. Fair enough. So if there are any other questions that we didn't get to, please put them in the question box and we'll address them later through a Q&A document. But I'd just like to thank all of our speakers for making time to come and speak to us today. It was a really interesting set of talks, and I think that the importance of proper de-identification and consent really came through quite strongly. Related to that, the revised version of the National Statement on Ethical Conduct in Human Research, which is owned by the NHMRC, came out recently, and it has some things to say around identifiability of 
data, so that would be another place for people to have a look, and there are new requirements in there around data management and sharing as well. We had a webinar on that last week, so if you're interested in that, please check out our recording. I'd also say, around open data and personal information, that the Five Safes framework, which was developed in the UK and is now being implemented both in the UK and in some Australian government agencies and some other places, looks at not only making data safe through some process of de-identification, but also at a more holistic picture: who is accessing it, where are they accessing it, and are the uses for which they're proposing to use that data appropriate? That Five Safes framework is a really great framework, and it's also proposed to be used in the new government Data Sharing and Release legislation, so I think that's another really great thing to have a look at in this area. So thank you very much to our speakers. We will be sending out our recording and our resources and slides to everyone who registered after this, so thank you for attending.