Hello everyone and welcome to this UK Data Service webinar introducing big data and social research ethics. I'm Marguerita and I'm a Senior Communications and Impact Officer, and presenting today is Libby Bishop, Producer Relations Manager at the UK Data Service. Thank you very much Marguerita, and hello to everyone listening and thank you for joining. As you may have noticed, I have slightly changed the title for today because I want to focus on this issue of practical ethics for big data research, and it is also going to be an introduction. The area has exploded in terms of content and I won't go into too much detail, but I hope this will serve as a useful introduction for many people. In 2014 NHS England announced a database called care.data, and its objective was to integrate patient records from general practitioners (GPs) and hospitals. The launch created a big kerfuffle: it raised all sorts of problems around consent, there were inadequate opt-out provisions, and there were ambiguities about who would be able to use the data, in terms of entities such as private insurance companies and pharmaceutical companies. The public outcry prompted a formal review that documented and supported many of these initial criticisms. In July of this year NHS England withdrew the care.data proposal completely, stopping, at least for now, research that most people agreed could have helped patient health, helped service delivery for the NHS and certainly helped medical research. The key message, of course, is that once lost, public trust cannot easily be regained, and when public trust is broken, vital data won't be generated and won't be shared for research, key research on topics ranging from food safety to cancer treatments to immigration. That research is what's at stake, and for me it's why we have to find pathways to ethically responsible ways of using big data. I'll come back to this theme of trust throughout the talk today. So as an overview for today, as Marguerita said, I'm going to speak, I hope, for about 40 minutes and leave plenty of time for questions. I'm going to give a brief introduction to key ethics issues, especially as they relate to human subjects research and privacy. I will give a similarly short introduction to the meaning of big data. I want to go into a bit of detail on three examples, cases of ways big data have been used for social research, looking at the ethical issues that arose in those uses, and I will close by trying to provide lots of pointers and examples to tools, guidance, materials and resources that I hope can help people work in this area. So here's a quick overview to get us started. There are hundreds, quite possibly thousands by now, of guidelines, frameworks and so forth about research ethics, and they vary across funders, discipline areas, professional bodies and so forth. But throughout I've tried to work with one relatively simple framework, coming from the Belmont Report in the US in 1978, with three broad themes, and I think these do really embrace the range of issues that typically arise as ethical issues in social research. So let me go through these briefly.
First is respect for persons, and that respect includes respecting people's autonomy: autonomy as being capable of making independent decisions, respect for their individual character as human beings, and respect meaning that they are to be treated not as mere means, not as instruments to other ends, but as ends in and of themselves. In many definitions of respect, particularly those coming out of the UN, the Declaration of Human Rights and subsequent documents, the Charter and so forth, this includes an explicit protection of privacy. The second is beneficence, a big word that very simply means doing good. Ideally the kind of research we do should do good. That is a laudable goal, but it's never that simple. It is of course really a matter of trying to maximize the good and the benefits while keeping harm to the absolute minimum. The third principle is justice: an idea of fair distribution of the risks and benefits of doing research, the idea that both the risks and the gains should be shared as equitably as possible. These principles have been implemented through various structures, ethical review boards, institutional review boards that go under different names, and these typically carry out ethical review of research with humans. Now, all these goals, respect, beneficence, and justice, sound quite commendable. And in fact, they are. But crucially, when you get down to actually implementing them, they're not always mutually consistent. We know of cases, certainly in medical research, where it's impossible to protect people from harm altogether. Even in social research, not harming individuals may conflict with doing social good. Under some conditions, extreme conditions perhaps, we might support the idea of a torture victim publishing data about her experience, even a detailed version of her account, possibly even with attribution, because of the possibility that that account might help contribute to better social justice. So these principles are very useful, but they're not absolute guidance. They have to be thought through, and their inconsistencies have to be worked through. And this is the case with nearly any ethical question that's significant, that's got real meaning behind it. There are almost never simple right or wrong answers, and it takes real work of moral reasoning to reach acceptable solutions. So let me turn now to big data. Probably most of you are fairly familiar with the term; it's certainly practically dominant in the media these days. I won't go into hard and fast definitions. You might be familiar with some of those that are used: things like data that is high in volume (large), high in velocity (changing quickly), and high in variety (many new genres and shapes of data). I rather like this definition: if it's too big for my hardware and software, then that makes it big data. Let me take a step further and actually ground this idea of big data in specific genres or types of data; I think that makes it easier to get our heads around it. There's a quite useful document coming out of the OECD that used the phrase new and novel data. It's not much easier to say than big data, but I think it does capture some additional aspects of data that matter. So the genres: things like government data, tax records, licensing; another category of commercial transaction data; and internet data.
We're quite familiar with this from search and social networking activities; tracking data, tracking movement, GIS information; and image data, surveillance, and so forth. Because this session will focus largely on big data for social research, meaning dealing with people, I will be looking in particular at social media, social networking sites and the data coming from those, but of course the area is broader than that. So the data is big. It may be new. But what's different about big data? It's certainly valuable for social research; it's got great potential and new capabilities, but it does raise some questions and challenges for social research. One of the most important differences is that big data are usually not collected by researchers. Nothing is absolute, but that's a generalization. And the implication is that it doesn't go through formal ethics review, any of those kinds of systems or protocols I talked about earlier. So that can be an issue in terms of whether or not projects get vetted against ethical standards. Relatedly, big data are usually not generated specifically for research purposes. So think about tax data: information submitted to the government. It can have, and has had, great value when used for research, but that's not why it was produced. And when data get used for a purpose different from the one they were produced for, that can raise questions. In particular, the protections typically used in human subjects research, things like consent and anonymization, have to happen early in the data collection process, at the point of collection or early in the processing. And when the data haven't been generated for research purposes, that can sometimes create challenges for working with big data. So the issue really is that it's not so much that big data is a problem; it's wild data that is the problem. That's what gives us our challenges. So let me now turn to our cases, and I first want to look at Twitter. This is where I start thanking lots of people who have helped me learn more about these areas and whose work I'm going to be relying on today, in particular for this first case, which is looking at Twitter and cyber hate speech. This is a study by Matthew Williams and Pete Burnap from the Social Data Science Lab, which followed the COSMOS project at Cardiff, and they were using Twitter data. They wanted to find out whether a terrorist act triggered an increase in cyber hate language in social media, trying to get at this issue of the relationship between what goes on in the physical world and what goes on in social media. So they looked at tweets following the murder of Fusilier Lee Rigby, a British soldier, in Woolwich in 2013. And there was a lot of data: over 400,000 tweets collected over 15 days, collected via the streaming API. This is something the Social Data Science Lab has been set up to do. They created a set of keywords, using Woolwich, Rigby and so forth, and collected the tweets.
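As an aside, and purely as an illustration rather than the Social Data Science Lab's actual pipeline, keyword-based collection from the Twitter streaming API looks roughly like the sketch below, written with the tweepy library (version 3 style interface). The credentials, output file and keyword list are all placeholders.

```python
# Minimal, illustrative sketch of keyword-filtered collection from the Twitter
# streaming API, using the tweepy 3.x interface. Credentials, filename and
# keywords are placeholders, not those used in the study described here.
import json
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class KeywordListener(tweepy.StreamListener):
    def __init__(self, outfile):
        super().__init__()
        self.outfile = outfile

    def on_status(self, status):
        # Write the raw JSON of each matching tweet, one object per line.
        self.outfile.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        # Returning False disconnects the stream, for example when rate limited.
        return False

with open("collected_tweets.jsonl", "a") as outfile:
    stream = tweepy.Stream(auth=auth, listener=KeywordListener(outfile))
    stream.filter(track=["woolwich", "rigby"])   # keywords for the event of interest
```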
Of course, having collected the data and done the analysis, they wanted to be able to show what cyber hate terms were being used, their frequency and geographic distribution, and this had to be done without disclosing identities, staying in line with ethics guidelines. For this kind of aggregated analysis, certainly the work wasn't easy, but the protections that needed to be in place were not especially difficult. The word clouds are not disclosive. You can keep geographic distributions, such as in the maps here, at a level of aggregation that is careful not to disclose identities, and likewise for the frequency charts they produced. So far, so simple. But of course, people want to publish data, not just analyse it, and issues then arise in dealing with publishing Twitter data. There are lots of reasons why you might end up wanting to share or publish data. Funders sometimes require it. Publishers and journals often require it now. You may simply have a desire to share your data in the spirit of open science. All of these are good reasons. And in this case, the journal they were submitting a different study to, PLOS ONE, required data to be made available. But there is a challenge: Twitter doesn't permit the sharing of large, full-tweet data sets with third parties, even archives or publishers. It does permit sharing of limited amounts, up to 50,000 tweets, in non-machine-readable forms such as PDF. There are multiple reasons for this; I'll just point to one for now. One reason for this policy is that Twitter has a commitment, and indeed a legal obligation, to delete tweets if users ask it to do so. And if data are archived and published, essentially data that are out of Twitter's control, it can't keep that promise to its users. So, Twitter does permit archiving of tweet IDs and user IDs, the kind of material you see here in this column. Quite thin data, but at least you can archive the tweet and user IDs. These can be what I call rehydrated: if someone has access to those IDs, they can go back to Twitter and obtain the data. And that's exactly what they had to do, and that's what Williams and Burnap made clear in this... sorry, actually different authors, on a related issue of depositing the data: it was Luke Sloan and others, in this data availability statement. So when they went to publish in the journal, they had to go into this level of detail about why they couldn't share the full data set, but were able to share what they did. So the world is complicated when it comes to sharing and publishing, even with anonymized social media. And this result is far from perfect. The original and the rehydrated data sets probably will not match because of deletions, and there may be some other technical reasons as well. Even more troubling, you don't always know how they don't match, and for research that can be more of a problem even than missing data. Also, a bit uncomfortably, the capacity to do this at all depends on Twitter's discretion, on it continuing to permit this to happen. And quite frankly, if Twitter disappears, or is bought and another company changes the rules, we could lose access to this kind of data. So we've got a challenge: we're now working in an arena where we're not meeting what we would like to do, at least by the highest standards for research transparency and replication. It's an imperfect outcome, but there is work happening, and at least we do have access to these IDs. I'd point to this really outstanding piece of work by Weller and Kinder-Kurlanda, a manifesto for data sharing in social media research, that goes into this topic in much more detail and explains some instances at the German data archive, GESIS, where more complete Twitter data sets have been able to be preserved. So good success, but still some challenges.
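To make the idea of rehydration a bit more concrete, here is a minimal sketch, not the authors' code, of turning a file of archived tweet IDs back into full tweets using the tweepy library (version 3 interface). The credentials and filename are placeholders, and any tweets deleted since the original collection simply come back missing.

```python
# Minimal, illustrative sketch of "rehydrating" archived tweet IDs into full tweets.
# Assumes tweepy 3.x and a plain text file with one tweet ID per line.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("tweet_ids.txt") as f:                                # hypothetical ID file
    tweet_ids = [line.strip() for line in f if line.strip()]

rehydrated = []
for i in range(0, len(tweet_ids), 100):                         # the lookup endpoint takes up to 100 IDs per call
    batch = tweet_ids[i:i + 100]
    rehydrated.extend(api.statuses_lookup(batch))

# The shortfall between requested IDs and recovered tweets is exactly the
# mismatch described above: deleted or protected tweets are silently absent.
print(f"Requested {len(tweet_ids)} IDs, recovered {len(rehydrated)} tweets")
```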
But now I want to turn to something else, and before we do that, I want to do a poll. Marguerita, I think you could help me out with a poll here. Do you think it's acceptable to publish the content of public tweets without anonymising them? Yeah, I've launched a poll. Okay, I'll leave it on for a couple of seconds, and then I'll close it. So three, two, one, and I'm closing it. Right, so 36% said yes, and 64% said no, as you can see on screen. Okay, well, not too surprising, and there's good reason to think that, but now let me tell you a bit more and see how people react to learning what the authors actually had to do in order to share tweet content. So there are actually some further constraints from Twitter in terms of sharing and publishing actual tweet content. Twitter specifies that tweets cannot be altered, not even for anonymization purposes. And again, it comes back to this problem about deletion and needing to respect those requests for deletion. But it does seem like publishing unanonymized tweets might be permissible. Again, it's a bit technical: tweeters do consent to limited third-party reuse, and we might stretch that to think that it could be permitted. But really the other, more common argument is that tweets are public. We see them, I'm excluding private tweets here, of course, but they're out there. We see them quoted in journals and newspapers, retweeted and so forth. And of course, for those of you familiar with the OKCupid case, this was the argument made: the data are public, they're already out there. But as I said, it does get rather complicated with Twitter. And the authors here went back, I would say, to some first ethics principles, if you will, to rethink whether or not it was acceptable to publish tweet content, came up with a different answer, and didn't publish directly. They went through a consent procedure instead. Let me try and explain a little what that looked like. Firstly, I think it's obvious how tweets can disclose identity. If you publish the text, someone can search on a quote from that text and, through a search engine, possibly reveal the user ID, and that can connect back to a real name. So ordinarily, with this kind of disclosive data, standard practice would be consent and anonymization, removing the identifiers. But again, this is what Twitter doesn't permit. Secondly, they had actually done extensive research with users of Twitter and found that a large number objected to unconsented reuse, especially if people were identifiable. We can argue about whether or not this is a correct interpretation of the terms and conditions, and so forth. But the point for me is that the researchers here went back to what they could determine of their own users' expectations for the data they were trying to share, to figure out what their understanding was and what should or shouldn't be done. They didn't rest their entire decision about how to handle the situation on that, but they did use it in what they decided to do. So to me, this again comes back to the respect principle, going beyond minimal legal obligations in how to handle difficult data.
Moreover, they were able to figure out a process where consent was actually feasible on a small scale, because for this content, what they were looking at was a smaller set of hundreds of texts from tweets, and they wanted to be able to reproduce just a subset of those in academic publications and for other dissemination. And they handled the consent procedure through Twitter itself: they could tweet tweeters directly, sending a tweet to them to request consent, making clear that if they consented, the content would be republished without anonymity. And full information, the equivalent of an information sheet, was provided via the website. So the entire procedure could be handled this way, and it was successfully done so. Again, though, they kept coming back to this issue of protection, recognizing that their responsibilities went beyond the minimum of legal obligations or the minimum duties embedded in things like Twitter's terms and conditions. In addition, there's this extra responsibility, a duty of beneficence to participants in the research. So they went on to take into account the fact that there was significant risk in this content. The subject matter was hate speech; emotions ran very high around this issue, certainly in some parts of the UK. That was one factor. A second factor was that the issue had high media coverage; there was visibility around this topic. And they didn't want to subject people to any risk by inappropriately quoting them. So making sure they had consent was necessary, and the circumstances argued for a precautionary approach, which was in fact used in terms of getting consent. But the even better outcome is that not only have they done all this work, they've made it shareable and codified it in a nice document, available here, to help the rest of us trying to handle this sort of data, with a decision-making flowchart of how they handled the situation and how other researchers might do so in the future. And it's this kind of material, and some other similar sorts of resources that I'll circle back to at the end, that I think constitutes the good practice developing around using big data for social research. Let me now turn to the second case, which is about genomes, anonymity and linkage. This takes us a little bit out of the way of core social data, but it makes a quite important point that I want to stress, particularly around anonymization. So I'll use the 1000 Genomes Project as a way of illustrating some points about anonymization. Anonymization has sometimes been seen as a sort of get-out-of-jail-free card, a solution that permits people to go ahead with data processing without having to continue to think too much about ethical concerns. That probably has never really been a very safe belief, but certainly now, in an era of big data, it is no longer an accurate belief. So what happened was that in 2013, in research done by Gymrek and others published in Science, researchers identified around 50 people in genetic databases by linking the genetic information with other publicly available data. And that was possible because genetic databases often hold, in addition to the genetic information itself, much other information. Not always essential for the research, it is worth pointing out, but simply additional data kept on things like birth dates, locations of people and so forth.
And of course public genealogy databases hold much of this same kind of information, and much of it is public, which is good for public genealogy but raises challenges for disclosure. So this is simply one of a growing number of examples, and there are many out there, whether it's health records, or being able to identify people based on small numbers of geolocations between work and home, these kinds of things. There's quite a large body of research out there pointing to this kind of thing, and there is extensive debate about the topic. So I just want to raise the point that the conclusion of this report from the Executive Office of the President in the US, but also the Wellcome Trust here in the UK and many others, is that it is increasingly easy to defeat anonymization. And as the size and diversity of available data grow, precisely what's happening with big data, the likelihood of re-identification grows with it, and that's the challenge that we face. As I said, the debate is extensive; these are just a handful of papers in the area, and there are many more being written, it seems like almost daily sometimes. But here's the point where I want to end up with anonymization, because I think it's easy, in a sense, to throw the baby out with the bathwater. Anonymization thought of as a magic pill is certainly not defensible; as I said, it never should have been, and it cannot be relied on in that way, because of data linkage and the risks that increase particularly with the dimensionality of data: big data, and the collections of longitudinal and cohort studies that have literally tens of thousands, if not more, variables about individuals. So the risks are there. But, and it's a very important qualification, anonymization remains a vital tool for protecting data as long as it is thought of as part of a broader risk mitigation strategy, a set of activities that are done to protect identities and enable safe data sharing. This is a clear conclusion in many sources, including the Information Commissioner's Office and others, and I do highly recommend this report, I think I have the link later in the PowerPoint, the ICO report on big data in particular. So that's where we stand: anonymization is still effective, but it simply has to be thought of as part of a toolkit, not as something that solves all problems. And, quite hopefully I think, the gene bank was able to take this into account. They were actually able to remove some of the identifying variables that enabled linkage from one of the datasets, they placed more restrictions on the data that retained the more disclosive variables, and they were able to keep the less disclosive data available for public use. And this is part of what I mean by the idea of a risk mitigation strategy: working with anonymization combined with other tools, like controls over access to data, to keep it accessible.
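To give a flavour of what one small piece of such a risk mitigation strategy can look like in practice, here is a minimal sketch, with entirely hypothetical file and variable names, of checking how identifying a combination of quasi-identifiers is before a file is shared, and coarsening or restricting it if the groups are too small.

```python
# Minimal, illustrative sketch of a k-anonymity style check before sharing data.
# File and column names are hypothetical; the threshold is a policy choice.
import pandas as pd

df = pd.read_csv("participants.csv")                      # hypothetical input file

# Not direct identifiers, but variables that could enable linkage to other sources.
quasi_identifiers = ["year_of_birth", "sex", "region"]

group_sizes = df.groupby(quasi_identifiers).size()
k = group_sizes.min()
print(f"Smallest group sharing a quasi-identifier combination: {k}")

if k < 5:
    # Options include coarsening variables (birth decade rather than birth year),
    # dropping them, or moving the detailed file into controlled access instead.
    df["birth_decade"] = (df["year_of_birth"] // 10) * 10
    df = df.drop(columns=["year_of_birth"])

df.to_csv("participants_for_sharing.csv", index=False)
```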
Okay, let me now look at the third case, Facebook. Again, many people may have heard of the Facebook research on emotional contagion. In 2014 the researchers Kramer, Guillory and Hancock, Kramer from Facebook and the other two from Cornell, published research that provided evidence that online social networks can transmit large-scale emotional contagion. They demonstrated that reducing positive inputs to user feeds resulted in users posting fewer positive and more negative posts, and the opposite when negative inputs were reduced. So it was an important finding, no doubt. The authors emphasized the meaning of their findings: emotional contagion had been shown to occur without face-to-face or nonverbal cues, quite a significant finding in the psychology research, and with a sample size of nearly 700,000. Again, there was quite a significant outcry, both among the public and in the research community: debates about whether this should have happened at all, whether it was manipulation, whether it was properly consented, and so forth. But I want to take a slightly different tack and look at how this situation might have looked from the point of view of the Cornell researchers. Now, to be very clear, I have less insight here. I don't know any of these people personally, whereas I have worked reasonably closely with the Social Data Science Lab researchers I was talking about earlier, so I am speculating. But the Cornell researchers, one of them was a doctoral student. And I think it's important to note that a significant factor in this research was that it had been presented for ethical review, and the Cornell review board waived its right to review the research. The project produced such an outcry that editors of the journal where it was published wrote an editorial expression of concern and explained what had happened. They said that because the experiment was conducted at Facebook for internal purposes, the Cornell review board chose not to review the study. Now, there are possibly, again possibly, defensible grounds for this that we could talk about. In the US the conditions are slightly different: the data was deemed pre-existing, not identifying, and not publicly funded. Those might have been reasons to exempt it from review. But I want to try a different scenario, and this is leading up to a poll, so get ready. What if the data had gone through a kind of ethical review? Because not too long after this event, Facebook actually established an ethical review board. In the announcement about the review board, the Facebook creators of the board describe it as having the same basic formula as any academic review board: to avoid bad consequences to participants, to protect privacy and information, and to respect people's expectations. So, would you feel comfortable using data from Facebook if that data had been approved for research by Facebook's institutional review board? Marguerita, can we open our poll? Let's launch it. About half of you have voted. A couple more seconds and I'm going to close it. Just so you know, there are quite a lot of you attending today, about 75 attendees, so thank you for attending. Right, I'm going to close this now. And as you can see, 59% said yes and 41% said no. Okay. So, what I want to point out from this example, and I will link it back to the poll results, is that I think this case raises questions that don't necessarily have terribly easy answers. What counts as research? What should review boards cover? Does research have to create new knowledge? Does it have to be in the public good? Is pre-existing data safe to use? And so forth. This was a bit of a trick poll, and of course I did it deliberately, because what's useful to know is that there are now more details, not many, but some more details, about Facebook's institutional review board. Unlike review boards at other institutions, and certainly university review boards, the names of the members who sit on the Facebook board, and any information about its proceedings, are not made public.
It is available to Facebook employees, but to no one else. So I do think this raises questions about whether such a board can provide the level of transparency that we might actually be looking to a review board to provide. I'm sure if I had given you that information in advance, some of your poll answers might have been different, but the key point I want you to take away is this idea of provenance. It's got a bit more meaning for us in the archiving world, but really it just means sources. The same kind of thought and attention we give to the articles we cite and the other materials we use as researchers, we now have to apply to data. And it's even more critical in this world of wild data than I think it has been previously. So, where did the data come from? And how do you know? How can you actually document where it came from? If you've been told it's gone through certain kinds of reviews or procedures, can you double-check that? Do you have access to the transparency needed to check those sorts of things? How did data subjects think the data was going to be used? These are now the questions that we have to grapple with in terms of dealing with wilder data that is coming from new and different sources. Okay, I'll turn to that in just a minute. So now I want to begin to wrap up a little, and I hope I haven't discouraged people too much by suggesting that these matters are complicated, but I do think they are and I don't think they should be trivialized. On the other hand, lots of work is happening out there, and that's a good thing; there's lots of emerging good practice. I don't really like the term best practice because I think it suggests that there's only one, but there are lots of good practices. So let me start with this one. This is the Data Science Ethical Framework that has come out of the UK Cabinet Office. Of course, every document is produced for particular purposes; this one is probably particularly relevant for people doing data projects in the public sector, in government offices, so there's a strong emphasis on public benefit and that sort of thing. But these are useful principles, I think, regardless of where we might be based doing our research. I want to say just one thing: I did put the word tools in bold for a reason. There are lots of guides, guidelines and so forth, but these things, as far as I'm concerned, have to be thought of as tools. They have to be thought of as something that helps us do a job; it doesn't do the job for us. That would be way too easy, but they are tools that help us get work done, meaning good ethical thinking about how to do research ethically. So, as I said, the Cabinet Office one highlights important principles: public benefit; being alert to public perceptions, the same kind of thing that the Twitter researchers did; being as open and accountable as possible, again particularly important for transparency. I also like this document because it goes into detail on each one of these principles, with lots of detailed examples and cases. Here's another one. I've mentioned the Information Commissioner's Office as well. If you're grappling with personal or sensitive data as defined by the Data Protection Act, almost certainly you're going to want to be looking at activities that are called privacy impact assessments. Again, that's part of the requirements around handling personal data, along with things like data minimization.
And there's a separate guide in particular for doing privacy impact assessments from the ICO that's quite useful. Those are a few broad kinds of guides. Now let me look in particular at the topics of consent and anonymization, and good practice in those areas. Consent is still good practice: just because it's difficult, and in some cases not possible, doesn't mean it isn't still the gold standard to strive for. It's the first best. It's also a legal requirement for personal and sensitive data. But of course there are exceptions, and I think this is again where the nuance of ethical work simply has to pay attention. Covert research, evaluation research, and data collected for government purposes, now handled through entities like our colleagues in the Administrative Data Service, don't have consent, but can still be successfully used for research with additional protections. In ambiguous situations it's good to seek additional ethical review, and there is a growing number of entities, like this one, the National Statistician's Data Ethics Advisory Committee, that are making ethics review available, and increasingly some university boards are opening up to charities and other entities as places that will provide resources for doing ethical review. And still the objectives are to protect identities and consider user expectations, and again the primacy of the duty to protect from harm remains. Just a quick flag: a really outstanding piece of work, I think, on disability activism and social media by Trevisan and Reilly is a good example of work done on anonymization and consent. So what about good practice in anonymization? Well, as I've already pointed out, there are lots of good examples out there, and I've got links and details for more of these in the next slide or two, from the ONS and from UKAN, the UK Anonymisation Network, which recently released a book and a quite short video on the Anonymisation Decision-Making Framework. Again, what's critical is considering the risks of linkage, particularly when publishing or coming to share data, and remembering that anonymization is not an absolute solution; it's part of a risk management package or risk management strategy. And this kind of approach is going to be even more necessary under the European General Data Protection Regulation, coming into force in a couple of years, for those of you who might be familiar with that. So, anonymization: again, more tools and resources. I'm not going to go through these in detail, but the links are going to be here for you to come back to. And then I'll start to wrap up with just a couple of points here, coming back to what I see as the UK Data Service's role and how we play in this arena. We are all about trust. We see ourselves as being, and are also legally, a trusted entity handling data. That takes a lot of work and a lot of resources. We provide three tiers of access for data, depending essentially on the degree of disclosiveness of that data. We have a programme built around the Five Safes. For some of our more disclosive data we have facilities that enable a secure lab, so users can come here, or use a secure VPN, for highly disclosive data. I've already mentioned the Administrative Data Service, which is making linkage of unconsented data possible for research under strict ethics and approval panel review. So it does take work to sustain this kind of trust to enable social research to happen, but I think it's essential and necessary.
Finally, just to comment very briefly on some future ideas we have: whether or not we should try to expand ethics training for big data in social research, and we're looking to collect case studies for social data researchers. The Data & Society group in the US has done a great job of this, with a focus on cases particularly for data scientists and computing professionals, and I think we need something comparable for social researchers. So let me wrap up here with just two more points, something about trust, and saying thank you, and then we'll open for questions. Onora O'Neill is an emeritus philosophy professor from Cambridge, quite well known; some of you might recognise her. She talks about the issue of trust and trustworthiness, and she says there's actually a bit too much talk about trust and not nearly enough about trustworthiness. And I think she's right. I think that we, individuals and institutions, have a responsibility to be trustworthy in order to help sustain this network of trust that can enable research to get done, because that's what's necessary for big data, this wild data, to be used for the kind of ethical social research projects that I think are essential for our future. I have many, many people to thank for my work in this area; just a few: the OECD Global Science Forum report is coming out soon, so keep an eye out for that. Sara Day Thomson at the Digital Preservation Coalition has produced excellent Tech Watch reports on preserving all kinds of data. Susan Halford and others at the Web Science Institute, my colleagues at our sister archive in Germany, GESIS, and the Data & Society group in the US with their work on ethics and big data research. So again, my thanks to all of you and the others I've thanked already for helping me learn what I have in this area. Thank you Libby, and thank you to all of you who have joined the webinar today.