 We have reached the final way station in our journey into spatial data science ethics. Our path led us through a spectrum of perspectives on ethics in the geospatial technology field. We surveyed major ethical, legal, and policy issues, which are often intertwined. We read about the emergence of critical GIS and considered its relationship to codes of professional ethics. We analyzed ethics case studies to hone our moral reasoning abilities. We explored perspectives on ethics in the related field of data science. We read about the nature of this new-ish field, as well as the ethical challenges it presents. We analyzed and discussed long-form case studies, which required us to elevate our moral reasoning abilities to accommodate cases presented in greater detail and nuance. Most recently, we considered ethical concerns in the emerging field of spatial data science and from the perspective of organizational ethics. We considered formative thoughts about this new space for collaboration at the intersection of GIScience and data science. We read how the practice of data generation through establishment of correlations across data flows yields unprecedented information about individuals and is responsible for their increasing anxiety and sense of helplessness. Thumbnail cases about a couple of big data analytics firms showed us how easy it is to get in ethical trouble in this field. And another company exemplified the challenge of creating and using geospatial technology for good. In this final lesson, spread over the last two weeks of the course, your focus will primarily be on completing and presenting the term project you selected earlier in this semester. But in this concluding presentation of mine, I want to leave you with thoughts about how an ethics of digital care can help us deal with the stresses of lives and work that are suffused with digital technologies, including geospatial technologies. 
To begin, I'd like to return to James and Stuart Rachels's overview of moral philosophy. I can almost hear you thinking: here we go again, more old white guys going on and on about moral reasoning. Fair enough. But as the Rachelses point out in Chapter 6, most moral philosophers have been men. To make amends for that inequity, I now ask you to consider a moral philosophy called the Ethics of Care that's associated with feminist scholars. The Rachelses observe that the idea that women and men think differently has traditionally been used to insult or belittle women. However, they add, these days most feminists believe that women do think differently than men, but they also believe that women's ways are not inferior. On the contrary, they write, female ways of thinking yield insights that have been missing in male-dominated areas. Thus, they continue, by attending to the distinctive approach of women, we can make progress in subjects that seem stalled. Ethics is said to be a leading candidate for this treatment. As ethicists are wont to do, the Rachelses use cases to make their points. One case is Heinz's dilemma. Heinz can't afford a medication that is his dying wife's only hope, so he considers stealing the drug. Should he? A couple of 11-year-olds, Jake and Amy, respond to the dilemma in revealingly different ways. Jake, thinking like a male, as the Rachelses infer, sees only a conflict between life and property that can be resolved by a logical deduction, namely, to steal the medication. In contrast, Amy responds to the personal aspects of the situation, as females typically do, the Rachelses again infer. "I think there might be other ways besides stealing it," Amy says. "They should really just talk it out and find some other way to make the money." The Rachelses note that most moral philosophers have favored an ethic of principle, like Jake's response, over the ethic of intimacy and caring that Amy's response represents. 
But then again, the authors emphasize, most moral philosophers have been men. Men's theories of obligation, the Rachelses generalize, portray the moral agent as someone who listens to reason, figures out the right thing to do, and does it. Sounds a bit like Michael Davis's seven-step guide to ethical decision-making, doesn't it? In contrast, feminist thinkers like Virginia Held argue that caring, empathy, feeling with others, and being sensitive to each other's feelings may all be better guides to what morality requires in actual contexts than abstract rules of reason or rational calculation, or at least they may be necessary components of an adequate morality. We'll return to the notion of an adequate or minimum conception of morality a little later. At this point, however, after the preceding theoretical background, let's turn to the book Slow Computing by Rob Kitchin and Alistair Fraser, two human geographers at the National University of Ireland, Maynooth. Kitchin is especially well known for his early publications about the web, including the Atlas of Cyberspace in 2001. Slow Computing follows the example of Slow Food, which describes itself as a global grassroots organization founded in 1989 to prevent the disappearance of local food cultures and traditions, counteract the rise of fast life, and combat people's dwindling interest in the food they eat, where it comes from, and how our food choices affect the world around us. Another, more recent spin-off is Slow Scholarship. The manifesto shown here describes it as a similar response to hasty scholarship. Slow Computing begins by stating the obvious: we lead digital lives. Among other troubling statistics, Kitchin and Fraser point out that over 80% of people own a smartphone, with the average person checking it about 50 times a day. Is this just the way things are now, they ask? Should it be like this? In this book, they write, we make two related arguments. 
First, how we interact with digital devices does matter. Digital technologies are accelerating and fragmenting our everyday lives, and the data our devices gather are used to profile and target us. Second, we should step back, even if just a little, to try to seize some self-control. Slow computing is one possible way to be more careful about leading a digital life. But it is more than that. Slow computing is also about seeking and making changes to how our digital society and economy operate and are organized. In Chapter 6 of Slow Computing, Kitchin and Fraser present an ethics of digital care that provides a rationale for the movement they hope to inspire. For us, they write, the principles of slow computing are normatively rooted within the ethics of digital care, self-care and collective care, designed to re-imagine and remake our digital society and economy so that it protects and enables personal and societal interests. They point out that German companies, including Volkswagen, Allianz, Telekom, Bayer, and Henkel, have implemented slow computing practices such as the right to disconnect, because they have realized that having less-stressed workers improves productivity, promotes innovation, reduces employee turnover and lost days, and increases profit. As you can read in this Wikipedia article, legislation related to working hours and privacy has been introduced in France, Italy, and Canada, among other countries. Kitchin and Fraser's ethics of digital care rests on two primary pillars. The first, they say, is an ethics of time sovereignty, which they define as the power and autonomy to dictate how our time is spent. The second pillar, they write, is about data sovereignty, which concerns authority and control over the generation of data about us, the data captured, and how those data are used. Smart cities are a spatial example discussed in Chapter 6. Corporations and the state both promote certain digital futures, Kitchin and Fraser write. 
For example, smart city advocates produce slick videos and glossy adverts forecasting the potential benefits of a city saturated with ubiquitous computing. They aim to enact so-called fast urbanism, from fast-track planning to the adoption of real-time management systems. As I've noted elsewhere, Jiong Jin and colleagues published an illuminating paper about how the Internet of Things enables planners and engineers to design smart cities. Illustrated by a case study involving noise mapping in Melbourne, Australia, Jin and team discuss the data collection, data processing and management, and data interpretation aspects of an IoT-enabled urban information system. GIS plays a role in their framework, specifically for the integration and visualization of geo-referenced data. Considering the mass of data throughput generated by the IoT, Jin and colleagues observe that to make sense of the information and convert it into knowledge, state-of-the-art computational intelligence techniques such as genetic algorithms, evolutionary algorithms, and neural networks are necessary. Machine learning, they conclude, will help achieve automated decision-making and provide useful policy. Their vision would seem to exemplify fast urbanism. An ethics of digital care, Kitchin and Fraser argue, promotes turning the driving logic of smart city technologies away from speed, efficiency, optimization, and technocratic governance and toward fairness, citizenship, social justice, and the public good. They call on us to envisage what we think an ideal slow city should look like at some point in the future, and then to work out the steps needed over the intervening years to make that city. Another spatial example occurs at the intersection of big data analytics and crime prediction. Predictive policing systems in the United States have been critiqued for practicing racial profiling and perpetuating institutional racism, Kitchin and Fraser write. 
They argue that there is clearly a data justice issue here in terms of being able to identify, contest, and prevent any discrimination within these systems. This leads the authors to conclude that there is an urgent need, in their view, for public debate about the ethics of machine learning, artificial intelligence, and data usage. The best surrogate measure of public debate I have found at this moment may be a 2020 article by Leila Ouchchy and colleagues in the journal AI & Society. They used the Nexis Uni database to uncover articles in newspapers, magazines, and web blogs from 2013 to 2019 that dealt with artificial intelligence and ethics. Ultimately, they identified and qualitatively analyzed 254 articles. Overall, they noted a contrast with television and movie portrayals of AI, which often sensationalize the dangers AI poses. They went on to conclude that the public debate on the ethics of AI is in its early stages; however, it appears to be fairly sophisticated, and a crucial recommendation is to increase the breadth and depth of public debate, as well as the participation of relevant stakeholders. You, my friends, are relevant stakeholders. You are alert to the ethical and legal issues that are bound up with the technologies and practices of spatial data science. You have woodshedded your moral reasoning skills. You should be prepared to assert leadership roles in your organizations related to ethical concerns. Consider the impact at the organizational level of one company's project selection guidelines. Let's return, finally, to James and Stuart Rachels's notion of a minimum conception of morality. As I hope you'll recall from that reading and an earlier presentation, they argue that morality is, at the very least, the effort to guide one's conduct by reason, that is, to do what there are the best reasons for doing, while giving equal weight to the interests of each individual affected by one's action. 
In our emerging context of spatial data science, it seems to me that giving equal weight to the interests of each individual affected is an opening for an ethics of digital care, collective care, and self-care. I hope that's something you'll want to help implement in your work and workplace. Thanks for your thoughtful attention to this and to my past four presentations. Take care.