Quick introduction to Geoffrey and we'll get started. So Geoffrey has written several books. He's the author of The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature, the editor of Mating Intelligence: Sex, Relationships, and the Mind's Reproductive System, and the author of Spent: Sex, Evolution, and Consumer Behavior. And a lot of his research interests are around that intersection between understanding evolution, human sexuality, and consumer behavior, including behavioral economics and the evolutionary psychology of human sexuality. His talk today is going to be less on these subjects and more on the actual process of research, and in particular on smartphones in research. So I'm sure it will touch on these various aspects. Welcome, Geoffrey. Great. Thanks, Judith. How's the sound level on the microphone? Are you getting this? All right. OK, cool. Yeah, I'm really excited about how technology will, I think, revolutionize everything we do in psychology, but also economics, marketing, political science, really all the behavioral sciences. Because almost everyone in the world will soon be able to participate in almost any behavioral research project that you want to run. And they'll be able to do so remotely, electronically, continually, in a way that allows you to gather ecologically valid data in very large quantities from very large samples. So the promise of smartphones is really that they're going to connect all those brains out there to your research data. Just a few figures. There are currently about 7 billion people in the world. About 5 billion of them currently have mobile phones. Of those, for example, there are about 1.1 billion mobile phones in China, 900 million currently in India, 700 million in Africa. So the world, including the developing world, is already very highly networked through mobile phones.
But of course, only a minority of them at the moment are smartphones, where they could potentially download an app that would allow you as a researcher to gather data. However, the number of smartphone users is also very large and increasing very quickly. It's currently about 1.1 billion as of 2012, and it'll probably be about 3 billion within a few years. And probably within seven or eight years, we'll have about 5 billion smartphone users in the world who you could potentially reach through your research apps. Here's a graph from Ericsson of the increase in total mobile subscriptions, from about 4 billion up to a projected 9 billion by 2017. The yellow is mobile broadband users, which is mostly smartphone users. Actual smartphones are probably about half of that, but the number of smartphone users will increase at about the same rate. And compare that to things like mobile PCs and tablets, which are increasing, but not nearly as quickly. So if you want to reach people to gather behavioral and experiential data, then smartphones are a really powerful way to do it. Also, there's a lot of money behind smartphone research. And we as behavioral scientists, for example in psychology or economics, can kind of poach that research and exploit it and use it to our advantage. Because telecoms is a big industry. Revenue last year was about $1.5 trillion in telecom service revenue. That's what you're paying monthly to your smartphone service provider. But even just smartphone sales were about $200 billion last year. And it'll be a trillion very soon. Compare that to the total global size of the pharmaceutical industry that funds most biomedical research. That's a little bit under a trillion dollars in revenue. As for R&D spending on smartphones, my best estimates for last year are that Samsung spent about $6 billion on R&D directly relevant to smartphone development, Nokia spent about $1.7 billion, and Apple only spent about $1 billion.
But compare that spending on R&D within telecoms to the total money spent on behavioral science research by the federal government. The National Institute of Mental Health only spent $1.5 billion on all mental health research in the US, less than Nokia spent on smartphone R&D. And the National Science Foundation last year only spent about $250 million on all basic behavioral science research combined. So one problem in the behavioral sciences is that most of the technologies we use, we have to pay for out of our grants. But smartphones are different, because they're already being developed by companies, and we can just piggyback on all of that R&D. And it's very economically efficient to do so. I think also, if psychology or the other behavioral sciences weren't invented until today, if there were no history, no methodological inertia behind our sciences, and we asked, how would we gather data on what people think and feel and what they do? If we had to design our methods from scratch, how would we do it? Would we invent paper-and-pencil surveys? Would we invent lab experiments where people do boring tasks in front of computers in cognitive psych labs? No, I think we'd say, well, look, people already have in their pockets a general-purpose supercomputer that has an integrated sensor array, a GPS system, that can do powerful media capture, and that's small enough to hold up to your head to make phone calls also. These smartphones, of which there are 1.1 billion in the world, are ubiquitous. They're unobtrusive. They're intimate. They don't create the same issues of reactivity in terms of subjects knowing they're in a study; subjects quickly acclimatize to their smartphones. And you can reach them remotely, because your participants can potentially download what I call psych apps, research apps that can run your studies, your surveys, your experiments on their phones. They don't have to be anywhere physically near your lab, the way that most psychology research has been run.
And these can gather precise, objective, sustained, ecologically valid data on real-world behaviors and experiences of potentially millions of participants. So it's potentially a very powerful research method. But I should give you a caveat that I'm sort of an outsider and advocate. A, I've never run a smartphone study; I've never written an app. B, I didn't even own a smartphone until six weeks ago. So I'm still learning. And those of you who know the technology better than I do, I hope in discussion you can sort of help correct and fill me in and make some additional comments. So by the way, if you've got questions of clarification, feel free to butt in during the talk. But if you've got kind of substantive questions, then try to save them for later. So yeah. Ecologically valid means you're gathering data about what people normally do in daily life, rather than just what they do in the lab. So it's valid in terms of, it's relevant to their natural ecology. It's one of these pretentious psychology terms that nobody else knows. So thanks for asking. Yes? A question about the word intimate: intimacy doesn't have one common denominator, and what describes intimacy in one situation doesn't apply in another. So all I mean by intimate here is that people are used to it. If you have a smartphone, you have a connection to it. You're interacting with it regularly. It's typically carried on your person. It's intimate in much the same way that clothing is intimate. It's you and your smartphone. Right, that's right. Yeah. You can have quite the opposite of an intimate relationship with your participant if they're on the other side of the world. Yeah, so sorry about that confusion. If you want to learn more about this, I did write a paper for Perspectives on Psychological Science, actually, in 2012. We did a symposium on this stuff at the Society for Experimental Social Psychology in Austin, Texas, in October.
We're doing a symposium at the Association for Psychological Science that I've organized for this May in Washington, DC. And even the Association for Consumer Research will have a symposium on smartphone research methods. So this is getting out into different areas of the behavioral sciences. And obviously, people will sort of pick it up and use it according to whatever their own research questions are. So briefly, I want to give you a sense of what smartphones can do, for example, for psychology, in terms of how their technical capabilities relate to their power to gather data about experiences and behavior. So the focus is not just on what's the machinery inside, but how we could use the machinery. After I run through that for one example smartphone, then we'll get into some of the social and legal and methodological issues that I hope will launch the discussion for us. So my example of what a smartphone can do now is one of the most popular smartphones, the Samsung Galaxy S III. This is no longer state-of-the-art, but it's not that far behind. It's a particularly notable example because it's selling like hotcakes. Samsung sold 20 million of these within the first three months of release in June last year. And the current rate of sales for the Galaxy S III is such that Samsung's making about $1 billion a month in profit just on this one model, which makes it one of the most successful consumer products ever. So what I mean by intimacy is that the people who own these tend to carry them throughout the day. It's a fairly small device, unlike a tablet or a PC. It's light. It's fairly reliable, with a fairly long battery life. And that all allows you potentially to do continual data gathering where the person doesn't even have to actively interact with the smartphone. You can potentially gather the sensor readings passively: the accelerometer and the gyroscope and the compass, the GPS location.
And you can also passively track what apps they're using and what information is in those apps. Who are they calling and texting? Who are they communicating with? What's their social network? And because the smartphone tends to be a very personal and personalized device, it becomes much more familiar, trusted, and unobtrusive than many other ways that a psychologist might have of interacting with a subject. So people will habituate to your psych app being on their phone and passively gathering data. And that means after a while, they'll probably behave more or less normally, rather than behaving as if they know they're in a study. And that also leads to a high degree of ecological validity: if they're behaving the way they normally do, then you've got a window onto human behavior as it normally happens. Second, things like the Android operating system are important for gathering data. First, Android allows potential participants anywhere in the world to download your app for your study, wherever they are, without having any direct physical contact with you. Second, with Android, and I think now or soon even with the iPhone, apps can run in the background without popping up and interfering with other apps or annoying subjects. So if you look at battery usage on an Android smartphone, there are always 20 apps running in the background, and one of those could be your psych app gathering data. Third, if your app has deep root access to the smartphone's internals, and the participant authorizes that, then your psych app can potentially access everything else that's going on in the smartphone: all the phone calls, emails, the web browser use, Facebook use, gameplay, whatever the subject's doing. And you can also have root access to everything in terms of the hardware and the sensors and what they're registering as the person goes around their daily life.
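To make the background data gathering idea concrete, here is a minimal sketch, in Python rather than Android Java, of the buffering logic such a psych app might use: timestamp each sensor event and flush in batches, since batching uploads is cheaper on battery than sending every event over the radio. All names here (PassiveEventLogger, the sources, the payloads) are hypothetical illustrations, not any real framework.

```python
import time
from collections import deque

class PassiveEventLogger:
    """Illustrative sketch of a background 'psych app' logger: buffer
    timestamped events in memory and flush them in batches, rather than
    uploading each one individually (batching saves radio and battery)."""

    def __init__(self, batch_size=100, clock=time.time):
        self.batch_size = batch_size
        self.clock = clock          # injectable clock, handy for testing
        self.buffer = deque()
        self.flushed_batches = []   # stand-in for "uploaded to lab server"

    def record(self, source, payload):
        # e.g. source="accelerometer", payload={"x": 0.1, "y": 0.0, "z": 9.8}
        self.buffer.append({"t": self.clock(), "source": source, "data": payload})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # move everything buffered so far into one uploaded batch
        if self.buffer:
            self.flushed_batches.append(list(self.buffer))
            self.buffer.clear()

logger = PassiveEventLogger(batch_size=2)
logger.record("accelerometer", {"x": 0.1, "y": 0.0, "z": 9.8})
logger.record("gps", {"lat": 35.08, "lon": -106.62})
print(len(logger.flushed_batches))  # one batch of two events auto-flushed
```

A real Android implementation would hang this off a background service and sensor listener callbacks; the batching logic is the part that matters for battery.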
Finally, unlike online studies, where you've got a slow ping that's like 60 or 80 milliseconds, if somebody's doing a task actively on their smartphone, your app can actually track their timing, like reaction time, with millisecond accuracy. So for example, Dufau et al. did a lexical decision app that required millisecond timing through downloaded smartphone apps, so you can run perceptual or cognitive studies through smartphones with an accuracy that you could never achieve online through Mechanical Turk. Breaking the thing open and looking at what's inside these anyway: those of you who haven't opened up your smartphone might not know everything that's inside. Well, one thing that's inside is a pocket supercomputer, a computer more powerful than any computer that existed on the planet 20 years ago, right? For example, the Samsung has a quad-core, 1.4-gigahertz processor; it's probably faster than whatever computer is sitting on your office desk, at least if you've been a professor for a while, or if your professor hasn't had a grant for a while. The CPU can run quite sophisticated mini versions of AI, like speech recognition. Whatever you have the capability to program, you don't really have to worry that a modern smartphone can't run it or isn't fast enough. It will be fast enough. And current 4G broadband also means that even if the smartphone itself isn't quite up to running your study, or doesn't have quite the onboard memory, you can access cloud computing resources for virtually unlimited processing and memory. So it's not just what's on the smartphone; it's all the cloud computing resources your smartphone can access that your psych app could potentially tap into. Second, people tend to use smartphones as personal data repositories, and as memory sizes keep increasing quite rapidly, people will put more and more on their smartphone and empower their smartphone to access more and more background data through the cloud.
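The millisecond-timing point above can be illustrated with a sketch: because stimulus presentation and response capture run on the same device, a monotonic local clock measures reaction time with no network round-trip inflating it. This is illustrative Python, not a smartphone API; the simulated participant and the 50 ms response delay are made up for the demonstration.

```python
import time

def measure_reaction_time(present_stimulus, await_response, clock=time.perf_counter):
    """Sketch of local millisecond timing: clock and response handler run
    on the same device, so no ping latency is mixed into the measurement,
    unlike browser-based studies."""
    t0 = clock()
    present_stimulus()
    response = await_response()   # blocks until the participant responds
    t1 = clock()
    return response, (t1 - t0) * 1000.0   # elapsed time in milliseconds

# Simulated trial: a "participant" who responds after roughly 50 ms.
def fake_stimulus():
    pass

def fake_response():
    time.sleep(0.05)
    return "word"

resp, rt_ms = measure_reaction_time(fake_stimulus, fake_response)
print(resp, round(rt_ms))
```

On a phone the same structure applies, with the stimulus drawn to the screen and the response coming from a touch event; `perf_counter` is the right kind of clock because it is monotonic and high resolution.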
What that will include soon would be, for example, somebody's medical records. So if you're running a psych app study, you can ask, do you give informed consent for us to access your anonymized medical records? People will maybe have structural MRIs of their brains, if they've ever had a brain scan, if that's helpful to have accessible in case you're in an accident and the emergency room needs it. And increasingly, in the era of consumer genomics, people will have their complete genomes on their smartphones, because a genome doesn't actually take up that much memory on a modern smartphone. And that means whenever you're doing a psych study, you could also ask people, do you give us permission to access your anonymized genome? And that'll be a windfall for behavior genetics. In terms of getting jobs, people might very well put validated school records and legal records on their smartphones, or have access to them. Smartphones will increasingly include a lot of access to spending patterns, which is relevant for consumer behavior. And you could easily ask people, can we access your credit score if you're a subject in our research? That can be quite useful. You can access contact lists to develop a social network model for each participant. You can access their music tastes, their photo albums, their home videos. All of this will become available through this wonderful clearing house in the individual smartphone. You can also run experiments and surveys through smartphones, right? They have touch screens. They have button taps and gestures. You can have a virtual keyboard. Now with swipe input, people can type pretty quickly, even without a BlackBerry-style keyboard. You have microphones that can record sound, even using noise cancellation. And modern smartphones typically have at least two cameras, right? A rear camera facing outwards that's typically very high resolution, and a front camera that can record facial expressions and potentially eye-tracking patterns of subjects.
So as you're running a study, if you're presenting visual stimuli, you could potentially have that front camera do eye tracking. Smartphones themselves are fairly small and limited in terms of the quality of the speakers and the video output. But they do typically have stereo headphone jacks that can drive high-quality headphones, so you can present auditory stimuli. And also, increasingly, the micro USB ports on smartphones can drive any size of HD screen you want, including 3D HD screens, and they can drive eight-channel digital audio. So any quality of video and audio you want to present to subjects, you can ask them, hey, as part of this study that we're gonna pay you for, we'd like you to hook your phone up to your local TV for this perception study. Can you do that, please? And increasingly, that'll be done through Bluetooth, not even through a cable. So that'll be easy to run. The communication capabilities of the phones are such that, obviously, you've got broadband internet access, which makes it easy to upload subject data to your lab computer. But also, the fact that smartphones have Bluetooth and Wi-Fi means they can sense nearby electronics. So if I had a psych app running on my smartphone, it could sense all of your smartphones in the room, all of your Wi-Fi-enabled laptops. And that would allow it to tell, am I in a socially dense environment, or am I in a socially non-dense environment? So you can actually track people's levels of social interaction through those capabilities. Also, as smartphones get near field communication abilities, where people are swiping the smartphone to pay for things rather than using a credit card, you can potentially track payments and purchases. Even things like the London buses and Underground will soon be using NFC, so you just sort of swipe where you're going, and you can potentially track where people are. The GPS receivers, which are on all modern smartphones, also allow you to access geolocation data.
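The social-density idea above can be sketched as: count distinct device IDs seen in recent Bluetooth or Wi-Fi scans. A hedged assumption here is that one nearby device roughly proxies for one nearby person, which over-counts laptops and multi-device owners; the scan data below is invented for illustration.

```python
def social_density(scans, now, window_s=300):
    """Sketch: estimate 'social density' as the number of distinct nearby
    devices seen within the last window_s seconds. Each scan record is a
    (timestamp, device_id) pair. Assumes device IDs roughly proxy for
    people, which over-counts multi-device owners."""
    recent = {dev for t, dev in scans if now - t <= window_s}
    return len(recent)

# Invented scan log: phone-A is sighted twice, phone-B once, laptop-C once.
scans = [
    (100, "phone-A"),
    (110, "phone-B"),
    (120, "phone-A"),   # repeat sighting of the same device
    (400, "laptop-C"),
]
# At t=420 with a 5-minute window, only the t>=120 sightings count:
print(social_density(scans, now=420))  # phone-A and laptop-C -> 2
```

A real app would feed this from periodic Bluetooth discovery or Wi-Fi scan results, and would need to handle randomized MAC addresses, which make device counting noisier than this sketch suggests.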
The GPS receiver chips are really very small, smaller than a match head. And that means you have a whole new window onto people's spatial behavior. And you can also do context-aware experiments. Like, you can know, well, we've been tracking this person for a week. We know where they live, and we know where they work, and we know their favorite coffee shop. So when they're at the coffee shop, which is when they're most likely to have time to do a survey, that's when we launch a survey and we go, hey, do you have a minute? We'll approach you now. You can also tell when they're driving, or when they're busy and don't wanna be interrupted. The onboard sensors that modern smartphones have include the three-axis accelerometer, which can potentially track their motion and their activity type, and which can be used to infer things like, how energetic are they? What's their health level? What's their mood? These are currently being used, for example, to track depression: if somebody's just not moving around very much, either their smartphone's not on them, or they're stuck in their house not doing anything, and then maybe their psychiatrist needs to call them. The three-axis gyroscopes, the barometers, and the digital compasses also give you a lot more insight into what people are doing. They would potentially allow you to figure out, are they going up or down stairs? Because the barometers now have an altitude accuracy of about plus or minus a couple of meters. What direction are they facing? Are they dancing or walking or sitting still? And increasingly, there are external sensors that will allow you to do quite a bit of remote physiology and even neurophysiology. These are fairly crude at the moment, like the Emotiv EPOC 14-channel consumer EEG headset. It's $300; several thousand people own these. Potentially, you could do a psych app where you get EEG data through this as people are doing some task, walking around in daily life.
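The context-aware "launch a survey at the coffee shop" trigger described above reduces to a geofence check: compare the latest GPS fix against a learned place using great-circle distance. This is a generic haversine sketch in Python; the coordinates and the 75-meter radius are illustrative assumptions, not values from the talk.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6_371_000.0  # mean Earth radius in meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def should_launch_survey(fix, place, radius_m=75):
    """Hypothetical context-aware trigger: fire the survey when the latest
    GPS fix (lat, lon) falls within radius_m of a learned place, such as
    the participant's usual coffee shop. Coordinates are made up."""
    return haversine_m(fix[0], fix[1], place[0], place[1]) <= radius_m

coffee_shop = (35.0844, -106.6504)
print(should_launch_survey((35.0845, -106.6504), coffee_shop))  # ~11 m away: True
print(should_launch_survey((35.0900, -106.6504), coffee_shop))  # ~620 m away: False
```

On Android you would normally hand this kind of check to the platform's geofencing support rather than polling GPS yourself, precisely because of the battery issues discussed later in the talk.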
And this is quite unlike coming into a neurophysiology lab and having people sit in an EEG setup in an office. There's also a lot of pressure from biomedicine now to promote the development of wearable or implanted or injected biosensors to measure physiology: body temperature, blood pressure, blood alcohol, drug levels, hormone levels, immune system activity, inflammation, stress, et cetera. And a lot of these are really very, very small. For example, here, compared to a penny, is a one-cubic-millimeter intraocular pressure sensor that you can inject into the eye without interfering with vision, which could detect whether intraocular pressure is getting dangerously high if you've got glaucoma. So probably within a decade or two, a lot of people out there with smartphones will also have a whole battery of these biosensors in their bodies, on their bodies, around their bodies. There's a big alliance called the Continua Health Alliance, a trade association of about 220 companies at the moment promoting mobile health, that's driving the development of these biosensors and that certifies Bluetooth biosensors that can communicate with smartphones. These mostly, at the moment, do things like monitoring heart rate, ECG, blood glucose. There's even an ultrasound imager. But there are a lot of companies with deep pockets behind this. So it'll soon really be possible to do a lot of psychophysiology remotely with people who already have these sensors in them and around them. And indeed, I'm advocating that psychologists and other behavioral scientists really start talking to the people developing mobile health applications. For example, the Open mHealth consortium, which is pushing the development of open-source software and platforms and standards for these sorts of biosensors and mobile health applications.
And they're happy to work with psychologists, and they want some guidance about how to incorporate behavioral measures into what they're doing, so that they can particularly focus on things like behavioral health, mental health, post-traumatic stress, pain management, and stuff like that. So in the last few slides, I want to raise some practical, ethical, and methodological issues. Clearly there are some technical challenges with smartphones, and some that I'm sure I don't know about yet. You can have conflicting apps that are greedy for processing and memory resources. Limited battery power is an issue. If your psych app requires continual GPS tracking, that's a big battery drain. If it requires continual Wi-Fi access, that's a battery drain. If it needs Bluetooth on all the time, that's a battery drain. So you need to think about that. Some of the sensors have fairly limited accuracy, but they're getting better. If your psych app is running in the background and it's cycle-hungry, the smartphone physically gets pretty warm, and heat dissipation is a problem and will become increasingly a problem as CPUs get faster. Of course, in some countries where you might want to reach participants, the telecom service is fairly unreliable. And then there are challenges from participant behavior. How reliable is this data gonna be? We now know that Mechanical Turk data is actually pretty good, surprisingly. We don't know about smartphone data. People often forget to charge their smartphone, or forget to carry it around, or leave it somewhere. If they start the study and it's a one-week study, not a problem. If it's a six-month study, they're quite likely to lose or upgrade their phone during the study. Does that mean you lose all their data? How do they rejoin your study with a new phone? They might lend the phone to others, so the person you think you're tracking isn't really your participant.
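One standard answer to the GPS battery-drain problem mentioned above is duty cycling: sample location often while the participant seems to be moving, and back off when they look stationary. A toy sketch, with made-up thresholds and interval values:

```python
def next_gps_interval_s(recent_displacements_m, fast_s=30, slow_s=600,
                        move_threshold_m=50):
    """Sketch of GPS duty cycling to limit battery drain: sample every
    fast_s seconds while the participant appears to be moving, back off to
    slow_s when they appear stationary. All parameter values here are
    illustrative assumptions, not tuned numbers."""
    if not recent_displacements_m:
        return fast_s  # no movement history yet: sample eagerly
    moving = max(recent_displacements_m) >= move_threshold_m
    return fast_s if moving else slow_s

# Displacements (meters) between recent fixes, invented for illustration:
print(next_gps_interval_s([120.0, 80.0]))  # walking or driving -> 30
print(next_gps_interval_s([2.0, 5.0]))     # sitting still      -> 600
```

A fuller version might also use the accelerometer as a cheap pre-filter, waking the expensive GPS radio only when motion is detected; the same back-off idea applies to continual Wi-Fi and Bluetooth scanning.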
Malicious hackers might take advantage of your apps, and people might have some reactivity and self-consciousness about your app tracking some of their behavior, depending on what they're doing. You can potentially recruit anybody in the world who's got a smartphone, but that global recruitment potential is also somewhat limited, at the moment, by selection biases in who owns smartphones. As I pointed out, that's increasingly and soon gonna be everybody, but it's not everybody yet. There are some incompatibilities among different operating systems and sensor drivers and hardware specs. That means at the moment, if you develop a psych app, it's probably only gonna work, let's say, on Android and on particular models of smartphones. So those are the people you need to recruit. It's hard to program an app in multiple languages, so you might only recruit in, let's say, English and Hindi at the moment. The geographical information system databases that allow you to do good GPS tracking are at the moment very good for North America and Europe, but not so good everywhere else. And there are also issues about how you pay subjects globally, what the relevant exchange rates are, and how much payment is appropriate versus coercive in terms of human subjects. And then there are practicalities about programming these apps, right? How do you do it? Most psychologists don't know Java or Objective-C or whatever you guys are doing now. And we basically need computer scientists to develop what are called dev kits, app development kits, that make it easier for us to program research apps for smartphones. And initially that'll require collaborating quite a bit with computer scientists and user experience experts, smartphone manufacturers, perhaps telecom service providers. There are also data analysis challenges, and this is, I think, gonna create some of the biggest headaches. Here's one example study.
Niko Kiukkonen and his collaborators at Nokia did a humble little smartphone study in 2010. They just tracked 168 smartphone users for four months each, and they gathered as much data as they could from those users. The result was a data flood: 15 million Bluetooth scans of nearby devices in the rooms, 13 million Wi-Fi scans, five million GPS records, four million app usage records, many, many sensor readings, audio samples, voice calls, text messages. Even just analyzing the 2,000 videos shot by the users would have been a coding challenge, right? So the problem is gonna be too much data, very quickly. Here's a little comparison I developed. If you look at the data coming out of CERN's Large Hadron Collider, it produces about 300 megabits per second of raw data output, handled by 200,000 processors and 150,000 terabytes of disk space across 34 countries, and they have to store about 15 petabytes a year. Well, if you ran a study where you had 70 participants recording just one hour of high-definition video per day, using something like these little Pivothead glasses, right, 1080p, 30-frames-per-second recording glasses that you can buy for 350 bucks, 70 people recording that amount of data equals that data output of the Large Hadron Collider, which is a lot of data. There are also some human subjects and IRB challenges. One worry I have is that commercial companies like Facebook and YouTube and Netflix and T-Mobile will start recording a lot of behavioral data, anything that's useful to them, and we academics will be hamstrung by current IRB committees, so we can't do anything like what industry's doing. One problem is that even if you say, we promise anonymity with your data, if you're gathering GPS and app usage and call logs and sensor logs, even for a few days, you can figure out who somebody is. It's not credible to promise anonymity given this amount of data.
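Setting aside the exact LHC comparison, the scale of the video data flood is easy to estimate with back-of-envelope arithmetic. The 12 Mbit/s bitrate assumed below for 1080p/30fps video is my assumption, not a figure from the talk; recorded bitrates vary, and the totals scale linearly with whatever bitrate you plug in.

```python
def daily_video_bytes(participants, hours_per_day, mbit_per_s):
    """Back-of-envelope data-flood estimate. All inputs are assumptions:
    participants each record hours_per_day of video at mbit_per_s."""
    seconds = hours_per_day * 3600
    bits = participants * seconds * mbit_per_s * 1_000_000
    return bits / 8  # convert bits to bytes

# 70 participants, 1 hour/day of video at an assumed ~12 Mbit/s:
gb_per_day = daily_video_bytes(70, 1, 12) / 1e9
print(round(gb_per_day))                   # about 378 GB per day
print(round(gb_per_day * 365 / 1000, 1))   # about 138 TB per year
```

Even under this modest bitrate assumption, one small wearable-camera study generates hundreds of gigabytes a day, which is the point: storage, transfer, and especially human coding of that video become the bottleneck long before recruitment does.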
Truly informed consent is hard to get, because people don't realize how much you can tell about what they're doing through their smartphone. People now tend, when they download apps, to just click to accept your terms and conditions without reading them or thinking about them. And if they do that with your psych app, which gathers an awful lot of data about them, yeah, they'll be willing to do it, but it's not gonna be truly informed consent in any sense that we psychologists are used to thinking about. Sure, yeah, I won't even get into what Huawei might be doing with your data through their servers, but that's an issue. It's also unclear even what the IRB and recruitment rules would be. IRB committees, like the one I've been on at the University of New Mexico for two years, would have no idea how to even think about global recruitment or accessing vulnerable recruits. If everybody's got a smartphone, that includes children and prisoners and the mentally ill and the mentally disabled. How do you protect them? A lot of IRBs now ask, where are you gonna store the data? Which computer in which lab room on campus? How is that door going to be locked? They literally ask that. If you're storing your data on the cloud, how do you even explain what you're doing to them? And if you have a malicious little grad student in your lab who figures out, hey, given this amount of data, I could blackmail a participant: oh, look, they're having an affair; oh, they're cheating on their taxes. That might be a better revenue stream than their graduate student stipend. So that's an issue. So I wanna make sure we have time for questions. There are pros, there are cons. Some stuff I hope you guys might be able to help me with in the 35 minutes we'll have for discussion is thinking through some of the legal and moral issues, the human subjects protection and privacy and data security issues, and economic issues, like how are we gonna pay subjects?
What are appropriate amounts? How do you handle exchange rate issues and economic inequality issues? And also, what are some social issues that might arise once we have much, much more accurate views of what people do all day, how they behave? What kind of surprises might be in store once we enter the era of behavioral big data and personal data? All right, I'll stop there, and we can shift into discussion mode. Thanks. That was a very good explanation of the pros of this sort of idea, but I'm afraid I have to start off by saying that it's all sort of appalling, really. The implications of something like this are horrifying to contemplate. A few things leap out. One is that your software, in order for me to trust it, I would want to see its source code. If you're releasing the source code, it can and will be repurposed by malicious actors for their own ends. If you don't show me the source code, I can't trust you, and I shouldn't run it on my device. Also, the issue of gathering information that is reasonably related to criminal activity of varying sorts: how do you handle that? Assuming you've got a study of a reasonably large size, everybody would want this information. It would instantly become the single most attractive database in the entire world to attack. So all the way down the stack, from your network to the individual participants, you'd be facing terrifying data security issues. And, I mean, I like the academy a lot, but I don't think that they have what it takes to counteract advanced persistent threats from, say, Chinese intelligence, for example. Hence the Huawei reference. Sure, but I'm not actually as worried about Huawei as I would be about the Department of Justice, the IRS, or any other agency with subpoena power. Yeah, yeah, and I addressed that subpoena issue a little bit in the paper.
But yeah, if one of your participants has a geolocation track that puts them at the scene of a car accident, right, and the Justice Department subpoenas your data. Well, there are IRB mechanisms now that give the researcher some protection against that, but it's not very strong protection. And you're obligated to comply with it. If you store the information in a retrievable way, it will be retrieved. Not under certain federal IRB guidelines. But, well, it's a gray area. And one reason for publishing this and talking about this is to get these issues out in the open before the stampede. Couldn't you just give people cell phones? Wouldn't that be substantially more ethical? That's the way a lot of these studies have been done so far. You give people PDAs, you give them limited-capacity cell phones; the MIT Reality Mining Project distributed devices to a couple hundred people. But if you want big samples, and you want representative samples, and you want people behaving normally rather than self-consciously, then you really need apps that are downloaded. And even making your source code available, that'll be helpful to the 0.01% of people who understand source code. But it's essential if you want people whom other people trust to be able to say, this is not a virus. You mentioned earlier on in your talk about participants not being aware of it. Now, an application that collects personal information and sends it to a central server where a person is unaware of it is not a research tool; it's a virus. Well, then most apps on your current smartphone are viruses by that definition, because that's what's already happening with most apps. With my relatively informed consent, though. I'm aware of all of the applications that are running on my device, even if I don't understand the details of how they work.
I think 99% of people who have the Facebook app on their smartphone have absolutely no idea what information it's collecting. So let's move on to another question. Yeah? I mean, the other way to put this is, if you were standing here representing an intelligence agency, you'd be describing the perfect surveillance regime. And I think the emotional response to that is natural, and so is the intellectual response. And to say that, well, all the apps are doing this, so mine is okay — I mean, you're suggesting a much broader reach than any one individual app. And it isn't okay anyway, even if it were as narrow. One expects, you know, the academy to have higher standards than Facebook. Well, we do. Maybe. I mean, I'm presenting this as something that will happen, that we should think about before it starts happening on a large scale. From a certain point of view, psychology itself is a surveillance technology, right? That's what we do. We like to watch people and understand them. And hopefully we do it for purposes of scientific insight rather than political control. But you're right that all of these research methods that psychologists will be developing avidly for our purposes will be easily repurposed for many other purposes. And I mean, I spent decades working for behavioral epidemiologists. So the science nerd in me says, yeah, this would be great, it would have solved all these problems — but at what cost? Yeah, sorry. I think there are certainly some very major concerns here. So I think one of the things that might be useful is to try and find some of the places where we can look at it and say, where exactly does the concern stem from?
Is it to protect the users as a whole? What does it mean to be informed? So just as a thought experiment, there's the whole quantified-self movement, where people are working very, very hard to develop applications that essentially do this for their own use. So can we take some of these concerns and say, imagine an experiment in which all of your users were adherents of this notion of the quantified self. They were also going to get this data themselves. Now, given that they're very informed — they know what data is being collected, they want that data collected, but in return they're going to send it to the study — what are your concerns in that situation? Yeah. Just kind of a comment on the security. I don't really think any reasonable person would think that the CIA or Chinese intelligence agencies really need help from the academy to spy on people. I mean, Huawei and everybody — there are plenty of ways of sneaking updates onto people's phones to have applications that monitor them in the background. If a totalitarian government wants to do it, they're going to do that. I guess one question I had: you mentioned a lot of Android there. You haven't mentioned Apple. Is that because Apple has a much more strenuous approval process to get your app into the App Store and onto the phone? Yeah, just because it would be much, much harder, I think, to develop psych apps for Apple given the approval process — and, well, at least when I wrote this paper, Apple didn't multitask very well, so you couldn't have background apps running effectively. And that would make it very hard to run these. Also, the fact that Android's open source, and the fact that it's quickly become the market leader, is pretty compelling. So it's not like I hate Apple, but if I was developing a psych app, it would be an obvious choice.
You'd do Android, not Apple. Yeah. It seems to me the commercial application of this data is actually gonna be brought about and perpetuated by people who want to get the data out. A parallel I can draw: a few years ago, back at home, I got something in the mail from Progressive Insurance that said, plug this into your car computer and you can get a discount. Well, now that discount has turned into the lowest-tier pricing. So I can see the people who are like, no, look at me, I have nothing to hide, I'm not in bars at night till two, I live a very healthy lifestyle — and sure, I'll let this app run if I can get 10% off my health insurance. Yeah. Yeah, I mean, there's another problem here — these social issues. Like, for example, once millions of participants start to understand what can be tracked through smartphones, that means millions of spouses will start to understand how they can track their husband or wife, or their kids. And employers will understand they can track their employees, and everybody will suddenly figure out this isn't just for psychology and for totalitarian governments — it's all of us potentially spying on everybody who matters to us in our lives. And that's an issue. And the insurance companies will want to monitor your behavior: oh, do you want the lowest-tier life insurance? Well, we need to see how you drive. And we need to monitor your accelerometer readings and make sure you're not jumping up and down too much or whatever. Yeah. One question that I have — and not being an academic, I'd be curious as to your insight. You gave the example of IRBs asking where you're gonna lock up this material, referring to physical material.
So if a company like Facebook, which has millions of users and teams of engineers, struggles with keeping data safe from a technological standpoint, what are the challenges that you see in terms of getting academics and universities up to a level of technological security and proficiency that would make this at all feasible? It'll be feasible. It just won't be ethical until we do that, right? And there will be several years where people run these kinds of studies, and there will be data leaks. There will be liability issues. There will be subpoenas. There will be cases of blackmail. There will be massive embarrassments. And the fewer of those we have, the more of them we preempt, the better. But yeah, we need to start thinking now about what an appropriate level of data security would actually be for this kind of data — not just for the 200 paper-and-pencil questionnaires that we're normally used to locking in a lab filing cabinet. Yeah. So you were asking what's possible and what needs to happen for it to be possible. I think the approach of saying, well, this is gonna happen no matter what, and we just need to think through the consequences, is to me a little bit too techno-deterministic. I mean, we're still in a space — especially in this space, Berkman — where we're talking about regulating technology. And we're also in a space where we talk about moral implications, what kind of lives we value, what kind of lives we want to give people the option to choose. So in this — for some people utopian, for some people dystopian — future, how much possibility to choose will people still have once social norms have shifted towards the idea of, well, of course your health data isn't private in that sense. Well, if you're not opening it up in that way, then surely we can't give you insurance.
You know, we're getting to a space where these boundaries might be shifting, and they need to be, as you say, openly negotiated. I think the role of the academy, however, is not to orient itself towards the kind of tech companies, some of whom have a business model of exploiting data. The role of the academy is, A, to discuss this, but also, B, to question very carefully whether we are actually throwing the legitimacy of the academy behind a particular endeavor. So I don't think we can, as academics, just go, oh, this is the trend and we need to play catch-up with Facebook. I think we need to stand back and say, look, there will be some of our researchers who use this process. We need to update our IRB procedures, but we need to take into account that we can't act like Facebook, because we're the academy. We don't do that. And there'll be others who are basically saying, no, no, no, you know what? We should do interdisciplinary research to try and find ways to regulate this stuff. We should work with technologists to think about how to subvert these things. You know, I think we need a much more diverse approach, rather than just taking a defensive position and saying Facebook's got this much more money, or this tech company is pumping in this much more money. Academia has got, A, other kinds of values, and B, things that are valuable. Legitimacy, and multiple voices, and considering the public benefit — that's part of what academia does, and what you can't necessarily get in business. Yeah, so there's a whole spectrum of responses. One response could be: until we figure this out, maybe there should be a total moratorium on psych apps and psychological or economic data gathering through apps till we figure out these issues. One thing I'm very concerned about is that we have an extraordinary double standard in terms of informed consent between what academics do versus what companies do.
In the limited amount of market research consulting I've done with companies, they don't have any of these IRB constraints. I think it would be sensible for academics to push to have a uniform set of standards for revealing all behavioral data, whether it's to Facebook or to a single researcher or to the IRS or anybody. But that would obviously take quite a concerted political effort in the face of some very large and powerful companies. I would certainly support that, because it's extremely frustrating for us researchers to know that, oh, if we were in the private sector, we could run this study in a day with MTurk — no approval, no worries. I could write an app that would gather all this information, call it a live wallpaper, and people would download it, and it could gather anything I want. No IRB oversight at all. But as soon as you step into academia, suddenly the ethical standards go through the roof. And that inconsistency is a big problem. Yeah. Thank you so much. All these data questions really open a new scope into the world that we haven't had before, and it's a volatile environment. But I don't know a whole lot about psychology in general, and I was wondering if you could explain how psychology deals with this really minute scope in a way that people forget about it. I mean, is that good? And do you want the participants to forget that they're being observed? Okay — but you don't always, right? Like, you don't want them to accidentally disclose information from a really intimate experience where they would say, oh my gosh, I would never have told a single human, right? So if you could draw some line between knowing that you signed up, and forgetting about it, but then red lights flash — that would be kind of the perfect, as long as you have a secure psych app environment.
Yeah, historically the big problem in psychology has been reactance, which is that the participants know they're in an experiment, or know they're being observed, and they behave differently as a result than they would naturally or normally. And so we try to overcome that by trying to habituate them to the experimental situation, or creating a situation that's close to naturalistic. Also, in the paper I mentioned this issue: you'll need to give the subjects the authority to go back and say, you know what? I want you to delete the last six hours of my GPS location data. I want you to delete the last four phone calls. You need to give them that level of control, and any app that doesn't allow that kind of prospective or retrospective data management, actively by the participants, would be a problem. Yeah, Steve. Thanks. Very balanced. I guess I'm posing a question both for you and maybe for a lot of other people in this room, which is: has anyone studied the extent to which technology really drives the invasion of privacy and other liberties issues, as opposed to what's pushing in the other direction — legal protections like search warrants against both searches and invasions of privacy, social norms, and the data deluge: who has the time to go through all of this data? And it's still the case that our best artificial intelligence is not terribly good at extracting humanly meaningful signals out of the huge amount of noise, either from speech or from video. So to make it concrete, just rewind the tape ten years — forget about smartphones. If you had imagined, say in the late 1940s, after 1984 came out, with the experience of Stalinist Russia, what technology would be available even before the smartphone — just the internet, closed-circuit cameras — oh my God, life in the 21st century would be intolerable. It'd be like Stalin on steroids. People would just be so miserably oppressed no one would be able to say anything.
That kind of hasn't happened. Even Stalin, with his very primitive technology of informants and spies — people will always come together, they'll always exchange information; we're social animals. So there's always gonna be a way of infiltrating that. All those film noirs from the 1940s, with the private eyes with the telephoto lenses and so on, show that you don't need a whole lot of high technology to spy on people. One would have predicted back then that it would be impossible for anyone to have an affair, for anyone to cheat on their taxes, for anyone to spread gossip, for anyone to criticize the government, because it would be 1984 only worse. Now, how come that hasn't happened, given that it would be trivial for the government to install a TV camera and a microphone in every bedroom in the country? So that hasn't happened. Why not, and what lessons can we draw from the tug of war between what's technologically possible and what's legally, normatively, or practically pushing against that, to make predictions about what the worst-case scenario really is? As a psychologist, I'm thinking I could exploit a tiny fraction of those capabilities, because who's got the time to analyze video and GPS data for an infinite number of variables? I typically have an experimental question with a binary answer. I wanna do a t-test. I don't wanna spend hundreds of thousands of person-hours analyzing it. That's gonna put a natural limit on what I'm gonna do, and there may be many other natural limits. So is the worst-case scenario that the technology would seem to be militating toward just gonna run into a bunch of natural limits in terms of social norms? For the same reason I could go through my wife's purse, I could read her email, I could open her physical mail — but I don't. And likewise, it'd be very easy for Harvard to read 100% of my email. Maybe I'm living in a fool's paradise.
I doubt that Harvard's gonna read all of the 100 emails that I send out every day. Who cares? Who's gonna, you know. So in this push and pull, we have reason to think that not everything that's technologically possible is gonna happen. Are there any guidelines, based on the past, as to how bad this could get? Yeah. I know Judith has thought quite a bit about this. So I think there's a couple of different pieces. I think some of these privacy issues — for instance, how concerned one is about Harvard reading your mail — have to do with a position of power. And so, you know, perhaps an esteemed professor at Harvard has less to worry about Harvard holding something against them than a much more lowly employee, or someone they were hoping to fire. So I think that a lot of privacy is always tied up in power issues. But I think a lot of what the technology does, and what you were talking about originally with social norms, has to do with awareness. And I think a lot of the reactions here have to do with what is, in some ways, always a tension in psychology around subjects' awareness of being studied. Part of what is so fascinating about this for a psychologist is that it gives you this ability to study people with very, very little awareness of what's going on. Either they really don't know about it, or, even if at some level they were informed about it, the whole point of doing it with the phone is to make it easy for them to forget about it, and to behave in ways and reveal things that otherwise they would choose not to. And so I think that where it comes up against a lot of other technological issues with privacy online in general is that our usual understandings of privacy are very physically based.
And a lot of things around phone technologies and online are about how we behave in certain ways because our understanding of our audience tends to be very wrong once you move into a digital realm. And so you behave in ways that often go unnoticed, because you don't have bad repercussions — but for people, especially those in more powerless positions, they can have really bad repercussions, or subsequent ones when you're trying to get hired, or trying to get insurance, and you don't realize how much you've revealed. So I think that's a piece of it. And here, a lot of the questions I have are that you do seem to go back and forth between this notion of informed consent and talking about downloading a wallpaper and being unaware of it. So I think that's one thing it might be useful for you to clarify, because there's that other line of saying, if people were to see their own record, they would be informed in ways that now we very seldom are. Yeah, one really tricky thing about the informed consent is that you have to describe it at the appropriate level of analysis that people care about. So if you say, we're gonna scan your accelerometer readings, that sounds innocuous. If you say, we're gonna be able to figure out just how lazy you are, right, and how far short of your exercise goals you fall — if you say, we're recording your geolocation, that sounds innocuous. If you say, we're gonna be able to tell exactly how often you go to your liquor store, your cocaine dealer, your mistress, how late you are picking your kids up from school — your wife might wanna know that if she's divorcing you. We're gonna be able to tell how often you break the speed limit, based on your velocity data. That's the relevant level of analysis. And so it's not just what sensors you're recording from — you also have to divulge what kind of stats you're running on them.
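[Editor's note: To make that "level of analysis" point concrete, here is a minimal sketch — the function names and the toy GPS track are hypothetical, not taken from any actual psych app — of how innocuous-looking raw geolocation fixes become a revealing inference like "how often you break the speed limit":]

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeding_episodes(track, limit_mps):
    """track: list of (unix_time_s, lat, lon) GPS fixes, in time order.
    Returns (start_time, end_time, avg_speed_mps) for every pair of
    consecutive fixes whose implied average speed exceeds limit_mps."""
    episodes = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order timestamps
        v = haversine_m(la0, lo0, la1, lo1) / dt
        if v > limit_mps:
            episodes.append((t0, t1, v))
    return episodes

# Hypothetical track: one fast segment, then one slow (walking-pace) segment.
track = [(0, 40.0, -75.0), (30, 40.01, -75.0), (90, 40.0101, -75.0)]
print(speeding_episodes(track, 29.0))  # 29 m/s is roughly a 65 mph limit
```

A participant consenting to "geolocation recording" is really consenting to every derived statistic of this kind; the disclosure arguably has to describe the inference, not just the sensor.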
At the moment, the data deluge is so overwhelming that it gives you a kind of protection, in a sense — like, yeah, the NSA is recording every email in America, but they just don't have the manpower to analyze it all, or the AI to do it automatically yet. But once they're able to convert that overwhelming data deluge into a trickle of truly useful and interesting and revealing data, then that's a problem. So, yeah. I think Professor Pinker hit one of the key issues here, which is that he's talking about hypothesis-based testing and you're describing data mining. And I think a lot of the conflict comes from that. And I think it's also a false dichotomy when you say, well, we can ban all psych apps until we figure this out. I think that's at some level a bit silly. I mean, if there's a perfectly legitimate psych experiment approved by an IRB, and a smartphone is the sensor to do it with, why not? But there's a large gap between that and saying, this is what we can do and we need to think about it. But to respond to the privacy thing: I sat through a wonderful symposium on the law of privacy here at the law school. And those scholars would tell you that we already live in a surveillance society, that the data is being captured. And if you're looking to Stalin and 1984 as the model, you're missing it. So, I mean, things are not quite as simple. Yeah. No, obviously there are lots of concerns, and this takes a lot of thinking through. And you're certainly right that, to some extent and in some ways, this is going to happen. But the way in which it happens, and the way we talk about it, is important, if only because you mentioned the social consequences of awareness of ubiquitous collection of data and the like. That actually meaningfully affects the legal rights of everybody in this room.
As far as I understand it, the capstone holding of the chain of privacy law cases is that reasonable expectations of privacy — and the reasonableness of that expectation — is the important inquiry. And to the best of my knowledge, there are no cases pending anywhere on the issue of: well, you're carrying around this data collection device that you totally consented to, and you're peripherally aware of this, that and the other thing, so your expectation was not reasonable. So when we do things that promote, or sort of countenance, anything but a very stringent standard of informed consent, we're corroding everybody else's privacy rights. Yeah, this is one reason I'm worried about the Pivothead glasses, right? If you saw the YouTube footage of the Russian meteorite — it was recorded so broadly in Russia because everybody in Russia has a dash-mounted video camera for insurance purposes, because there are so many drunk-driving car accidents in Russia. So that's great, because we got all these observations of the meteor as it came down. Well, I think it's quite likely that soon the same logic will push everybody to have HD video recording all the time from their glasses. Just because you never know — you could get into a fight, somebody could attack you, there could be a mugging, you might see something that it's profitable to upload to ABC News, right? And once everybody's recording everything, and if they're also streaming it to your psych app for analysis, then it's not just their informed consent that's the issue — it's everybody they see throughout the day, all day, every day. And then privacy is gone, at least in public spaces. I don't know what the solution for that is, yeah. Especially if Apple comes out with a crazy wristwatch that does it all and is always there — you don't have to pick it up, pull it out of your pocket or anything like that. It's just there, and hopefully waterproof and all that sort of stuff.
Yeah, so the smartphone is not necessarily the final form factor, right? This is supposed to be the year of the smartwatch. Right — it's all shrinking down. Yeah. Just back to the issue of commercial companies and all this: is there anything similar in psychology to the 2007 article by Savage and Burrows, "The Coming Crisis of Empirical Sociology"? Do you know that article? It's made a lot of rounds in sociological circles. It says that companies now have all this data, completely unconstrained by IRBs, and they can analyze it. And in fact, you have sociologists and other social scientists going into companies now, on the Facebook data team or the research teams of other companies, who maybe have made a devil's bargain: they can access all this data and satisfy their personal curiosity, but they're no longer working in the public interest the way academia tries to do. Is there any concern in psychology that, oh, companies are poaching us from academia and putting us to use for the evil end of figuring out human behavior so they can exploit people as effectively as possible? Or is there really no fear that companies will actually be able to do psychology well enough to exploit people? Oh, I have a huge fear that the best behavioral research currently being done is being done by Google and Facebook, and that psychology is gonna become largely irrelevant to understanding human nature, and that the deepest insights into human behavior, at least as it's commercially relevant, are gonna be privatized. Yeah. I wanna ask you whether this methodology is appropriate for any given research project. I mean, one of the things it doesn't answer is the why questions, which tend to be what I'm more interested in, and which tend to be addressed by qualitative and ethnographic studies, where you actually go and immerse yourself in the world of people and try to explain why they're doing the things they're doing.
And I think that these kinds of data sets will be very limited without a mixed-methodology kind of approach. Yeah. Well, I did wanna mention that I've focused on the big-data quantitative side, but you can also make your smartphone apps as qualitative and interview-based and interactive as you want. You won't have the same face-to-face participant-observer status. But if you wanna do context-triggered interviews — say you wanna track 30 people at a workplace as a communication researcher or sociologist, and some event happens, and the smartphone triggers some response, and then you wanna talk to the person over video Skype, right? You could do that. And it extends your reach qualitatively, not just quantitatively. Yeah, Steve. I guess I'd pose my question a somewhat different way, because I've read a lot of these discussions and they all seem to be acts of imagination. Like, I can imagine 1984 happening — well, no, I don't think it will, but here's the worst case, let's play it out. And so it's all kind of science fiction and competitive scenario visualization. But, I guess my question is, is there some way to make it empirical, in the following sense? Namely, you specify a number of bad outcomes that are independent of the technology that can gain the information. How many people are fired from a job for criticizing a superior? How many affairs are uncovered, assuming that that's a bad outcome? How many people are put in jail just for speech? Whatever your bad social outcomes are. And then look at changes in technology, maybe over time or over space, and just answer the question — forget about the bad things you can imagine — have technological advances made people worse off by this criterion, or better? And so here's just one example. In England, starting about ten years ago, pretty much every public space has been covered by closed-circuit cameras.
How much worse did that make life in England, in terms of getting arrested for matters that we would like to hold private? Or was it just completely irrelevant — because there weren't enough people to watch the videos, or there were legal protections, or whatever? Or just: gentlemen don't read each other's mail, and gentlemen don't look too carefully at videos unless they have a good prior reason. Just empirically, do we have any data on this, and would it be a good idea to actually put our speculations to an empirical test instead of just letting our imaginations run wild? Yes. I know we need to wrap up now, so I'm gonna let you have the last word. If you're going to do a study like that, it's also useful to look at what's called the chilling effect of the sense of surveillance. So there are both the actual people who are caught because of the surveillance technologies, but a lot of it is understanding the social implication of what happens when people feel they're under surveillance. Right — I'm just saying that a big part of it isn't necessarily being arrested, it's the overall change. So anyway, I would like to say thank you. I think there are still a number of people who have questions. We need to vacate this room, but what I would suggest is: continue this online, continue it on Twitter, continue it via email, continue it in the kitchen. And thank you very much, Jeffrey. Thank you.