of Future Tense. Future Tense is a partnership of New America, Slate, and Arizona State University, and our goal is to explore emerging technologies and their implications for public policy and for society. We do this in two ways. We have a channel on Slate, where we add daily commentary and news about technology, and we like to get together in real life with live events here in Washington and in New York. We actually have two upcoming events that you might be interested in. On October 5th, we'll be discussing World Without Mind: The Existential Threat of Big Tech, which is by Franklin Foer. That'll start at 6 p.m. And on Tuesday, October 17th, we'll have a conversation with journalist Liza Mundy about her new book, Code Girls, which explores the secret history of female code breakers during World War II. A couple more quick housekeeping notes. Please silence your cell phones. That was in my notes before that cell phone went off, for the record. Also, this event is streaming live on the New America website, so when we get to the Q and A, please wait until the microphone comes to you to ask a question. Otherwise, people watching online won't be able to hear it. So we're here today to talk about mental health and tech. And like many of you, I'm sure, finding ways to improve our mental health care system is a very personal issue for me. My mother suffered through much of her adult life from poorly controlled bipolar disorder. And over the course of the years, I developed a kind of kluge system to try to monitor her mental health myself. I looked for emails sent at strange hours. I looked for rambling, run-on sentences and weird punctuation. I took note of her voicemails: was she quick and breezy, or did she ramble on for three minutes about something I couldn't quite follow? But maybe the best indicator for me of her mental health was Netflix. I could tell that if she was watching a lot of Alien and Angel and plane crash documentaries, we might be in for a bumpy ride. If she repeatedly started a bunch of shows and movies in a row and couldn't commit to something, I knew she was having difficulty focusing. And so those little cues would help me keep on top of things. And if I hadn't heard from her in a couple of days, I would check Netflix. And if I could see that she'd been watching something, I knew she was alive, which was really useful. So my monitoring might have been klugey and uneven, but it wasn't completely out of line. Today we're here to talk about using technology for just that: to help monitor, as well as diagnose and treat, mental illness. We're such creatures of habit, particularly when it comes to our personal devices. They can monitor us and notice, maybe, when the train is just barely starting to go off the rails. Your phone's GPS sensor can tell us whether you've left the house recently. Changes to text patterns could indicate depression or anxiety or even psychosis. And that's just passive data collection. Researchers like those here today are also using virtual reality to help treat people with PTSD and with phobias. Technology also offers patients new ways to stay in touch with each other and to offer each other peer support, which is crucial for people who might be inclined to retreat into themselves. And while this is all very exciting, it's also, as many of you know, extremely preliminary. And particularly when it comes to data collection, there are enormous privacy concerns.
So today we'll discuss what is actually possible when it comes to using tech to treat and diagnose mental health problems. So let's get started with our first conversation. Speakers, when I say your name, please join me on the stage. First we have David Dobbs, who is an award-winning journalist whom I've been editing for about 10 years now, who writes features and essays on blindness, transplants, neuroscience, and mental health. He's written for National Geographic, the New York Times, and for Slate, and recently wrote about mental health and tech for the Atlantic and for Pacific Standard, a piece that comes out on Tuesday, I believe. John Torous is the co-director of the Digital Psychiatry Program at Beth Israel Deaconess Medical Center, where he's also a staff psychiatrist and clinical informatics fellow. He serves as editor-in-chief for the leading academic journal on tech and mental health and leads the APA's workgroup on the evaluation of smartphone apps. And then we have Steven Chan, who is UC San Francisco's inaugural clinical informatics fellow and an actively practicing physician in psychiatry and behavioral sciences, specializing in mood, anxiety, PTSD, attention, and psychotic disorders. He also researches in the areas of digital mental health and applications for cultural psychiatry and underserved minority health. He also designed and developed interactive voice interfaces at Microsoft, which is pretty cool. So I think to start off, John, I was wondering if you could maybe give us kind of a lay of the land. What are the different sorts of technologies we're talking about here? So I think it's wonderful to be here. And to all of you online, hello. I'd say mental health technology is a really exciting space right now. There's so much happening and there's so much potential. And I think it's always interesting to ask what's possible, where are we now, and how are we getting there? So, without ranting for a couple of days and going on and on: I think we're at a point now where a lot of people have smartphones, wearables, different sensors. Not everyone, I should say; some people are still excluded, they may not have technology. But we're getting to a point where, as a society, people have access to these phones and sensors. And because of that, we can collect a lot of information very easily now. We can send surveys about mental health on phones. The phones, by virtue of being phones, can collect a lot of information about where we are, who we call, how we use them, who's nearby. I'm not gonna talk about virtual reality because we have an expert speaker who's gonna talk about all that, but we can certainly do a lot of things that may augment or improve mental health. And I think we're at the early stage where we're now collecting all this digital data, we're learning what it means, and we're saying: here's what we've been doing before, it may be working well or not, but we have all this new data, all these new things. How do they really work? What works well? What doesn't work well? What's safe? What's effective? And I'll just say, just because we can collect all this digital data doesn't inherently mean it's gonna be valuable in itself. It has to be proven. Just like we can collect an entire genetic code today: you can know every gene in my sequence, and it doesn't mean that you know who I am, it doesn't mean you know my personality, it doesn't mean you know my predilection to mental illness.
Having information is a great first start, but now we have to do our due diligence and say: what does it mean? How should we use it? And what are the best ways? So that's my fast summary. What different kinds of technologies are we talking about here? The apps to diagnose, yeah, can you tell us a little bit about that? So at least the way I try to break it up, we can make two artificial buckets: apps and technology working more on the diagnostic and monitoring side, and apps and technology working more on the therapeutic and intervention side. If we're looking at the diagnostic and monitoring side, thinking of just the smartphones, you can almost think of three use cases. The first could be we're sending surveys, so active data. You have to actively respond to that survey. If you don't take that survey, there's no response, and that requires participation, which we can talk about later. If you send me a survey every day, I'm gonna stop taking it eventually, but maybe that's just me, maybe that's human nature. In this diagnostic and monitoring bucket we can also collect passive data, and by passive data we mean we can collect sensor information from the phone. The phone, by virtue of being a phone, knows where you are. By sending a text message, it knows that you were sending a message to someone. It may know how many characters it was. It may know that I'm calling the same person 20 times a day. It may know I'm not calling my mother enough. It may know I'm calling her too much. I won't disclose that. So there's a lot of sensor information, the phone being a phone. There's this idea of digital phenotyping, a term that JP Onnela coined: these digital biomarkers, these digital signals, digital fingerprints of what could be happening. Again, just because we can collect those digital fingerprints doesn't mean that they're inherently going to be useful. There's a lot of work to refine them and say what they could do. In the therapeutic bucket, even just looking at smartphone apps, there's a lot of different things that technology can do, from helping schedule appointments, to medication reminders, to offering augmented forms of therapy, offering cognitive behavioral therapy on a phone. You could certainly offer mindfulness. I would argue that if your smartphone is buzzing, telling you it's time to be mindful, get your phone out, just be mindful right now, you missed your mindfulness, that may be counterproductive. But it may work for some people. And I think that's part of the question: who do we target these things to? Who's going to respond best to a technology intervention? Who is it going to be not useful for? I think overarching both buckets, diagnostic and therapeutic, there's a question too of who's seeing your data. You're giving up a lot of personal data about yourself. You're saying I may have a mental illness, I may take this medication, I may be in this location, this is where I sleep, this is where I work, these are my friends, this is my social network. And certainly when you go to a psychiatrist, a therapist, a mental health office, you're protected. There's HIPAA, there are federal privacy laws that help protect your information. And a lot of times these digital things you may be using don't offer those same protections. It looks like healthcare, but it's actually in a different bucket. There are fewer federal regulations, there's less protection.
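To make the passive-data bucket a little more concrete before moving on: here is a minimal sketch, in Python, of how raw phone signals might be turned into the kind of daily behavioral features described above. The data layout, column names, and features are hypothetical illustrations, not any real product's pipeline or the panelists' own code.

```python
# Minimal sketch of "digital phenotyping" feature extraction from passive
# smartphone data. All column names and data shapes are hypothetical.
import math
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def daily_mobility(gps: pd.DataFrame) -> pd.Series:
    """Kilometers traveled per day, from GPS fixes with columns
    timestamp (datetime64), lat, lon."""
    gps = gps.sort_values("timestamp").reset_index(drop=True)
    steps = [
        haversine_km(gps.lat[i - 1], gps.lon[i - 1], gps.lat[i], gps.lon[i])
        for i in range(1, len(gps))
    ]
    out = gps.iloc[1:].assign(km=steps)
    return out.groupby(out["timestamp"].dt.date)["km"].sum()

def daily_sociability(calls: pd.DataFrame) -> pd.DataFrame:
    """Calls per day and unique contacts per day, from a call log with
    columns timestamp (datetime64) and contact."""
    g = calls.groupby(calls["timestamp"].dt.date)
    return pd.DataFrame({
        "n_calls": g["contact"].size(),
        "n_contacts": g["contact"].nunique(),
    })
```

Whether features like these track anything clinically meaningful is exactly the validation question raised above, and collecting them at all leads straight into the privacy questions that follow.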
So in using these services, what are you giving up? Sometimes I tell patients that the price of a free app may be your personal mental health information. And that's not a judgment of whether you should use it or not, but you should at least be aware of the contract you're signing; you should be able to make an informed decision about it. So again, there's a lot of great stuff out there; what you're giving up to use it is often unknown. And one project with the American Psychiatric Association that Steve Chan and I work on is a rubric of evaluation to help you make an informed decision about a smartphone app. We're not gonna tell you, this app is good, this app is bad, because it depends on the situation. And even if Steve and I could rank every app out there together, if we spent a year looking at all 10,000 apps, by the time we finished, they'd all update, and everything we ranked for you would not be useful. And just like there's no A-plus therapy, you don't say therapy X is the one to do, or medication Y is the one to do, or exercise plan Z is the best one; these are personal decisions. So it's very hard to say this app is the app to use. It's dynamic, it's changing, it needs to be personalized. So with the American Psychiatric Association we've been working to help people think: what are some questions you should ask when you're looking at a new technology that can help you make an informed decision? So we're not gonna say, again, use this or not, but be aware of: what is the privacy and safety? What is the efficacy? What is the evidence behind it? Is it usable? Is it something you can stick with? Is there interoperability? Will data get back to the right people, or are you actually fragmenting your care, because you're telling one app what your medicines are, one app is doing your therapy, one app is doing appointment reminders, and no one has any idea where your treatment, where your healthcare, is heading. I really wanna get back to the question about privacy and security, because I think that's an important one. But I'm also glad that you mentioned how personal this is, and how things that are designed for one person might not work for somebody else. And Steve, I know you've been involved with thinking about how these apps are designed and bringing patients into the process. Could you talk a little bit about that? Yeah, so one of the things that's really, really interesting about having technology in mental health is that it really increases access for so many people. I trained in Sacramento. Sacramento is the capital of California, and it has a great ethnic variety of folks. I worked at a Pan-Asian clinic, where we had so many different languages: Hmong, Vietnamese, Laotian, Cambodian, Chinese, et cetera. And what we noticed was that just having the ability to access mental health care has been a challenge for them. We'd have to drive out to take folks to outings, take folks to their appointments. And so designing interventions that work with not just their ability to use smartphones, but also their culture and their language, and understanding those aspects, is important. So part of the American Psychiatric Association criteria is how usable are these technologies? Because as John alluded to, you can have surveys and bug people with surveys, but then they won't wanna come back and use those applications.
It's a similar thing with virtual reality: my understanding is that if it causes some dizziness or vertigo, then people are less likely to use it. But if it's usable and it's easy to use, and someone is helping them and coaching them to use the technology, then they're more likely to accept it. So having the usability piece is absolutely important for these apps and devices. Yeah, I think I read recently somewhere that India has 1.3 billion people and 5,000 psychiatrists. We have 25,000 psychiatrists in the US. So you can see how, as Tom Insel says in a piece that David wrote for The Atlantic recently, we can't get as many psychiatrists as we need. So I guess, and I'm interested in what any of you think about this: how can you scale up what so far seems to be very small efforts? Yeah, you mentioned India, specifically the shortage of mental health professionals. And we see this in the developing world. There was an article in Health Affairs a few years ago that said that the ratio of smartphones to mental health professionals is quite high. And so could we use those smartphones as a way to deliver therapeutics? Another thing to consider, and I will just leave the audience with this thought, is that if you look at any sort of mental health professional, they do a whole variety of different tasks: not just scheduling someone, but interviewing them, diagnosing, assessing, then treating. Can you split those tasks up and have other folks do different pieces, such as the education part? Could you, say, use an application or some sort of website that delivers that education for them or gives them homework too? So splitting that up can potentially make more efficient use of a limited resource, which is the mental health professional. Has that been your impression with Dr. Insel's work? Yeah, so Tom Insel: I wrote a profile of him for the Atlantic, in the July issue. And he was for 13 years the head of the National Institute of Mental Health and got restless, partly because of the long lag and the uncertainty over whether the work they did there would actually help patients. He gave a talk once and someone in the room got impatient and said, you don't get it. Our house is on fire and you're talking about the chemistry of the paint. And so that helped inspire him to move to Google, temporarily, where he headed the mental health team at Verily, which is their health unit. And he started to plan out and build a mental health effort there based on collecting smartphone data and then trying to figure out how they might use that to help patients. So one model he used, this was started by a woman named Danielle Schlosser, at UCSF, now at Verily, was a program, an app called PRIME, and it was for people who had had first psychotic episodes. And there were about 50 people in the group, and at any given time two or three, sometimes more, professionals, not all of them therapists, but different levels. And the idea was that this group would give these people a safe place, private but open if you will, where they could share with peers. And I think one thing whose potential is easy to overlook here is the power of the smartphone simply to connect people who are suffering from a given mental problem or challenge, so that they can compare notes.
This is a tremendous value to people who have had psychosis, because the first thing that happens when you become psychotic is your entire social network falls apart in about 10 minutes. So I saw this at work there, and it worked in a very interesting way. They had one example where a guy who wouldn't have spoken up ordinarily spoke up in this group, because it was safe. He was feeling a little different. They ran some algorithms to look at whether the syntax in his texts was changing. It was, and they upped his medication and he stabilized. Now, no control, maybe he would have stabilized anyway, but this is one model of how it would work. Fairly low labor cost from the psychiatrist there; he didn't need a psychiatrist to do all that. He kind of brought it up himself. And another instance where I've seen this work, where I've talked to someone for whom this worked, was a woman named Nev Jones, who is in the story that's coming out next week that I wrote for Pacific Standard, who was a grad student in philosophy when she had her first psychotic episode. One thing after another was bungled, and she went to hell and came back out. But part of what got her back out was that with a simple Google group, on her own, she built a network of other academics who had had psychotic episodes. Some were undergrads, some were grad students, postdocs, and some were professors. And simply connecting to other people who were doing what she wanted to do was a tremendous boost to her, and she credits it, along with a really good therapist and a good psychiatrist, with helping her get out of that hell. I think this points to the idea that these are packages, these are broader interventions than just technology; certainly technology in this case is a useful mediator. But even in PRIME, which they developed, if you look at the people supporting it, you can't just buy it; the app itself is just an app, a piece of code or technology. It's the fact that there's people behind it, it's connecting people, that's what's different. So in some ways you say, well, is this gonna be cost effective? It's hard to know, because what if we now need more people to support this, to monitor it? So it's a question. We know that human connection, one-on-one talking, these are helpful things for mental health, for all of health. Are we using technology in this case just to facilitate connections happening, which is a good thing? Is that gonna be a scalable solution that's gonna be cost effective, that we can deploy globally, or is this gonna be another layer to connect people to? I think these are questions we're learning; we don't have the answers yet. And at Verily, they see it very much that way, that this is a small experiment, a feasibility thing. And before everyone or anyone rushes off to try to sign up for PRIME, I should say it's not even beta, it's alpha, it's a research project. So it's not open as it were, it's an experiment. Yeah, I'm really interested in the fact that Google, and I think in your piece you say that five to 10 companies in Silicon Valley, are involved with this space in some way. At least that, and probably more since I closed the story. Right, so is that something that we should be excited about, or is it something we should maybe be a little trepidatious about? We've seen Silicon Valley likes to move fast and break things, but when the things are people's minds, that can be a little bit scary.
I think all of the above. That's my take on this, having written about it from several different angles and been around the mental health world actually all my life, as my mother was a psychiatrist, no jokes please. You're in a safe space here. Yeah, there's tremendous potential here for good and tremendous potential to really screw things up. We've talked about some of the good this can do. It's got huge challenges. Let's say PRIME was perfect. Still, how do you scale it up? It's 50 patients, with half a dozen people at least running it. How do you scale that up? But the sort of bomb we can't overlook here is the privacy issue, which I wrote a piece for Slate about that was out, what, Tuesday. That was only two days ago. About how bad the firewalls around our medical information are. There were 112 million patient records leaked last year by insurance companies: breaches, a lot of them hacks, trying to get the data to ransom it or for other means. And about an equal number in 2016 and 2015, over 100 million. This is an Equifax-scale leak of medical information occurring every year already. And this is from places, some of them health insurance companies, that are ruled by HIPAA. In this new sector, it's unclear where HIPAA applies and where it doesn't. John did a wonderful study just of dementia apps: hardly any privacy strictures on those. And it seems to me this stuff is so explosive. It could cripple the thing before it got started, and it would be irresponsible not to have protections ready. So, what our team did, with Ipsit Vahia and Lisa Rosenfeld, is we actually took all of the top dementia apps on the commercial marketplaces and we printed out those privacy policies. And we actually read all the privacy policies to see what information people are giving up their rights to when they're using apps that are collecting personal health information about dementia. And you can imagine someone with dementia may have a difficult time in the first place understanding a privacy policy, but in some ways it didn't matter, because over half the apps had no privacy policy. So the companies that were making these apps didn't even bother to put out a signpost and say, here's how we're using your data. Of the less than half that even offered a privacy policy to tell you what's happening to your personal, your dementia data, most didn't offer the protections that people would want to know about. And you almost say, if we can't put out good signposts that tell people what's happening to their data, that's not a good place to start. Healthcare, especially mental health, is based on transparency and trust. You can tell and disclose very sensitive things that you wouldn't tell other people. If we're now offering a world where we're trying to hide the fact that information's being collected about you, where we're not even gonna tell you what's gonna happen to your data, that's not the foundation to build this on. And if we do build this on a foundation of not telling you, of collecting your health information, marketing it, selling it, people aren't gonna be honest with these devices. People aren't gonna use them how we want, and it's not gonna be helpful. So I think we're at a very, very early stage in terms of that. And I think we'll know the field is really maturing when we see that we're having honest conversations about: what information about you are we collecting? Where is it going? Who is seeing it?
How do you have access to your own information? It shouldn't be that someone's collecting it and selling it and making money because you have a mental illness. So to what extent are apps in this space actually regulated by the FDA right now? Oh, nothing. That's a very good question. Right now the FDA is being overwhelmed with a lot of these apps. They've only been able to approve a handful of apps. In fact, I think only one app in the mental health space, for substance use disorders, was approved, just a few weeks ago. reSET? I'm sorry? reSET? reSET, I think, by Pear Therapeutics. But since then they've decided that, instead of certifying individual apps, which as we mentioned can change at any moment (it's not like a book, where we can write it and publish it and it's frozen in time; if we review an app, it can continuously change, and with all these different permissions, the walls are quite porous), they instead are certifying the developers, and saying: well, are the developers practicing things in a way that's sound and clinically helpful? And I think their mentality is that these apps are low risk; they're not diagnosing something that they deem acute or critical, like an EKG would. But at the same time, mental health... I mean, John, you've seen some of these studies where apps would recommend things like, oh, if you're anxious, you should grab a bottle of alcohol. Right? I think we've seen stuff like that too. Sometimes, too, there's unintended consequences. We're using new technologies, new platforms to reach people, and one study with very good intentions was trying to help cut down on college drinking. There was an app, and they would have students report how much they drank, and the app would then say, you know, you've reached your limit for today, you should cut back, and that makes sense. And then they went back and asked, what's actually happening now that we've released this app on a college campus? And they found people were drinking more. And they said, but the whole point was to tell people you've reached your limit. And they found that people had turned the app into a game, and the idea was who could get the highest score on campus. So again, a wonderful idea for using technology; in that case, it didn't work out. But certainly having a wonderful idea is the first step, and we should explore them. And this was in a study; they were able to stop it very quickly. But certainly before we're rolling these things out at a national, global, population level, just because it sounds good doesn't mean it's a good idea. We have to ask: what's going to happen? What are the unintended consequences? So in a minute, we'll move on to audience questions. But I wanted to ask one more thing first. What is a digital placebo, and what does it have to do with what we're talking about here? So a digital placebo is one of my favorite terms that no one else in the world seems to love as much as I do. We wrote about the term in Lancet Psychiatry a couple of years ago. And the idea is that in all of health care, and especially in mental health, behavioral health, there's a placebo effect. We realize that sometimes people get better because of the expectation of care and the expectation of treatment. And certainly when you are given an app, there's expectation. You have a high tech intervention, high tech monitoring, sensors, big data, machine learning.
These are things we wrote about in a recent article with Patrick Staples. But the idea being that there's expectations, and sometimes just the fact that you think you're using an app makes you feel better. And we actually published a meta-analysis of all the depression apps, in World Psychiatry, with Joe Firth; it's free to access online. And if you look at all the apps that work on depressive symptoms in randomized controlled trials, comparing trials that have a waitlist control, where you just get nothing, and trials that have an active control, there's a difference. And by that I mean that if you have an active control, you feel better compared to if you have nothing. So even if you have a placebo app, you actually improve too; your depressive symptoms get better. So I think it means that anytime you're seeing improvements in these studies, or an app says we get you better, you have to ask: is it just the fact that there's expectations? And that's not a bad thing, but it means we really have to demand a high standard of evidence. It shouldn't just be that it made you a little bit better; it's how did it make you better compared to something else? How do you design a placebo app? You can do things like just provide some sort of education or static text, something that isn't really that interactive. Placebo apps are bare basics, right? Bare-bones sorts of things, versus the highly interactive intervention that you can create with an app. But I think the key point is making sure that when you're comparing things, you're not just comparing the app to nothing at all. So that's something to watch out for with these research studies. Something is always better than nothing, except that alcohol app; in that case, nothing was better. And I should mention, too, that both David, and John with co-author Patrick Staples, wrote pieces for Future Tense in the past week about mental health and tech. And if you're interested in reading them, you can find them at Slate.com slash Future Tense. Highly recommend. All right, I think it's time to move on to questions. Hi, my name is Lucia Savage, and not only am I a privacy expert, but my mother was also bipolar. So I have two questions for you; I'll try to do them really quickly. One is, last year HHS published kind of a definitive report for Congress on what's regulated by HIPAA and what's not regulated by HIPAA. And in this mental health space, that's particularly acute, because there's this interesting interplay between state law and federal law. And I guess I'm wondering, for the panelists, do you even know that that's out there? Because the government doesn't really have an advertising budget. So, never heard of it. Anyone heard of it? You've heard of it. Yep, so we've read it, and it actually informed the work we did at the American Psychiatric Association, because that report outlines all the different ways that consumers, or people looking at these apps, may be, let's say not deceived, but may be giving up information they don't realize they're giving up, from marketing practices to how information is stored. So it's actually freely available online as well. It's a white paper, but certainly a very good read. And we'll be talking about it; actually, we have a session coming up at the APA.
We have a meeting called the Institute on Psychiatric Services, IPS, and we have a workshop, Steve and I are doing, on smartphone app evaluation. That will be one of the slides, a picture of that. The second question was for you: the AMA has this new consortium, Xcertia, where they're evaluating the efficacy of apps, and there's a little bit of a privacy and security piece. Is the APA work integrated with that at all? Or is it sort of parallel? So right now it's more parallel. I think we actually have recommendations: we actually have guidelines and questions that people can follow. I think the AMA work is a little more abstract right now; you can't actually bring an app to it and begin to evaluate it, there are no questions to ask yet. So I think they're at an earlier stage. And a variety of associations and groups are working on similar criteria. The Anxiety and Depression Association of America, the ADAA. And there's also the Coalition for Technology in Behavioral Science, CTiBS. They also have their own criteria. So a variety of organizations are looking at things in parallel, with different philosophies. The APA's is primarily for clinicians, so we can say, for clinicians, hey, here are some guidelines you can use. Some associations are doing things more comprehensively, very rigorously. Some folks are doing things a bit more like a Consumer Reports style review. Hi, thank you very much. I'm Kara Smyrick. And my question is about the inclusion of people with lived experience of mental health conditions as part of the teams that create the apps. What I've heard a lot and read a lot is that the apps are created by providers or others and then pushed out to the consumer world, but there's no involvement of the actual consumer, the end user, in the apps. So that's one question, because your panel doesn't happen to have anybody who identified as a person with lived experience, but thank you for bringing up Nev Jones. And then the other question was about how to educate both consumers and families and the public about those privacy issues, and where to look for those privacy issues, so that when they're signing up for these great technologies, or these technologies maybe not so great, how do they know exactly what to look for, like, the first thing being the privacy piece? You know, the involvement of people who are actually going to use these apps is actually critical. And I know that there are some groups that are already creating these programs. The National Alliance on Mental Illness, for instance: they have a variety of, I know they have at least one app, using some of those principles from PRIME. They have an app called AIR, where they have like a mini social network, and you can actually post anonymous messages and get virtual hugs, and not just the likes that you'd usually get on Facebook. And I believe the Depression and Bipolar Support Alliance has also created apps as well. But it's absolutely essential, I think, that we actually use, we actually try out, what we create. I think the point you raised, Kara, is really one of the core issues, because right now people don't stick with these apps. People may download them, but just because you download an app doesn't mean you're going to use it more than once. Just because you use it once doesn't mean you're going to come back to it five times, 10 times. We want these things to be engaging.
I think the lack of people with lived experience has really meant that these things aren't as useful as they could be. I'm lucky that I'm based out of Beth Israel Deaconess Medical Center, so when we're building the apps that we're using in research, we actually can work with the people that we treat, and they're actually some of our harshest critics. So I'll show a screen of the app that we're making, and the people I work with will say, I would never use that. They'll say completely redesign it, or make it this, and we've learned a lot by that process. And I think as we get the voice of more people with lived experience, these things will become more usable and more effective. But it's a huge barrier, and it's a huge problem right now that you bring up. David, do you have thoughts? Yeah, I think this is, was it Kara or Karen? Kara. Kara, okay, it's a good question. For this woman I profiled for Pacific Standard, this is a huge thing, for her and for most people with lived experience, which, for those who aren't familiar with the term, means someone who's dealt with a particular mental health challenge, a mental illness. And she argues very energetically and effectively, you know, yeah, she's a force. She argues very energetically that not just input from, but active, full-partner participation by, people with lived experience needs to be the de facto state of all research and treatment efforts. And having seen the effect that she has had on the many efforts she's already been involved with in her four years of work or so, I can't help but be convinced she's got a real point. So this speaks to something that really needs to be more systemic, but as John was just saying, you can see the effect very quickly if you involve someone: if you're designing an app that's not gonna work for them, they'll let you know, and it's good to know that. I wonder how many of the Silicon Valley efforts are doing this. I'm wondering, how do you build awareness and acceptance and credibility for an app? And is an MD going to prescribe an app, or is there another way that a patient discovers that this is available to them? Oh, go ahead. Oh, so my initial thought was, it's kind of like how we would recommend, say, going to a particular group. So if someone had an issue of addiction, I'd say, hey, check out Alcoholics Anonymous, or if you don't like Alcoholics Anonymous, maybe we can find something that would suit you. Maybe there's a book, and this perhaps shows my age, but here's a book that could be helpful, that you can go on Amazon and read. And we do this. We'd say, oh, here's a book. So we'd also just try to explore different apps, that's what I would do, and say: would this work for you? What kind of smartphone do you use, and do you know how to use it on a daily basis? I had a family, gosh, in an inpatient setting, and they were asking, how will we know if our grandmother or grandfather is having a relapse of their illness? And I would say things like, well, check their sleep, check their appetite, see if you notice any changes. And then they told me, well, would a Fitbit work? And I said, well, it's not medical grade, but it's helpful; it's one data point. So it's part of the whole package, I think, in terms of what we can suggest and recommend.
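One way to picture the "check their sleep, see if you notice any changes" advice in the exchange above is as a comparison of recent nights against a person's own baseline. The sketch below is a hypothetical illustration only; the window sizes and threshold are arbitrary assumptions, not clinical guidance, and a consumer wearable remains, as Chan says, one data point.

```python
# Hypothetical sketch: flag when recent sleep drifts away from a person's
# own baseline. Windows and threshold are arbitrary, not clinical guidance.
from statistics import mean, stdev

def sleep_change_flag(nightly_hours, baseline_n=28, recent_n=7, z_cut=2.0):
    """nightly_hours: sleep durations in hours, oldest first.
    Returns True if the recent average deviates from the personal baseline
    mean by more than z_cut baseline standard deviations."""
    if len(nightly_hours) < baseline_n + recent_n:
        return False  # not enough history to say anything yet
    baseline = nightly_hours[-(baseline_n + recent_n):-recent_n]
    recent = nightly_hours[-recent_n:]
    mu, sigma = mean(baseline), stdev(baseline)
    sigma = max(sigma, 0.25)  # floor so a very regular sleeper doesn't trip on noise
    return abs(mean(recent) - mu) / sigma > z_cut
```

A flag like this would be one cue for a family member to follow up on, among the appetite changes and other signs mentioned above, not a diagnosis.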
I think the first starting point is just this: if these apps want to work as medical treatments, medical interventions, they again should respect people's privacy, and if half of them don't have privacy policies, that's really a non-starter, or that's taking us down a different route. So again, the foundation of health is trust and transparency. I think anything that differentiates itself by saying, here's how we enforce this, here's why you should trust us, here's how we use your data, is a really good start. And you can probably get rid of half the apps out there that don't even have a privacy policy. With one question, you've gotten rid of half the space. And even if no one actually reads the privacy policy, at least it suggests that the company has put thought into it, right? Is that part of it? So, just because they have one doesn't mean it's good. But it's a screening question; at least you can get rid of the half of them that haven't even bothered to tell you. Right. Yeah, and part of it would be literacy too. It's kind of like when we assess for services. Again with the AA example: if I say, oh, you should go to an AA meeting, and they start telling me things like, I can't drive there, I can't get transport or take time off, then we know that's not gonna work. If someone says to us, well, I wanna use an app, but I don't know how the permissions system works, I don't know how to disable access to my contacts, then I'm a lot more careful about that sort of thing. I think we have... The question was, how do you spread the word? Right. Well, I was just gonna say, there's nothing better than hearing from a friend like you that something is great. So if there can be more ways to get people with lived experience to connect with each other, in a society that tends to isolate them horribly, and the sicker they are, the more they get isolated, that would be a win right there. Unfortunately, I think we have to stop there, but thank you so much to our panelists for joining us. Thank you so much. Thank you. So next up, we have a presentation called Virtual Reality, Real Healing, from Albert "Skip" Rizzo. Skip is the director of Medical Virtual Reality at the University of Southern California Institute for Creative Technologies and has research professor appointments with the USC Department of Psychiatry and Behavioral Sciences. He conducts research on the design, development and evaluation of VR systems for clinical assessment, treatment and rehabilitation. Oh, okay. Before I start, a question: how many people have actually tried virtual reality, immersive VR? Oh, okay, about half the room. Well, you're in for a treat, the other half. I brought along a very simple VR headset that runs off a mobile phone, called the Samsung Gear VR. I'm not getting paid by Samsung to mention that. And I'm gonna pass it around, and you can pass it back and forth. And basically what you have is a spherical video shot hanging from a helicopter flying over Iceland, just to give you an idea of the immersive properties of VR. So the way this works: you put it up to your head and it'll sense that your head is in there. It'll start up, but you'll see a still image. You'll be able to turn your head, and there's this little touchpad; you just tap it. So you go like this, and now I have the still image and I can look around. But if I go like this, now I'm in a virtual spherical environment, hanging from the helicopter.
If you get to the end, you'll see a little curved arrow. You align it, there's a crosshair, you just point at that, and it'll bring it back to the beginning. So here you go. All right, so with that, do we have slides? Here we go. So I'm gonna talk a little bit. I've got 15 minutes, I'm gonna talk fast, and I've got a lot of video. I'm gonna talk about how VR has been applied in the areas of mental health and rehabilitation. Now, this is not a new thing. Everybody's excited about VR because the technology has basically caught up with the vision recently; great tech advances. But actually, there's been a ton of work going on since the mid-90s. VR had a period from '91 to '95, I would say, where there was a lot of excitement, just like today: VR is gonna change the world and all that. But the vision was a little ahead of its time. The technology was not mature enough to deliver on it. But there were some things that you could do. And so around 1995, basically the bottom fell out of VR. It was viewed as a failed technology. But people in mental health and rehabilitation kinda hung in there. And consequently, over the years, the technology got better, and concomitantly, the research evolved. So of probably any application area for virtual reality, mental health and rehabilitation has the most evolved scientific literature. So with that said, this is where I'm located, at the University of Southern California. And our lab, the MedVR lab, is an interdisciplinary group of folks. And over the years, what we've done is develop applications across cognitive, motor and virtual human areas. I'll show a couple of clips here. Oh, there we go. I'm gonna back that up; having a little bit of a technical slowdown here. So this is a simulation, of Iraq or Afghanistan, that you would see in a headset like this, that we've developed over the years for treating post-traumatic stress disorder in veterans coming back from these conflicts. Basically helping patients to confront and process difficult emotional memories in a safe environment. For a point of comparison, oh boy, that's what it looked like about seven or eight years ago. You can see the advancements in the graphics; it's gotten much better. But in this case, we're not just treating PTSD with the simulation. We converted it into a cognitive test. So in a militarily relevant environment, maybe after a mild blast injury, you could assess a person's attention, memory, executive function within a simulated environment. So as primitive as it may look, it was useful for that. But our work doesn't just involve military applications. This is one for mental rotation assessment and training. My background is clinical psych and neuropsych. And so we wanted to develop, this is 1997, ways to test and train people's visual-spatial cognitive abilities using a non-immersive but highly interactive VR simulation. Under here, what you have is similar to wearing that headset: that's what a child sees as they look around in a virtual classroom. This was developed to test kids with attention deficit hyperactivity disorder. So it's an assessment tool where the child has to pay attention to what's going on on the blackboard, but meanwhile, distractions: the teacher going to answer the door, a kid in the back throwing a paper airplane, a school bus going by. So it's a controlled stimulus environment for assessing attention under a range of cognitively challenging conditions. The next one is in physical therapy, using a Microsoft Kinect. You can capture the user's movement.
She's not wearing any markers or anything like that. The camera's capturing the movement, putting it on that little avatar. And now she can interact in a game-based environment, as a way to make the very boring and repetitive activities of physical or occupational therapy, say after a stroke or a brain injury, more fun and engaging. And this is all off-the-shelf consumer products. So it's not just the domain of research labs; this can be put into the home, so you can push more of this therapy into a home environment and get people to do it at sufficient levels. We know that doing physical rehab matters; the problem is people don't do enough of it to get the gains. So now, taking that same technology and using it for a different purpose: the Cerebral Palsy Foundation asked us to make computer games more accessible to children with severe motor impairment. So this little girl, this is the first time she's ever played a video game in her life, because she didn't have the motor capability to operate a gamepad or a keyboard or whatever. And the only movement she had real volitional control over was this movement; she could do this reliably. So the camera is tracking that movement and emulating the action of a gamepad, so she can play this little shark jump game. We've now evolved this so we can do things with, say, the full-on Xbox. Say a car racing game, and say you're in a wheelchair. If you can shift just a little bit in that chair, you can steer the car; you lean forward, you can go fast; you lean back, you can slow down. Now moving into the virtual human area. This is a virtual patient for training clinicians. Can we turn up the volume a little bit? Well, my wife told me she thought I should talk to someone. She's been pretty concerned about me since a soldier suicide on a base last week. Did you happen to know the soldier? Yes, he wasn't a friend, but I met the Marine once or twice. He seemed normal at the time. I guess I'm afraid I might end up like him. Do you have any plans to hurt yourself? No, but it certainly crossed my mind, especially lately. I just needed it all to stop. Sometimes I can't handle it. So what you're seeing there. Well, I'm not a real person. Hold on, it came back up. What you're seeing is that the USC School of Social Work has a system for training novice clinicians how to conduct a clinical interview, in this case with a possible suicide assessment. And I always like to say it gives novice clinicians a chance to screw up a bunch with a virtual patient before they get their hands on a live one. So we're doing this now with medical students, with a wide range of patients; we've got a whole toolkit for this. This guy uses that same kind of AI technology, but was built back in 2011 for the military as a healthcare support agent. Service members coming back from Iraq, Afghanistan, they don't want to see a shrink. It's like, I don't have any problems. But they're having issues. So they could go online and interact with this guy, the SimCoach, and ask questions about PTSD in a conversational mode. Do some light screening assessment and get a little bit of advice. Maybe at the end of a conversation the character might say, hey, it looks like you're having some problems. If you want, I can pop up a website on the side; you can punch in your zip code and a list of providers in your area will pop up. So I'll just let him introduce himself real quick. I'm not a real person, if that's what you're asking. But I'm based on the personality and experiences of real soldiers and Marines.
I'm still just a piece of software, but I'm getting better all the time. So hopefully I can be a helpful piece of software to talk to. Now, this is going beyond, I've got five minutes left, this is going beyond just the military now. We're doing it with the USC Counseling Center. We've got a whole series of apps that leverage this kind of software architecture. But generally, when you look at VR, going to the definition now, it's a collection of enabling technologies: whether it's a computer or the computer in a smartphone; interface devices, ways to track the user so you can interact; and display technology, like what you're trying there, or a big screen. All with the idea of building controllable simulations that you can put people in. I prefer the more human-centric definition: just a more natural way to interact with computers and extremely complex data. And if we look at the history of human-computer interaction, maybe it's time we went beyond just limiting ourselves to a mouse and a keyboard for an interface, with the power we have. In mental health and rehab, this is the best metaphor, I think: aviation simulation. Just like an aircraft simulator serves to test and train piloting ability, we can test, train, treat and teach human function in controlled stimulus environments, the ultimate Skinner box. When we think about VR, think of the three I's: immersion, interactivity and imagination. You don't always need all three, but you need at least two. So for example, immersion. This is a fellow immersed; you're seeing what he sees as he turns his head in a simulation of an Afghan marketplace, as part of an exposure therapy approach. And this is one for physical therapy. There we go. A little sensor on the front of the headset is capturing hand movement in real time and allowing that captured hand movement, which you see in the stereo pair there, to interact with graphic objects. And what we've done now is create a whole series of upper-extremity and finger therapy applications, but in game-like contexts, where you can see your hand in real time, and you can interact with the game content, and you can do bimanual coordination, a whole range of applications. So I'm gonna jump ahead here; you don't need to see the whole thing. Interactivity. You don't always need to be immersed. This is a balance training activity. Imagine an elderly person with a safety harness practicing shifting right to left to drive that penguin down a ski slope, an open-source free game, by the way. Lean forward to go faster, lean back to slow down, and do it in a compelling, fun way. It doesn't have to be a penguin; you can catch butterflies, you can do a bunch of stuff. But the idea is she's in the lab; she's not immersed in a virtual world, not needed. And it doesn't always have to be on a big screen if you build something compelling and engaging. You can get people to do fun stuff. Two minutes, okay. Finally, with interactivity, here we go: you can interact with virtual humans with various levels of AI. This is an application we developed for the Dan Marino Foundation, focusing on high-functioning people on the autism spectrum, helping them practice job interview skills for getting a job. Sorry about that. You can set these six characters in six different job environments, and they go through a job interview, and you can set them at different levels of provocativeness.
So here's this character, this job, soft touch. In a minute, we'll get into some questions about the job, but before we do, why don't you just tell me about yourself? Now, we can take a different character and make it a little harder, a different type of job setting. This is an entry level position. I guarantee there will be things you won't like about this job. That said, what's the most important thing you think you're looking for in a job? And she can get crankier; this is just for showtime consumption. And here's a little bit more on that: the different characters, the different backdrops, and you'll see a user talk about their experience, and I'll be just about done. And here we go. It's a good program, and it teaches you how to do an interview, and it teaches you how to be in an interview situation with another person. And did you see your performance improve? Did it get better? I get better every time I do it, I get better. So that's recent data from a study with this. So to conclude: in 1994, VR was used for exposure therapy for specific phobias, heights, fear of flying and so on. It was easy to do; you just have to make progressively more challenging environments, and there's not much interaction. But this is a short list of the areas where VR has shown added value as a tool for assessment, a tool for intervention or treatment, or a tool for scientific study. And I think we have a bright future ahead. So thank you. Thank you so much. So we're gonna segue now to our second panel, which is how computer science is reinventing psychiatry. So Skip will be joining us again. And then next we have Munmun De Choudhury, who is an assistant professor in the School of Interactive Computing at Georgia Tech, where she leads the Social Dynamics and Well-Being Lab. Her research interests are in the area of computational social science and questions around making sense of human behavior as manifested via our online footprints. And then next we have Sarah Fineberg, who is an instructor in the Yale Department of Psychiatry. She's a psychiatrist whose clinical work is in public psychiatry; her patients are people who are underinsured or uninsured in the New Haven community. She's particularly interested in novel approaches to quantifying social learning as it occurs in interactions, with her work so far focusing on borderline personality disorder. So this will be about not just how technology is influencing the way we treat mental health problems, but even how we think about psychiatry. So Sarah, I'd love to start with you. What exactly is computational psychiatry? Different things to different people, I would say. But I'll give you an example; maybe this will help. So we're gonna take a slightly deeper dive now into thinking about the problem of understanding mental health and responding to it. And one of the things this will do is talk a little bit about the chemistry of the paint, where we started this morning, but also challenge us to move faster on understanding that issue, so that we can make use of it and keep the house from burning down. So I'm interested in a condition called borderline personality disorder, which is a sort of devastating disruption of social interaction, which leads to a host of other symptoms, but essentially, I think, is rooted in difficulties in interpersonal communication and perception, which can lead to immense suffering for the afflicted person.
And so it seems obvious that we would then study the way that someone interacts with someone else. But most of the studies that have looked at social interaction in borderline personality disorder have shown a person a photograph, usually a black and white photograph of a face, and asked: what is this person feeling, what's going on with them? And I just feel so unsatisfied, and I think most people with borderline and other mental health conditions feel so unsatisfied, with this as a way of understanding interaction problems. So I, and other people in this field, have been trying to move toward thinking about how we study interaction. How do we study what happens when you sit down with someone you've never met, and moreover, and importantly, what happens as you get to know that person and you need to decide how much to trust them, and for what, and when to change your mind about whether to continue trusting them, or to stop trusting them so much? And we're able to use complex games that ask people to interact with other people in specific ways, to really see what happens as someone goes through that process. But this requires a fair amount of technology, to set up the experiment and then to interpret it. And then my hope is eventually to move it into the real world, to ask what happens when you go out and you actually make friends, or try to work, or get along with the people at the doctor's office, these things that are essential to having a full life. So that's the kind of thing I'm interested in, but computational psychiatry really refers, I think, more broadly to the use of different kinds of technology, neuroimaging technology, language analytics, and also real-world ecologic data, to try to understand it all. Munmun, what's your preferred term for the sort of work that you're doing? I think that would be the closest one. I haven't settled on a term for it, because I think there are just so many opportunities in this field. But the way I would describe it is bringing in a lot of the advancements that have been happening in the computer science fields, which have kind of been in isolation from a lot of the other possible areas where they can really have an impact, psychiatry being one of them, and thinking about how we can have some of those bear on the challenges, starting from diagnosis to understanding all the way to treatment and intervention, and how at each of these steps we can think about bringing in some of these advancements in computation, whether in terms of new kinds of algorithms or in terms of new kinds of sources of data, and how that can influence and help with resolving some of these outstanding challenges in psychiatry. So could you give us an example of the sort of projects that you've been working on? Absolutely. So my interest has been in thinking about people's online social interactions, and I think about that in two different ways; we touched on this a little bit in the earlier panel. One way to think about it is as a source of data. And as a source of data it's a very powerful one, especially for mental illnesses, because the kinds of things that people say and do, and their social interactions, all of those are really valuable cues when it comes to understanding or diagnosing somebody's risk, or whether they're feeling better or not.
So a lot of our research has been on the detection side: thinking about how we can develop new kinds of algorithms that would help us discover these kinds of cues about people that may relate to their mental health status, and how we can have them provide insights into people who are likely to be at risk going forward, or are at risk and we want to help them, or see how they're doing over a period of time. And the other way I think about these online social interactions is as what I call community interventions. One of the nicest things about these platforms is that they provide a mechanism for people to connect with other people, and there are all kinds of unintended good consequences of those possibilities. One very good example is support groups. Online support groups are wonderful, and especially when it comes to something like mental illness they can be really, really powerful, because a lot of them provide a safe space, which we also talked about earlier. And mental illness being very stigmatized, individuals who are not comfortable talking about these issues in other settings find a way to express those thoughts and feelings and emotions to an audience who are very likely to understand them, because they have either been through it personally or somebody very close to them, a friend or family member, is experiencing it. So I think of it both in terms of these online interactions as a way to understand these mental illnesses, particularly in terms of their interactions and language, and also in terms of how they can be instrumented to provide support and interaction. And Skip, your work is a little bit different from what Sarah and Munmun are working on, but obviously there's a lot in common as well. I noticed in one of your bios you said that your goal with virtual reality is to go beyond what is available with traditional 20th-century tools and methods. So what exactly are those limits that you've come up against and are trying to move beyond? The first part of that quote was to drag psychology, kicking and screaming, into the 21st century. But well, I think the goal here is to add a level of systematic control of stimulus parameters, but to do that within a real-world functional environment. So what you said about the photograph, that's exactly what drove me into VR. I was a practitioner in brain injury rehab back in the late '80s, early '90s, and my clients, after a head injury, were trying to recover their memory, attention and so on, working with workbook exercises, paper-and-pencil stuff: you're asking somebody to do the thing they can't do, and do it in a boring fashion. And so that's what spurred the idea to use game-based stuff, but also to put people in immersive environments. As well, what you were mentioning about reality mining, measuring people's interaction using advanced data mining of naturalistic behaviors: this is stuff that we try to do, but in a simulated context with a virtual human. We find that people interacting with a virtual character, contrary to what you might expect, sometimes feel more comfortable talking to pieces of software. We've actually done controlled studies where you have a person go in and there's a clinical interviewer agent, and you tell one group that it's just software, that there's not gonna be a human in the loop. The other group is told there's a wizard-of-Oz controller driving the avatar from another room in real time. And you find people feel more comfortable with the AI agent.
They talk longer, they answer more questions, they have lower ratings of worry about impression management risk, and they self-disclose more information and reveal more incidents of sadness or sad events. And so we're able to look at not just what they say but how they say it: analyze vocal prosody, the hesitancy, whether they look down when they answer. So there's such a rich amount of data that we can get about a person from these kinds of interactions, and it may be de-stigmatizing. People aren't worrying about being judged. They can talk freely. So whether you're using VR, or analyzing online content, or interacting with an agent, we have a whole new future ahead. I mean, psychology as a science has only been around 100, 125 years, studying human behavior and interaction in the real world. We're gonna need some time now to study human interaction and behavior in the virtual world, the digital world. You Google all sorts of health symptoms you might be embarrassed to ask your doctor about. So one thing that's interesting about psychiatry is that while it's a science, when it comes to diagnosing someone, there's so much subjectivity. So in some ways, could the big data analytics that you guys are working on help clarify lines between diagnoses, or even create new diagnoses? There's a possibility that we'll revise our diagnostic nosology. At this point, we have very little underlying reason to think that the categories that we use and adhere to are real in any kind of biological sense, at least. However, it's very important for us to come to a consensus about what's what, so that we can talk to each other and do studies that can be replicated. But yes, I'm very hopeful that new approaches to looking at big data will help us figure out categories that are more predictive of what will go on over time and how someone will respond to treatment, and cause us to spend less time dragging people through treatments that are ultimately ineffective. Munmun, can you speak to that a little bit? I think one of the biggest potentials of big data is the idea that various things can be measured objectively from it, and measured at a granularity that is very difficult to achieve with survey instruments. Surveys are really good and well-validated, and they've been in use for many, many decades now, but people get tired of doing surveys and you can't give them surveys very frequently, even though they are very important to chalk out a treatment plan. So how do we do it? That's where big data, and all of the things that can be had from big data, can be really powerful, because without the person being actively engaged, you can gather some of those measurements at a fine granularity and in a longitudinal fashion, and even use them to do predictive work: forecasting things that haven't happened yet. That's one of the biggest potentials of these kinds of methods. You've worked with tracking people who are postpartum, for instance, right? So how does that sort of work in your approach? Yeah, so we were really surprised, in a positive way, to see that if you just look at the Facebook data of pregnant women over the course of their pregnancy, you can build a predictive model by looking at those language markers and other measures of interaction, and predict whether they are likely to be at risk of postpartum depression before it even happens to them.
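The phrase "language markers" can be made concrete with a small feature extractor. This sketch uses invented posts and illustrative features (first-person pronoun rate and late-night posting share, not the published feature set) to show how a week of activity might be reduced to numbers a longitudinal model could track.

```python
# Reduce one week of (hypothetical) posts to simple numeric features.
from datetime import datetime

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def weekly_features(posts):
    """posts: list of (timestamp, text) tuples for one week."""
    words = [w.lower().strip(".,!?") for _, text in posts for w in text.split()]
    pronoun_rate = sum(w in FIRST_PERSON for w in words) / max(len(words), 1)
    late_night = sum(t.hour < 5 for t, _ in posts) / max(len(posts), 1)
    return {
        "post_count": len(posts),        # activity level
        "pronoun_rate": pronoun_rate,    # self-focused language
        "late_night_share": late_night,  # posts between midnight and 5 a.m.
    }

week = [
    (datetime(2017, 9, 30, 2, 14), "I can't sleep, my mind won't stop"),
    (datetime(2017, 10, 1, 14, 5), "lovely walk in the park today"),
]
print(weekly_features(week))
```

A sequence of such weekly feature vectors is the sort of input a predictive model could be trained on, with the later onset (or not) of depression as the label.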
So you didn't even have to wait for the onset of the condition, and you could make a prediction with pretty good confidence before that. And I think these kinds of more proactive approaches to mental health are gonna be really helpful for us going forward. I mean, I know Facebook and Twitter, for instance, have tried to look into ways to use algorithms to flag people who might be at risk of suicide. I know of someone who was once flagged as a potential suicide risk by peers and then was very upset when Facebook told her that someone had done that. I mean, how do you approach someone and say, my algorithm is telling me that you might be at risk of postpartum depression? Is the prediction hard to communicate to potential patients? Very hard, and it raises lots of issues; we talked about privacy a little bit earlier. I think it raises ethical questions when you have these algorithms making inferences which typically are made by therapists and clinicians. So I think these are the nuances that we need to think about. Now that we have shown some feasibility, some proof-of-concept projects across the board showing that these things are possible, how do you actually take them and help people? In my opinion, in the near future the most benefit we are gonna get is probably from empowering clinicians and therapists with these algorithms, instead of other entities engaging with people directly with those inferences. So we talked a little bit on the last panel about scalability. So in your wildest dreams, are these algorithms something that Facebook would run themselves, or Twitter and other social networks would run, or would they only be responsive to clinicians, or should Twitter be hiring clinicians? Can we actually get VR in almost every setting? How do we take this research and make it more widely available? We'll start with Skip and VR, maybe. Well, I think we're at the very beginning stages, but there's been a little bit of a backlash against some of this stuff recently, and some of it's well-informed, some of it's neo-Luddism, I think. But our behavior in these social media settings is always being monitored, and we're being pitched products; we just might not be pitched the product of mental health. So, I think you're referring to Chris Poulin's work with the suicide stuff on Facebook. Well, maybe you don't send the person a warning message: you're gonna commit suicide, run to your shrink. Maybe some of the ads that pop up are about self-help, or providing options that, under the radar, give the person a message that there's hope. So I think one of the keys in understanding how this stuff can work is, and I think you alluded to it, we're not using these things for decision-making, we're using them for decision support. So a clinician gets this extra information that they wouldn't be privy to just from talking to the person for an hour once a week. They're getting additive information, and any good diagnosis and guidance for treatment involves multiple strands of evidence that inform your clinical decision-making. So this is stuff that's not gonna replace a clinician. It's gonna help us do our jobs better, with better information. And I think the problem is, with doing it online, there's gotta be money involved; who's gonna make money on it, and all that. Are we a noble enough society to put this stuff in because Zuckerberg believes it's gonna make a healthier population, or is it to sell more product?
I think it will happen, but it's gonna be an incremental thing, and it'll be messy along the way. Sarah? When you make this point about decision versus decision support, especially around suicide assessment and prediction, I think it's important to actually underline how little success we have in predicting suicide attempts and suicide completion, and how very much we'd like to get better at that. At this point, I think most clinicians have a no-better-than-chance prediction rate. So we're doing badly and we need to get much, much better at this. We're trying a variety of approaches, but any little help that was data-based and effective would be an enormous benefit to clinician-patient efforts to collaborate on preventing self-harm. Any thoughts on that? Absolutely. I'd emphasize the word empower here; I think it's echoed by all of the panel members. We are not saying that clinicians are no longer needed, but rather asking how we can empower them. And it would actually tackle some of the scalability things that we talked about in the morning panel as well: there are so few therapists, and using VR and using big data to help them look at more cases in a limited amount of time would really be a promising future. So one question on a separate subject is something that John wrote about in his Slate piece a bit, which is that a lot of the research here involves recruiting research participants from things like Mechanical Turk. Is that a limit to this? Is there a way to expand the pool of participants beyond Mechanical Turk? I'm excited to be able to get people on Mechanical Turk. So, I mean, yes, I hope so. And also, you know, if we want to do a behavioral study of people with a mental health condition and some controls in the lab, depending on resources, it could take us several years to collect a minimal-size sample. I recently had an experience where I recruited a minimal-size human sample, and I very much enjoyed meeting each of those people and really felt I'd had an interaction with them. But then I tried a study on Mechanical Turk, and in three months I was able to collect 3,000 people. And that changes the confidence I have about my conclusions. And I'm able, by using several different self-report surveys that examine the same construct, to really get a sense that I am studying people who do have this condition, and I can collect language data from them and I can do psychological tasks. I think the idea would be to go even bigger: get my task into an app that we can deploy much more broadly, or at clinical settings across the country where clinicians may have seen and diagnosed the people in real time. And I think there's an idea here of not just deploying therapeutics but also deploying research, so that we get a much broader sample and reach people who might not be on M-Turk. Of note, it does seem like there's a pretty robust clinical population of people on M-Turk, perhaps because M-Turk is a good thing to do if you're having a harder time. I don't know, what do you think about this? I think M-Turk is already a huge step forward. Yeah, I'm sorry, does everyone know what Mechanical Turk is? I glossed past that. Would one of you like to explain?
So one of the problems that we have is growing our database of people who would like to, and are able to, participate in studies, and there are various things that keep people from participating: getting to the lab, hearing about the study, feeling well enough to come in, feeling well enough to come back, not wanting to meet psychiatrists or come into a mental health hospital, if that's where we're going to take them. So Mechanical Turk is a large user community on Amazon.com that anybody with a credit card can become a member of, and there are, I think, millions of people participating now, all over the world, but you can restrict the task that you need done to people based on various attributes: gender, previous participation, geography, age. And it is excellent for doing big jobs that need to be done by people. A simple example: say you need to identify a certain object in photos, which is better done by people than by a computer at this point, and you have gazillions of photos and you wanna get it done quickly. You can put this task up for a small amount of money on M-Turk, and lots of people will do it; you can get duplicates, and you can get the task done in a few days when otherwise it would have been impossible. And so, running a psychological task now, 100 people will do it in an afternoon and you're done. And that's a miracle. How much would you say someone gets paid to do the test? It's like cents, right? This is a point of controversy. There's a large community of M-Turk workers who feel this is work and should be paid at minimum wage. And some people do agree to do that. I would say general pay is probably more like 50 cents to a dollar an hour. I'm sorry, one moment, back to the point you were making before; that's a point we nearly missed, so I'm glad that we brought up what Mechanical Turk is. Yeah, so I think these kinds of crowd marketplaces like Mechanical Turk are already revolutionizing a lot of fields where typically we have had a model where people would come into a research lab to do studies. So I think these things are gonna get bigger and bigger going forward. And also, going back to online resources, that's another place where we can get access to otherwise harder-to-reach populations, which would be difficult to reach from a research study perspective. I think Mechanical Turk would be great. The ultimate big data thing would be to look at a user on there and be able to access all the previous tasks they did, or have them take baseline tests, so that you have some quantification besides age and gender. I mean, we use Craigslist to try to solicit people to come into the lab, sometimes for our normal control group to compare with clinical populations. And we have to do screening when they come in, because a lot of times the Craigslist people are not necessarily in the category of the normal control group. So we did a study looking at anxiety, PTSD, and depression, and when they came into the lab we couldn't just assume they're normal; they're going to be contrasted with patient groups, so we had to have them do screening tests. And a lot of people are kind of professional Craigslist research subjects; sometimes they're a little shaky. No offense if you're out there. But this may be another place where technology can help us: one group that I work with pushes surveys to people ahead of time, so we don't then trouble them to spend their time coming into the lab.
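For readers curious about the mechanics, here is a hedged sketch of how a researcher might post such a screening survey as a Mechanical Turk task using Amazon's real boto3 MTurk client. The survey URL, reward, and counts are hypothetical, and the sandbox endpoint means no workers are actually recruited or paid.

```python
# Post a (hypothetical) screening survey as an MTurk HIT via boto3.
import boto3

# The sandbox endpoint lets you test without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion embeds an outside survey page in the worker's view.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/screening-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="10-minute research screening survey",
    Description="Answer a short questionnaire for a research study.",
    Keywords="survey, research, psychology",
    Reward="1.00",                        # USD, passed as a string
    MaxAssignments=100,                   # how many workers may complete it
    LifetimeInSeconds=3 * 24 * 3600,      # how long the HIT stays listed
    AssignmentDurationInSeconds=30 * 60,  # time each worker gets
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```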
That way we don't spend an afternoon doing the sessions, and if we're able to look at those results, it gives us a first pass. When you guys look at media coverage of the kind of work you're doing, what concerns you? What do you think people are starting to misunderstand about this field? I can say a couple of words on that. You can interpret that in two different ways. I think there is a lot of enthusiasm in the press that computational psychiatry is a thing, that computer science can indeed revolutionize mental health and other challenges that are still outstanding pretty much all over the world. But this overenthusiasm can also signal some things that are not true. Like I said, we are seeing a lot of things that are possible, but I don't think these technologies are at a place where they are part of the clinical paradigm; that is still a little while away. And before we reach there, I wish we were a little more careful in terms of the kinds of stories that we are sharing with the broader world. I mean, you've seen the hype cycle come and go with VR. Yeah, you know, as was brought up in the last panel about awareness, the media is great for this. On the good side, two of the first levels of breaking down barriers to care are awareness and anticipated benefit. You can have the best treatment in the world, but if people aren't aware of it and don't think it's gonna help them, they're not gonna access it. So the media does a good job of that. But by the same token, just like my journals, nobody wants to publish the negative findings, and nobody wants to cover them, unless it's a horrific negative finding that you can publish. So the media looks for success stories, and that's good, and it builds awareness and positive anticipation. But sometimes, when I get interviewed and you get a big media hit on CNN or, you know, a Washington Post piece, you get a ton of calls the next day: where can I get this PTSD treatment, or I've got a kid with autism. It's like, whoa, this is a research study for now, and we've got some preliminary data that's real positive. And sometimes that gets left out of the story. Or if you're doing anything with a virtual human coach, or a healthcare guide or concierge, if you will, it gets written up as a virtual clinician, a virtual therapist. It's like, no, we're not there yet, not even close. But, you know, that's just the rough-and-tumble part of it. I think in the end, ultimately, it's good. It's getting new ideas into the public consciousness, but you have to temper it sometimes. My apologies on behalf of the media. No, no, no apologies. Do you have any thoughts on that? Yeah, I guess I was musing a little bit about our discussion earlier about stigma and peer support, and I think that technology has been really, really important, and I agree with what's been said already about the importance of bolstering and even creating new peer support communities across the mental health community.
And I sometimes wonder if the complexity of thinking about how to be in the world as a person with mental health challenges, or to be a provider and think through a comprehensive treatment plan that takes into account questions about peer support, questions about expertise, questions about medication, questions about therapy, I wonder if that complexity can get lost as we jump on the bandwagon about a new medication or a new chatbot or a new peer support approach. I sometimes think that what I would really like to see is more conversation about how we can... I really, really loved your question, because you asked, as I understood it, where do peers and people with lived experience fit into and integrate into research studies, not "here's the research and here's the peer support somewhere else." I think sometimes there's a false wall there, and it's really our work to be taking that apart, and I hope the media can help us with talking about the efforts we're making to do that and how we want to do it more and better. That's a great segue into questions. Did anyone have questions? Thank you all so much. This has been really terrific. My name is Jessica; I'm a former NIMH research fellow, I currently develop products for the American Psychological Association, and I've also worked in startups and venture capital. Everything that you've talked about today has really resonated. In working with a lot of technologists, I've found that they can tend to overestimate the ease with which therapists, psychologists, and psychiatrists can find these types of products, services, and technologies, and also the ease with which they can understand them, because, just speaking in generalities, they are people who have spent most of their life in the social sciences, not necessarily learning how to be fluent in technology. So what do you see as the communication gap that needs to be overcome in terms of developing that fluency, and also where are the places they can look to find these studies, these technologies, if it's not already in their sphere? A really, really important question, and I find myself thinking about it too, coming from the other side, the computer technology side. I think there are indeed not enough avenues for the two communities to intermingle. I mean, this is an exception; I'm so grateful that this is happening. We need more events like this, gatherings like this, where the communities can come together and exchange notes. But at the same time, there is a research angle to it too. A lot of the algorithms or the language analytic tools that are developed are fairly opaque to anybody who is not an expert in those things. Usually that is fine, but we are talking about a context involving therapists, and even patients, if we're talking about empowering the patients themselves. They need to know how these things work, because they need to know: how is it that this algorithm, looking at these patterns in my language or the way I behave, is coming up with these estimates? And currently there is very little we can do to help them through these questions. So there's an important research area that is emerging elsewhere but is relevant here also. It's called machine learning interpretability.
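To give a flavor of what interpretability can mean in this setting, here is a minimal sketch, again with invented posts and labels, that trains a simple linear risk model and then surfaces which terms push its estimate up or down. This is the most basic form of the transparency being described here, not a production technique.

```python
# Inspect which (hypothetical) language features drive a linear risk model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["everything feels pointless", "great day with friends",
         "so tired of myself", "excited about the new job"]
labels = [1, 0, 1, 0]  # invented risk labels

vec = TfidfVectorizer()
X = vec.fit_transform(posts)
clf = LogisticRegression().fit(X, labels)

# Positive weights raise the risk estimate, negative weights lower it;
# showing these to a clinician is a first step toward interpretability.
terms = np.array(vec.get_feature_names_out())
order = np.argsort(clf.coef_[0])
print("lowers the estimate:", terms[order[:3]])
print("raises the estimate:", terms[order[-3:]])
```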
The question is how you can incorporate transparency and interpretability into these kinds of technologies, so that the people who, at the end of the day, are going to benefit from them can understand and trust these technologies in a real scenario. I think your point is also well taken about how you go from bench to bedside, and how the investment community drives something so people make money at the same time they're funding work that advances the field. I think we really need to be very basic here and say, what does the science tell us? What's the extant literature in support of some of this stuff? In VR, in the last two years there have been more VR mental health startups than in the previous 20, and a lot of them come out of the game ecosystem, game developers seeing that this might be a hot area. And it's like, well, in that ecosystem you build a cool game, people like it, they pay to play, and you're a success. When you're building apps or VR environments or whatever with technology, you've got to appeal to a higher standard. You need the data to support what you're doing, and I think that's where the app area ran into a lot of trouble. People just came out with, oh, mindfulness, oh, cure your depression. It's like the digital Barnes & Noble self-help section. For better or worse, VCs have got to look at the data. I'm with the cybersecurity initiative here at New America. Thank you so much for your thought-provoking remarks. I'm wondering if we could look forward maybe 10 or 15 years, and if the panelists could offer their vision of a best-case scenario as it applies to their own work, or more broadly to the integration of technology with your work, and, if you feel like it, maybe offer a worst-case scenario, and, if you have time, what would be the signposts along the way that we were heading in one of those directions? Who wants to start? One thing we didn't touch on this morning that I'm excited about is real-time feedback. This has been done for quite a long time in terms of biofeedback, attaching the equivalent of a lie detector to someone and measuring their excitement or upset as they do various things or are exposed to various things. We're starting to move into doing neurofeedback, and I think we're on the edge of doing real-time feedback about language analytics. I think the language stuff is particularly exciting because it's, at least from a physical perspective, non-invasive. We would want to be cautious, for all the reasons we've discussed, about the psychological and ethical intrusiveness, but I think real-time feedback is one exciting direction which may yield faster results and may help with things like in-the-room modulation of therapeutic alliance, which we know to be the strongest predictor of therapeutic success in psychotherapy. As for trouble, I think we've talked a lot about trouble this morning in terms of access to unreliable tools and leading people down potentially destructive paths. I think that one of the things that expertise may provide, partly through training but also through being someone really outside of the situation, is the ability to help someone know: are you someone who's likely to benefit from this particular kind of situation, or are you likely to be hurt?
One of the ways that the condition I'm interested in, borderline personality disorder, was initially diagnosed was that people actually got worse in therapies that helped other people. So I think it is important to be clear: all mental health problems aren't made alike, just as all people aren't made alike, and we want to be attentive to individuals but also to the specifics of a situation. And one worry I have is people self-diagnosing, or diagnosing with their friends, and maybe being exposed to treatments that may not be right for them and may actually lead them to a lot of expense or emotional hurt as they try to engage. So I'll quickly add, in terms of a positive vision, I see that 15 years down the line, hopefully, we'll have a more proactive attitude to helping people instead of a reactive one, and I think that will be a game changer in this field, especially talking about adverse outcomes like suicide. At the same time, one of the challenges is that if technology becomes such an integral part of the paradigm, then a risk we run into, and we talked about people gaming technologies, sometimes intentionally but sometimes unintentionally as well, is that if somebody drops all of these technologies and our entire system depends upon looking at those cues, then we lose contact with that person, and that can be really dangerous. So I think what is important is the balance: take what is most powerful about these technologies and use them to bolster and improve the current paradigm, instead of using them as a replacement. You know, I think the bright future is the integrated use of a lot of these technologies, all put together. The current generation, who grew up with an iPad in their hand at three years old, will come to expect this being part of their health care, and that's all good. And technology is getting there; in our area, with VR, these things eventually are going to be like toasters. Everyone is going to have a VR system in their home; they might not use it every day, but you've got it there. So there will be widespread access, integration of the AI, online behavior analysis, tracking behavior, all these things coming together. The downside, I think, is what you said: it may lead to a tendency for people to think they can self-diagnose and self-treat. I recently wrote a piece with the IEEE on an AI framework for the ethical use of this technology, and that is the primary concern. It's sort of like if you get arrested for something and you decide to defend yourself instead of getting a lawyer; the old saying is that he who defends himself in a court of law has a fool for a lawyer and a fool for a client. Well, I think it's no less true for self-diagnosis and self-treatment. I think you still need well-trained professionals to guide the use of these things. You can push more independent use, but I think it still needs to be supervised by a practicing, well-trained clinician. We have time for one more question. Actually, we don't have time for one more question, I'm sorry. The speakers might be around for another minute, but thank you all so much for coming; we really appreciate it. If you're interested in our upcoming events, please visit the New America website, where you can register, and follow us on Twitter @FutureTenseNow. Thank you so much.