So I'm going to cover a slightly different topic. Feel free to look us up at sagebase.org. We're a non-profit based in Seattle at the Fred Hutchinson Cancer Center, and what we do is try to connect groups of people who have data and would like to share it with groups of people who like to analyze data. We have a set of different systems inside the organization, ranging from a data-sharing and version-control platform for people who really like to analyze data, with a community of about 8,000 regular users, to, more recently, actually running observational studies through mobile devices, the first two of which have been approved to enroll up to 100,000 people each. And so we've had to develop a whole set of other governance capacities that let us enroll people in a way that we think is informing. So I'm going to focus on that piece, and we can talk about some of the other stuff in the panel. If you're going to enroll 100,000 people in a study that is mediated only through a mobile device, simply rendering the consent document as a PDF is not a very informing choice to make. Sally alluded to interaction and persona-based design, and that's actually what we spent most of 2014 doing. We did about 35 interviews, in conjunction with Academy Health and the Electronic Data Methods Forum, with a variety of stakeholders, and did persona creation to see what kinds of things we could do to make informed consent more of an informing process than just a static document, given that we're not going to have the experience of the participant talking to a clinician who hands them a form. What we came up with was very much inspired by the way software design works: think about a process that enrolls someone over the course of using their phone and actually informs them enough about the study to make some choices.
And so the way it works is you download an app on your phone. One of the studies is in Parkinson's symptom variation; the other is in post-chemotherapy cognitive impact. One of the key ideas from the interviews was: if it's on my phone, I don't want to just swipe right or tap on everything. There needs to be something that brings the experience of the study to the participant before they enroll. So one of the things we do is record sensor data on the phone during very specific tasks. If you have Parkinson's disease, you have hand tremor and gait disabilities. We have an enormously powerful way to measure that: we send you a notification and ask you to walk 20 feet in one direction and 20 feet back, and we record the accelerometers and the gyroscopes on the phone to get a quantitative measure of your gait. We similarly ask you to say "ahh" into the microphone for 10 seconds at a time to get the muscle tone of your voice box. Over the course of a year, which is the timeline for the study, this gives us an incredibly quantitative picture of the progression of Parkinson's symptoms. We're also doing surveys. So the idea was: during your actual consent on the phone, we're not going to assume that everyone understands that there are sensors on their phone that can record data this way. You have to shake the phone at that point in the consent process to activate the sensors, rather than tapping continue to move forward. We have an icon-based animation that shows you, as you're going through a survey, that you don't have to answer all the questions if you don't feel comfortable. And we came up with about 11 core screens that are, in our opinion, generic to informed consent for a low-risk observational study on the phone. We use the exact same screens for two very different studies: one is the post-chemotherapy cognitive impact study, one is Parkinson's.
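To make the sensor-task idea concrete, here is a minimal sketch of how raw accelerometer readings from a timed walk might be reduced to simple quantitative features. Everything here is illustrative and assumed, not the study's actual analysis pipeline: the sampling rate, the feature names, and the crude zero-crossing step estimate are all hypothetical.

```python
import math

def gait_features(samples, rate_hz=100.0):
    """Reduce raw accelerometer samples from a timed walk into simple
    summary features. `samples` is a list of (x, y, z) readings in g.
    Hypothetical sketch: real studies use far more careful signal
    processing than this."""
    # Magnitude of acceleration, with gravity (~1 g at rest) removed.
    mags = [math.sqrt(x * x + y * y + z * z) - 1.0 for x, y, z in samples]
    mean = sum(mags) / len(mags)
    # Variability of movement: higher values suggest more tremor/sway.
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    # Crude step-event count: upward zero crossings of the centered signal.
    centered = [m - mean for m in mags]
    steps = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    return {
        "duration_s": len(samples) / rate_hz,
        "rms": math.sqrt(variance),
        "step_events": steps,
    }
```

The point of a reduction like this is that a 20-foot walk, repeated over a year, becomes a longitudinal series of numbers rather than a one-time clinical impression.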
But the interface layer is consistent because they're very similar studies structurally. And indeed, we've now been able to work with three other institutions who are designing similar-style apps, to say there's actually a chance for convergence at the interface layer for informed consent, in a way that makes it more informing in many ways even than talking to a clinician, because the clinician is often in a hurry and just hands you the form and says, sign this or you're not in the study. We've released all of these things on our website as an open-source toolkit. I mean, it's just a methodology: a language that uses iconography instead of text to communicate, and that thinks about the process you move through. So it's not something that we want to make proprietary. But there's an interesting side effect, from the geek's perspective: if you start to stabilize on a set of icons and screens as a sort of baseline 11-screen consent interface, first of all, that makes it easier to do interoperability analysis, because you can see structurally if someone inserts a weird piece of information or requirement that hurts later integration of the data. But it also gives patient communities a vocabulary for what kinds of studies they want to be involved in, and that's a vocabulary that's been very difficult to create using traditional legal text. If you can actually say, these icons, these restrictions or freedoms or kinds of data are what we're comfortable with, this is what we want as a community, that can be a very powerful way to start signaling out to researchers so that some of that matching gets done ahead of time. One of the reasons I wanted to bring this up is Jason's point earlier about failed relationships being something that's really important to anticipate. We actually had a tough relationship with a patient advocacy group.
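The interoperability-analysis idea above can be sketched very simply: once there is a shared baseline of consent screens, comparing any study's flow against it flags structural divergence. The screen names below are hypothetical stand-ins; the talk does not enumerate the actual 11 screens.

```python
# Hypothetical names for the ~11 generic consent screens; the real
# toolkit's screen set is not spelled out in this talk.
BASELINE_SCREENS = [
    "overview", "sensor_data", "health_surveys", "privacy",
    "data_use", "time_commitment", "study_tasks", "withdrawal",
    "potential_benefits", "risks", "review_and_sign",
]

def consent_divergence(study_screens):
    """Compare one study's consent flow against the shared baseline,
    flagging additions or omissions that could complicate pooling
    data across studies later."""
    added = [s for s in study_screens if s not in BASELINE_SCREENS]
    removed = [s for s in BASELINE_SCREENS if s not in study_screens]
    return {"added": added, "removed": removed}
```

A study that inserts, say, an extra data-sharing restriction would show up in `added`, giving reviewers and patient communities a concrete, structural thing to point at.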
And the problem was that we were going too slow. We were developing on a research timeline, going through a design process to build something we thought would be much more effective for them. But it took so long that they got really, really emotionally angry with us, and we didn't hear that until so late in the process that it was really hard to pull out of. So my hope is that there's real benefit to treating consent as a process, not a static moment where you say yes or no to a document. The reality is that when you design a really large study, the cohort has to be analyzable, and the data need to be distributed so you can verify the sorts of predictive models that get built on them. But you can create signals so that someone going through the process understands whether it matches their individual values, and you can create a vocabulary so that communities can declare their values in advance. I think that's going to help build relationships that fit, because the sorts of uses we make of data at Sage really require fairly broad, long-term consent: we want to build predictive models, file those models with publishers, and have the data that led to the published conclusions be replicable and reanalyzable. So it's really hard to offer people the opportunity to come back and delete their data or withdraw their consent and still be consistent with the scientific process we're taking forward. That's why it's so important for us to connect with communities and know at the beginning that their values match our scientific processes. Because there are implicit values buried in the way certain kinds of science get done, especially large-scale data science, that are not going to be comfortable for all communities. And I think it'll help all of us if we can do some of that impedance matching up front.
And I think I have like one minute left, or am I done? Okay, so just the last piece of this: we think a lot of people are going to do these sorts of app-based longitudinal observational studies because they're easy. We're standing up two and we don't have any funding; that's how cheap it is to roll this sort of stuff out. And we're going to be making all of the source code and all of the web services available to other people who'd like to run these kinds of studies. But the thumb we're going to put on the scale, the tax we're going to levy, is that no one gets to use the architecture unless they return all the data to the participant, on demand, in an exportable format. So we're making these sorts of unitary demands on the cohort from a science perspective, but the deal is, if you want that, you have to give complete access to the data back to the participant, and not just as a downloadable tarball. What's more important is that you have to be able to push the data to a third party, like PEER or some of the other platforms we've heard about, where you can manage your data as a participant once it's gotten outside of that clinical research context. I'll stop there.
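The "exportable format, not just a tarball" requirement amounts to giving each participant a machine-readable, self-describing bundle that a third-party platform can ingest. As a rough sketch of what that could look like, here is a hypothetical JSON export; the schema, field names, and version string are all assumptions, not the actual architecture's format.

```python
import json

def export_bundle(participant_id, records):
    """Bundle a participant's raw study records into a portable,
    self-describing JSON document that could be pushed to a third-party
    platform. Hypothetical schema for illustration only."""
    bundle = {
        # Version the schema so receiving platforms know how to parse it.
        "format": "participant-export/1.0",
        "participant": participant_id,
        "record_count": len(records),
        "records": [
            {"task": r["task"], "timestamp": r["timestamp"], "data": r["data"]}
            for r in records
        ],
    }
    return json.dumps(bundle, indent=2)
```

The design point is that the export is structured data rather than an opaque archive, so the participant can carry it to whatever platform they choose to manage it on.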