If you're tired of me, I'm going to introduce Gideon Lichfield, who's going to moderate the second half of this. Gideon is a senior editor at Quartz, and he's also a fellow at the Data & Society Research Institute, where he writes science fiction about the near future. Thank you very much. And I think we're going to need one more chair, right?

So without further ado, I'm going to invite Nick and Jacqueline to come up onto the stage. Jacqueline is a historian of quantification. She is the author of How to Do Things with Numbers: Histories of Quantified Cultures and Lives, an assistant professor of English at Arizona State University, and a fellow of the Lincoln Center for Applied Ethics. And Nick is an assistant professor at the College of Journalism at the University of Maryland, College Park, and a member of the Human-Computer Interaction Lab there. So thank you both very much for joining me. Our session is called Quantifying Ourselves to Feed the Algorithm.

I guess there are two ways in which we quantify ourselves or are quantified, and I want to start with you, Jacqueline, on the way in which we quantify ourselves, because the bit we're then going to come to is how other people or other businesses quantify us, and what happens to that data. So talk about quantifying ourselves. How has that evolved, and where are we now in terms of our level of self-quantification? It's probably mostly just for nerds at this point, maybe, but are we all going to be super self-quantified very soon?

Well, I don't know. One of the things that I work on is the long history of quantifying human behaviors and human lives, and I start in the 16th and 17th century with the rise of mortality counts, essentially. So that's very much a moment in which people are counting other people; people aren't quantifying themselves. But one of the things that I do is create a link between today's quantified-self movement and early life writing: confessional writing and other autobiographical modes, which very often took the form of receipts, record books, accounts of days, things like that. Think of someone like Samuel Pepys and his compulsive need to write himself down; he writes it down in three different forms before it finally becomes something like his diary. So this move to record our lives in order to see them is not new, right? But there's this newer push: in the Jawbone advertising, "there's a better you out there, go find it," or in other slogans, "know yourself better." The "know thyself" dictum is very old, but "know yourself better" is this idea that a kind of abstraction and quantification will provide insight. And this goes back to some of the things people were talking about earlier, that somehow there's a hope that the computational will provide better insight than the human mind, to go back to your "should you trust that person?" So the move now is very grounded in sports and athletics, in bodies and the ways that bodies behave, but it's an interesting abstraction of bodily behavior into something that gets fed up into a cloud and compared against metrics that are much larger than the individual. One of the things that I talk about is that the modern quantified-self movement is very much about outsourcing the labor of counting.
And this I think we'll get to when we talk about companies. But with tracking devices, much of what people are doing is providing companies with that data for free, right? And not even for free in the case of something like Nike Plus or other devices that you might buy: we are paying to get the devices in order to then give our data away for free. So there's a kind of outsourcing of the labor of counting that I think is particularly interesting in our modern moment.

Yeah, so would you say that at this point the biggest beneficiary of our move to know ourselves better is us, or the companies that use that data? And do you think, and you may have a view on this as well, Nick, that there is starting, or will start, to be a backlash against the extent to which the companies that collect our data benefit from it?

I think the rhetoric inside of the QS movement is very much about self-empowerment, and so I think most of the people who are using it think that they are empowering themselves. My own sense as a scholar is that that's part of the packaging that makes the dispersion so powerful and possible. So I think who's really benefiting, at a monetary and power level, are large corporations and governments or multinational entities, even as individuals feel empowered.

But I would push back at that and say, certainly it is part of the rhetoric of these quantified-self services, but at a very human level there is a lot of utility in being able to optimize behavior. Products like Automatic, which is a car system that helps you optimize your driving, or Spire, the Fitbit-like thing that watches your breathing and helps you calm down and be less stressed: we put these things on, and attach these things to our cars, because in some form we want to feel like we're in control of ourselves, and we're using these techniques to exert agency in these situations.

I think you're absolutely right. And the logic of self-mastery has been there since the Augustinian tradition, so this is a really old impulse, in Western culture in particular. It's interesting that you brought up the Spire. One of the things that I've been doing is getting all of the devices and wearing them around, and all of these devices have very gendered protocols. With the Spire, the two places where you're encouraged to wear it as a woman are either on your bra, right in the middle, on your breastbone, or right at your pelvis, at your waistband. And then it buzzes you when you're tense. And I don't know about the rest of you, but I have to say that a vibrating stone on my breastbone is about the worst possible feedback mechanism for tension I could have imagined, right? So I think there's a really interesting disconnect between design and the conceptual ideas that go into our quantified-self technologies, and then their actual use.

We are seeing some pushback against this, right? I mean, there are various attempts to subvert some of these tracking technologies. I was interested to see Unfit Bits recently, which is a project where basically they're designing interventions that hack the Fitbit and make it look like you're being more active when in fact maybe you're not. And the motivation is that, well, insurance companies are now starting to give discounts for physical activity.
And so if you're not the physical-activity type, but you want the discount, maybe Unfit Bits is the kind of thing for you. There are other projects going on, like AdNauseam, which I'm familiar with; Helen Nissenbaum at NYU is working on this. It's essentially a browser plug-in which will automatically click every ad in the browser for you, as a subversive way to create a lot of noise in the advertising network: instead of the network tracking exactly what ads you've clicked on and what products you're interested in, it all just washes out and they're not able to tell anything about you. (A toy version of that noise idea is sketched after this exchange.) There are other versions of that, like Ghostery, that block tracking; they do the opposite, they try to create perfect silence. It doesn't always work. But I think you're absolutely right that there's a ton of effort right now to subvert these various tools. And coming back to the literacy point from earlier, I think that's actually a really important form of literacy we can develop in the public: how do you actually poke and prod these systems, how do you subvert them, how do you get the algorithm to treat you the way that you want to be treated?

Well, in the backlash I can see an aspect of this: there are two things pushing you to take part in this kind of self-quantification ecosystem. One is the economic pressure, if you like, or the technological pressure, which is things like insurance companies saying if you don't provide us with data then you'll have higher premiums. And the other is the social pressure: look, all the people in my office are wearing Fitbits and I'm the only one who's not, and they must think I'm a slob or that I'm not taking enough care of my body. I was having a discussion with someone recently who said, I think that in the future there will be communities of people who are totally wired in, and then there will be communities of people who are off the grid and refuse to have anything to do with this kind of technology. So it seems to me that there are in fact two ways of rejecting it. One is to go off the grid and try to block it. And the other, as you say, is to subvert it: to provide the data, but find ways to do it so that they're not real data, or so that they're disguising something. Do you think both of those will happen?

Well, there's maybe a third method as well, which is a kind of sandboxing, which is something that I've tried to do when I track myself. After I think the third Fitbit that went through the wash, I said, I've had it, I'm not doing Fitbit anymore, and I started using the Moves app, which is owned by Facebook. So I guess Facebook now knows how much activity I do; it tracks me around the city and it can tell me how long a bike ride I've taken and so on. But I'm very careful with that app in particular not to log in. I don't link it to Facebook; I don't log in at all. So there's really no connection to the outside world; I'm just using it for my own visualization of my activity. And I think that's an interesting strategy for not necessarily subverting, but also not necessarily bowing down to, the interests of these corporate actors.
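[A minimal sketch of the obfuscation-by-noise idea behind AdNauseam that Nick describes above. This is a toy simulation, not AdNauseam's actual code; the ad categories and click counts are invented for illustration.]

    # Toy simulation of AdNauseam-style obfuscation: a plugin clicks every
    # ad, so the tracker's inferred interest profile washes out in noise.
    import random
    from collections import Counter

    CATEGORIES = ["travel", "fitness", "loans", "cars", "fashion", "tech"]

    def inferred_profile(clicks):
        """What an ad network learns: each category's share of all clicks."""
        counts = Counter(clicks)
        total = sum(counts.values())
        return {cat: round(n / total, 2) for cat, n in counts.items()}

    genuine = ["travel"] * 20                 # a user who only clicks travel ads
    print(inferred_profile(genuine))          # {'travel': 1.0} -- easy to target

    random.seed(1)
    noise = [random.choice(CATEGORIES) for _ in range(600)]  # plugin clicks everything
    print(inferred_profile(genuine + noise))  # near-uniform: the signal is drowned

The design point, as Nick contrasts it with Ghostery, is that the tracker still receives plenty of data; it just can no longer separate signal from noise.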
Realistically, I suspect that very few people will be as technologically sophisticated as you are and do that kind of thing. So if we talk about the majority of people, who end up one way or another being very closely quantified by these companies, what other ways are there for them to assert more control over their data, or at least have more knowledge about how it's used?

I think this is a very tricky domain, and I would say that in the ecology you imagined, where there are the super-engagers and then the non-engagers, there is a whole field of people who already exist, who are being quantified by others but who don't themselves have access to these technologies. So you think about advertising firms that buy data about mortgages and so on in the Detroit area, and then you end up with targeted advertising for subprime loans to those individuals, which perpetuates the cycle that we originally saw in redlining. You also have immigrant populations moving across the US-Mexico border who are being tracked and quantified. The move to share data internationally is rapidly growing, and the number of NGO and international-coalition efforts to quantify bodies and their behaviors in the global south, for example, should give us a lot of pause about who even knows they're being quantified. So: I use my Facebook, or I bought something at that store, there's that level. I have a Safeway card, there's that level. I use my app and I connect it, there's that level, because there's a community aspect to much of this, talking, sharing; you were talking about the office peer pressure, and some of it's about competing with one another. But there's a whole tier of people for whom all of those levels of access are not even possible, because they are the people who are being quantified about, not the people who are doing any of the quantifying.

And I think an important dimension of that is transparency, and part of this is going to fall back on the corporations and the other organizations, governments and so on, the various institutions that are quantifying people: to develop mechanisms and standards for what you can disclose about the data, about the algorithms. What are the strings on the marionette that are even being pulled? Earlier this year at the Tow Center for Digital Journalism at Columbia I organized a workshop on algorithmic transparency in the media, and the goal was really to start brainstorming and just writing down everything we could possibly think of that could be disclosed about things like the Facebook news feed, or automated news-writing algorithms, or automated curation algorithms. And something that came up over and over again was this notion of algorithmic presence. Am I in a space where I'm being watched, where I'm being incorporated into some kind of algorithmic system? Is there a way for me to turn it on or off? Can I see what the system looks like to someone else, kind of stepping into someone else's shoes? So that whole personalization aspect of what do I see versus what does someone else see. There's certainly a lot of follow-up work to do in this space of algorithmic transparency, but we're starting to see the first inklings that there will be a demand for this kind of information disclosure.

Are there any examples you can point to right now of how that's starting to happen? In other words, of people's awareness of their algorithmic presence, as you put it?

People's awareness of their algorithmic presence?
Well, systems that allow them to have that insight, or that visibility into where their data is being used and how they're being operated on.

Yeah, I mean, I think there aren't great examples out there. There are some prototype-level design efforts, some things that I've worked on, for instance, around online rankings: how could you actually communicate the idea that an online ranking is composed of a dozen different pieces of data that have been reweighted and formed into an index? I think there's actually a lot of user-interface work that could be done to figure out the right ways to signal this to people, because you don't want to overwhelm them with too much, right? You don't want to create a totally overbearing information space where now they're not interested in your product at all. These bits of information need to be integrated into the individual tasks that people are doing with their applications.
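[As a concrete version of the ranking-disclosure idea Nick sketches here, a minimal, hypothetical example: a ranking built as a weighted sum of normalized signals, with an explain() helper showing the kind of per-signal disclosure a user interface could surface. The signal names and weights are invented, not taken from any real system.]

    # A hypothetical ranking index: several normalized (0-1) signals,
    # reweighted into one score, plus a transparency affordance that
    # breaks the score back into per-signal contributions.
    WEIGHTS = {"relevance": 0.5, "recency": 0.3, "popularity": 0.2}

    def score(item):
        """Weighted sum of signals -> the single opaque number users see."""
        return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

    def explain(item):
        """What could be disclosed: each signal's contribution to the score."""
        return {k: round(WEIGHTS[k] * item[k], 3) for k in WEIGHTS}

    items = [
        {"id": "a", "relevance": 0.9, "recency": 0.2, "popularity": 0.8},
        {"id": "b", "relevance": 0.4, "recency": 0.9, "popularity": 0.9},
    ]
    for it in sorted(items, key=score, reverse=True):
        print(it["id"], round(score(it), 3), explain(it))

The interface question Nick raises is then how much of explain()'s output to show, and when, without overwhelming the user.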
There's a backdrop to all of this, which is that with a lot of the services we're talking about, whether it's Fitbits, where you are notionally quantifying yourself, or the collection of your data by other companies, there's been this implicit transaction, I don't even know whether to call it an agreement, a conception that your data is basically free, that it doesn't have value. You agree to give up your data in return for a service, and you call that service free, like Facebook or Gmail or whatever else. You treat that service as free and you give up your data because you don't think of the data as having value, or because you perceive the service you're getting for it as free. In other words, people are not thinking about paying money; they're paying in data, but they're not thinking of that as actually paying. Is there something that could shift that perception for people?

I think talking publicly about how much money companies pay in order to get that data helps raise awareness. Sara Watson does some really great work on glitches, on how glitches are the space where you begin to see the circulation of your data separate from yourself. She has this insanely poignant example that I think is so powerful. A man opens up his mailbox and gets a piece of mail that's addressed to him, his name, his address, and in between the two of those is "eldest daughter died in car crash," and it was from an auto-insurance company. What had happened was that someone bought a data set about people whose family members had died or been injured in car accidents and then targeted them with advertisements. And of course in this particular instance he's confronted not only with the knowledge that that data is out there circulating, but also, to go back to the notion of haunting, with a kind of haunting by his now-deceased daughter. So she's done really good work with that.

We recently did a thing we called "data shed": how much information your phone is spewing outward at any given moment. We did an installation performance where we turned that into both sound and vibration feedback, sonification and haptics. And so people could walk around and feel that even when they weren't doing something on their phone, their phone was still talking and sharing data. It was a very early-stage event, but it was affecting for people. They were like, oh, why is my phone doing that now? Who's it talking to now? What's happening? And that piques a certain kind of curiosity, which I think can drive literacy in some ways. But it also just kind of creeped people out. Some people were like, well, I guess there's nothing I can do about that, and they throw up their hands. So I think this is a thorny problem: how do you get people to care enough, and then to have the literacy that allows them to understand the transparency?

Yes, although it also seems to me that when you inform people, when they learn about the things that are going on, they are often shocked at first, but then they accept it pretty quickly. They say, well, okay, yes, my credit card was in this database that was hacked; or sure, I'm using the same password everywhere and it's probably been compromised; or sure, all of these things that I thought were set to private are not actually private. But then they go, well, I'm part of the system, I can't fight it. They don't have a pathway. There seems to be a sort of apathy.

Yeah, perhaps there is not that very salient example that would get people to wake up and realize that they were losing some privacy there. I recall John Oliver, when he visited Ed Snowden, tried to use the notion of a dick pic to get people to perk up: the government can see all of your dick pics, right? Trying to really get everyone's attention and say, you want to talk about privacy, this is what the government can see about you. But I don't know that we have a lot of salient examples that would really get people to perk up and start thinking about what they're doing. And I think the reality is that the value and the power really are in the aggregation of this data. You think about something like 23andMe. Sure, I'm interested to see my genome sequenced and compared to other people and so on, but the scientific value of having tens of thousands of those genomes is immense and obviously could lead to all kinds of breakthroughs. So in some cases it might be worth it.

There's an interesting case recently with respect to genomic data and privacy. People very often think, oh, that ship has sailed, or, in order to get this information I need to give this thing. There was a study done of DNA in Iceland, a very small population, where they had enough data from people who volunteered blood samples to reverse-engineer the profiles of everyone else in the nation. And this is the possibility of being opted in simply by the volume of information: there is no opt-out in that situation. I think those are moments where it's very difficult and people can feel quite thwarted. But to go back to this idea of statistical insight that came up earlier, I think people have a hard time feeling like they have full knowledge and authority to understand what's happening, and so they feel thwarted in their attempts to resist.
I mean, I think this issue of being watched and being datafied is also an interesting one. Certainly we can invoke the Facebook emotional-contagion study, which did have a bit of backlash that eventually petered out. But this is happening in a lot of different areas of online communication. Facebook is doing it, they're reading your messages, but the New York Times is doing it too: if you write a comment on the New York Times, they're going to put your comment through an auto-verifier that decides whether or not you're going to be verified, and whether or not your comment is going to be posted immediately to the site. And the way those algorithms work, the way your comments are datafied and that decision is made, is consequential for the types of conversations that will emerge and the kinds of discourse that are privileged. Are you privileging things that are written in more sophisticated language, for instance, or more readable? These can obviously bias the direction of things. I'm also reminded of another example of this type of technology, sentiment analysis, where algorithms will look at your tweets and try to figure out whether you're talking about a product in a positive or negative way. And what research has shown is that these methods for detecting sentiment in short messages are severely skewed toward male versions of highly salient, perhaps more aggressive, expressions of sentiment. So if you're looking to do product research on Twitter, for instance, you would want to be aware of these types of biases, of what you're leaving out and what you're actually able to quantify and operationalize there.
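[A toy version of the skew Nick describes: a lexicon-based scorer, one common approach to sentiment in short messages, whose invented lexicon only contains loud, emphatic terms, so quieter phrasings of the same sentiment register as neutral and drop out of any aggregate analysis.]

    # Toy lexicon-based sentiment scorer. The lexicon is invented and
    # deliberately skewed toward emphatic expressions, so understated
    # positive or negative messages score 0 and become invisible.
    LEXICON = {"awesome": 2, "epic": 2, "killer": 2, "sucks": -2, "garbage": -2}

    def sentiment(text):
        words = [w.strip(".,!?;") for w in text.lower().split()]
        return sum(LEXICON.get(w, 0) for w in words)

    print(sentiment("This phone is awesome, epic battery!"))      # 4: counted
    print(sentiment("Quietly pleased with this phone."))          # 0: invisible
    print(sentiment("Rather disappointed, would not buy again"))  # 0: invisible

Whatever populations tend toward the second and third styles of expression are simply missing from the resulting "product research."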
We're starting to touch now on a question that I actually want to save for the next session, which is essentially: all right, if all of this data is being collected and used, and if there is nothing you can do about it, what can you at least do about the ways in which that data might be used, in which different people might be affected disproportionately? In other words, the disparate-impact theory: the idea that some people, just by the nature of the data that's collected on them, are going to be more disadvantaged than others. So I don't really want to get into that right now, unless you have some particular thoughts on it.

The one thing that I would say is that there are ways people can do something about it, and I'll just give a quick shout-out to Take Back the Tech, which just put out a DIY guide to cybersecurity that is money, and it's meant for the average user. So there are certainly people, especially people in feminist, trans, and queer spaces, who are working very hard on some of these issues of how you rein in the data bleed. But I think also, the question of bias or disproportionate impact, of intersectionality with respect to data, I actually think is a super important problem that hasn't been grappled with sufficiently. But I won't say more unless you...

Well, actually, since you brought it up: you've written something called, I think, the intersectional data manifesto. Can you talk a bit about that?

Sure. That came out of a meeting at the Berkman Center with an amazing group of people, and that particular manifesto was a collaboration between myself and Jamia Wilson and Tanya Peterson. And the idea there is that data models, data as an abstraction, are a way of modeling the world, or behaviors, or interactions. One of the things that I feel we see in much of the popular deployment of algorithms and algorithmic culture is an assumption of a kind of universal white, upper-middle-class, male, cis, able-bodied version of things. This is certainly true, I think, in the Fitbit spaces, where you're putting your data up against an aggregate, and that aggregate is fundamentally limited in its scope. And one of the things that data analysis can do very well, and we see this in computational biology, is deal with very complex systems. Intersectionality is perhaps the most complicated of systems: the idea that we come from a series of overlapping points at which power is inflected over and around and through different people in different ways. And I think it would be really interesting to think about what it would mean to demand intersectional data, to say: yes, it's complex, we get that. I was just at an event where someone from the Internet Archive said, oh, that's too complex, we can't do it. And I thought, that's hilarious, of all the things in all the spaces, right? So I think there might be a way of drawing on the power of complex computational systems to get at some of those things that, as you were saying earlier, drawing has a hard time with. Dealing with an intersectional subject position, and the many of those that exist in any given room, is intensely complicated, but we actually have tools that can handle that kind of complexity. And it would be nice to see our data models move toward that richness rather than pull away from it.

And I would just add to that: this brings in maybe another dimension of algorithms and data, and it ties in a little bit to bias. We talk about algorithms as recipes, and I certainly think that's an apt characterization of most algorithms, but there's also this whole class of machine-learning algorithms, which are really more like recipes for learning other recipes. Imagine writing down a recipe that described how to learn how to make a pizza: you're not just following the steps for making a pizza, you're following steps for learning how to make a pizza. That might involve watching the chef knead the dough, watching the chef toss the dough, looking at the ingredients that go into the sauce and what types of cheese, and so on. And if you set that recipe for learning recipes loose in Chicago, you would learn one type of recipe for a pizza, but it would be a different recipe than if you set it loose in Connecticut. So basically, the data that we feed into these machine-learning algorithms can lead to very different recipes being learned, and that brings up the need to be aware of the types of biases that are trained into these systems, and to recognize, again, the intersections of those.
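[Nick's "recipe for learning recipes" point, in a few lines of code: the same learning procedure, fed different data, yields different learned rules. All the numbers here are invented; the only claim is structural.]

    # The same learning recipe applied to two different data sets produces
    # two different learned recipes (models). Crust thicknesses in cm are
    # invented: Chicago deep dish is thick, Connecticut thin crust is thin.
    def learn_pizza_rule(examples):
        """Learning recipe: 'a proper pizza' has crust near the local mean."""
        mean = sum(examples) / len(examples)
        return lambda thickness_cm: abs(thickness_cm - mean) < 0.5

    chicago_model = learn_pizza_rule([3.0, 3.4, 2.8, 3.2])
    connecticut_model = learn_pizza_rule([0.4, 0.6, 0.5])

    print(chicago_model(3.1), connecticut_model(3.1))  # True False
    print(chicago_model(0.5), connecticut_model(0.5))  # False True

The learning procedure is identical in both cases; only the training data differs, which is exactly where the bias Nick warns about enters.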
I want to come back to what you said at the beginning, Jacqueline, about quantification as an outgrowth of journaling, of Samuel Pepys and so forth. Do we get to know ourselves better by quantifying ourselves, or are we losing certain kinds of knowledge? Or, indeed, are the categories being created too narrow or too simplistic?

That's a really complicated question. I think the rhetoric is certainly that you know yourself better, in part because the idea is that by measuring yourself against others, you have a better sense, almost a dispassionate, objective sense, of who you are and how you operate. As a historian of science, I have spent a lot of time thinking about how that myth of dispassionate, objective knowledge was produced, partly by the rise of mercantilism and the imperial state in Britain in the early part of the 17th century. So I'm very hesitant to say that we know ourselves better in the course of quantification. I think there are certainly things that cannot be quantified; the things that computation can do are not total, and I think there is a real danger in our long fascination with total knowledge and in how quantification feeds into that. Things like sentiment, or affect, or trust, safety, a full and rich good life: those things are awfully hard to break down into numbers. But I think there's an interesting interplay between words and numbers, and I don't think we can get rid of one or the other. I don't think one wins out. I think it's an ongoing cultural interplay.

Right, there's also another side to it that I'm thinking of, which is dating. Anyone who's used a dating app or a dating website has done, at the very least, a limited amount of self-quantification, because they've probably put down their height and their weight, and so that's already parameterizing our choices to an extent. I can imagine a dating app in the future that collected all my Foursquare check-ins (I don't use Foursquare check-ins, but let's say I did) and then did some kind of comparison with someone else's to see, essentially, how compatible we are in the places we travel. Maybe it helps me make choices, or helps them make choices, of who to talk to based on things like that, and then on other things we've quantified about ourselves, to the point where the serendipitous nature and the qualitative nature of identifying who might be a good mate starts to be stripped out. Would that be bad, is my question, I guess. It would certainly be different, but would it be bad?

eHarmony would have you believe it's a good thing, right? I mean, they facilitate, I don't know what their numbers are, thousands at least of marriages every year. But what's the quality of those marriages? I think they would say they're better than the in-the-wild types of meetings that you would have. That might be right; I don't know. But could they make mistakes? There was this great New Yorker cartoon last year of a woman and a guy getting a coffee at a coffee shop, and she's looking back at him, annoyed, saying: how could an algorithm think that we'd be a good match? And I think it's really an astute observation that we quantify because it facilitates these kinds of interactions, but there's no guarantee of accuracy, or that it will be less error-prone.
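[For the moderator's check-in hypothetical, one plausible, entirely hypothetical mechanic would be Jaccard similarity over the sets of venues two people visit. The venues and the compatibility threshold below are invented for illustration.]

    # Hypothetical dating-app compatibility from check-in history:
    # Jaccard similarity over sets of visited venues.
    def jaccard(a: set, b: set) -> float:
        """|intersection| / |union|; 1.0 means identical habits."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    me = {"climbing gym", "ramen bar", "indie cinema", "dog park"}
    candidate = {"ramen bar", "indie cinema", "jazz club"}

    score = jaccard(me, candidate)             # 2 shared / 5 total = 0.4
    print(round(score, 2), "compatible" if score > 0.3 else "pass")

A metric like this rewards overlap in past behavior, which is precisely how the serendipity the moderator mentions gets parameterized away.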
Well, I want to maybe suggest that algorithms are on average quite good at predicting who would be a good match, so we'll have fewer of those really crazy, intense relationships that drive us nuts but are kind of life-changing, but maybe also fewer conflicts and marital murders. I don't know.

I don't know. I mean, if we go back to the marionette metaphor, if the algorithm is just a kind of puppet playing things out, I definitely don't subscribe to the idea that these things are bad. And I think there's an interesting thing about the tyranny part of our title. Insofar as quantifying is about state power and state control, I think it's apt, but I think people are also afraid of it, perhaps unnecessarily, and I don't want to say that quantification, especially in matchmaking, is bad, because it seems to turn out well for people. But I think any time we tip toward a single way of knowing, especially a single way of knowing that depends on models of abstraction that thus far have largely been controlled by a relatively narrow population, that makes things incredibly difficult. If we have 100,000 algorithms written by 100,000 people for 100,000 different things, that kind of diversity, even in the algorithmic programming, seems to me like it would be a benefit, rather than a kind of homogenizing command-and-control figure.

Right, and that's actually something that we haven't yet mentioned in this discussion, but it is also in the background. As a friend of mine from Silicon Valley likes to say, and he belongs to the set that he's talking about, there are a few dozen young white guys who are basically writing the rules for the entire world now, and that is introducing all sorts of biases. We have a few minutes, maybe, for questions. Does anybody want to ask something? Up here, and then who else? We have a microphone, thanks. They're live-streaming; that's why I think they like you.

One type of quantified self that didn't get discussed at all is the medical side of it. I keep hearing about these new startups that are trying to make apps that do things like, if you're diabetic, a little thing that will track your blood sugar through your skin or something. I might have made that one up, but stuff like that, where it's actual medicine and there's a clear medical benefit. And I guess I always thought that was the longer-term direction of quantified self: things that were actually, you need to take your insulin now, or, your heart needs more exercise or you're going to be screwed, you know? I'm curious what you see as the evolution there, what you think about the more medical space.

I mean, I think it is clear that a lot of R&D and a lot of innovation around quantified self is happening in wearable technologies in particular; wearables start in some ways with the military and with hospitals. And I think there is some real benefit to that. But for me it falls in the same line of the history of science that has, from the very beginning, drawn on a one-sex model.
So the male model is the standard, and we see this in clinical trials; we see it in the way things are adjudicated as norms with respect to health. And I think the idea that these are all neutral goods has to be troubled a little bit by that long history. There's an example right now of a device that was on Kickstarter called the Looncup, L-O-O-N, which is a menstrual cup that's Bluetooth-enabled. It's a perfect example, and many of the women in the audience are shaking their heads, like, seriously, WTF. Among the things it's such a great encapsulation of are issues around privacy. The device has to stick out of the body for the Bluetooth to work. It can't go through airport security, because you'll be asked to remove it; they'll think it's a bomb. And then there are the things that it measures. It seems like a perfectly benign idea until you start to really think about it, right? People have irregular menstrual periods; they want to track them; people have been doing this for a very long time using pen and paper; we're just making it easier, right? But look at what it tracks. It has an RGB sensor in it, and there's no clear diagnostic value to knowing the RGB value of your menstrual blood. The Bluetooth feature sends texts to a phone about when you need to empty your menstrual cup, and I think for most women in the room that is not something we need a phone to tell us. But it also could potentially be used to tell your partner when you're available for sex, right? And that kind of unintended consequence I think is really dangerous, especially when we're talking about automated technologies that are positioned as things to make our lives easier but might also compromise us in certain ways. So I think there's a lot of opportunity in the bio-design space and the healthcare space. I also think it's super tricky. The first thing we did was go out and buy one of these Looncups; it'll be coming in January, so that we can figure out how you can hack it, and what could be done with a hacked Looncup. I'll let you know.

I'm looking for it here. Can you talk for a second? Mm-hmm. Yes?

Jacqueline mentioned the centuries-old tradition of recording information through journaling and such, and it seems that was, I guess, the qualified self. Whereas now what we're dealing with is how much and when and where and maybe with whom, but there's nothing about a why. And that old way was a why. And I'm wondering if we're just in this tech era where we're all thinking about numbers. Are we going to have something that helps us with those whys? An algorithm, or a qualified self, or some other tools?

That's an interesting way of framing it. When I think about something like the spiritual exercises in the Jesuit tradition, those actually were not very narrative, despite being written in prose. The process was about counting the number of times you did something while walking through the stations of the cross, and marking where you were when. And Samuel Pepys, interestingly, and this is something I'm totally fascinated by: many of the early essayists and life-writers write things down as financial records first, and then recopy them adding detail, recopy adding detail, recopy adding detail.
So Pepys, for example, writes everything in a military log where it's just transactional, and then literally transcribes it three different times into three different forms, and what we have as his diary is the final narrative version of that. So I think there has always been this interplay between the quantitative and the qualitative, between the numerical and the prose- or word-based. And the people who are in the QS movement, and you might be able to speak to this a little bit, the real devotees, think that they have the why; they build in the why. I think it's more in the popular-culture deployment that we see less of that why, but I think it's always there.

I think explanation is actually a fascinating area to move toward in this domain, because people do always want to know why, why, why. And the more we can open up these systems, explain the connections between your data and other data, and do so in human-readable terms, I think that's going to help the user experience for these kinds of systems, and hopefully make people feel more at ease with how their data is being used.

Thank you. Great. Well, thank you both very much, Jacqueline and Nick.