So I first started getting into public health and platforms in 2011, when I became more aware of my own health. I took up running, and I joined Tumblr to document and journal my athletic endeavors, which was a very motivating and healthy practice for me. I got a lot of external motivation and validation for my fitness goals. This is me after eating four burritos and running one mile in 35 minutes, which is not the healthiest thing to do, and lots of people cheered me on.

But I started to notice on Tumblr that there was a crossover between being part of a health-focused community and another kind of community that was on the more negative side, promoting self-harm and eating disorders. These communities rode a gray area: there was overlap between pro-health, pro-recovery communities and communities that embraced and glorified unhealthy behaviors, using memes and selfies. It became so challenging for Tumblr as a platform that it instituted a policy prohibiting self-harm content in 2012, and when you searched for pro-self-harm content, it would show you a PSA.

Now, this isn't the first time a platform has had to deal with this. Back in the early 2000s, Yahoo and AOL prohibited this kind of content. And this is what you see on Instagram today when you search for anorexia: it gives you a PSA and also promotes pro-health content.

So I'm interested in how platforms respond to this issue, because you can see a similar dynamic, almost a pattern, play out across public health as a whole arena. Take vaccines, for instance. Measles was effectively eliminated in the United States in 2000, and yet just since the beginning of this year we have seen 700 cases reported across 22 states.
That's in part due to things like these memes, which Nat Gyenes and An Xiao Mina describe as part of a misinfodemic: the spread of a particular health outcome or disease facilitated by viral misinformation. And as with self-harm content, there's a response on platforms. There's grassroots counter-speech from people challenging misinformation, sometimes with humor. I like this one on the right, where the original poster's mom replies, "No, you are fully vaccinated. This is embarrassing." So there are fun ways to deal with it from the grassroots side. But the platforms themselves also take their own approach: de-platforming, banning certain forms of conversation.

Looking at the types of content we find challenging as part of a pattern can show us why it keeps emerging and how effective our responses are. If you look at this graph, you start with engagement around misinformation, moving down to normalization of the behavior and eventually extremism, alongside eroding trust in existing institutions. And this maps well to the vaccine case. You start with somebody, maybe a concerned parent, who looks at vaccine misinformation and says, "Oh, I'm concerned for the health of my children." As they dive deeper and deeper into the rabbit hole, they become resistant and eventually participate in spreading misinformation themselves. Another way to look at this: some researchers have studied conversations around vaccines on Facebook, and they find increasingly nuanced motivations for why people embrace this kind of content.

So what should platforms be doing to reduce the spread of harmful information around public health? I have a few ideas. First, engage the public health community: support its efforts with counter-speech, and get its help in establishing good practices.
Second, we should resist the instinct to treat this as all or nothing. There may be cases where the blunt force of a ban is a really good idea, and there may be cases where we'd benefit more from intervening within these communities. Third, think about every feature on your platform as its own platform. Not just the feed on Facebook, but the recommendation algorithms, the engagement mechanics, the likes: those are all their own little ways of engaging, and they can all be used for harm. Fourth, and finally, consider the public health approach for all harmful behavior on platforms. Think about radicalization, hate speech, and dehumanization using the same kind of approach.

Thank you very much. I really appreciate your time. Thank you.