Good morning. There's so much springtime energy in the room, so much sunshine inside and outside. Welcome. Welcome to Design to Thrive, the futures of behavior design for well-being.

So even when we aren't using our eyes, they're active. Phosphenes are the little light show of stars and geometric patterns that seeing folks experience when we close our eyes, as our brain digests and processes the things that we've seen throughout the day. Even people who've lost their sense of vision, but have had it at one time, can see these patterns. And physical manipulation can generate them. You can try it with me. Are you ready? So close your eyes, very gently, gently. Touch them with your fingertips, and then move your little fingers around. Or you can rub your eyes with the back of your hands like you do when you're tired. Do you see the little lights? You can see the bits of light move around where you touch them. Those are phosphenes.

So they can appear as geometric patterns. They can also appear as random spots of color. There have actually been researchers who've tried to design the patterns and give you something like fifteen different shapes. And even without applying pressure manually, and without any visual information to process, our inactive eyes and our processing brain will still see patterns when we close our eyes. And that's because our body also produces its own light. You're all sitting here producing your own light. Biophotons are the tiny emissions of light given off by our resting retinas. It's a sort of visual noise. And research suggests that retinal noise occurs not in response to zero light, but rather in response to a very specific type of light: your own light, the light you generate with your body.

So phosphene patterns are generated by external stimuli, and also by our own internal machinery: little cells emitting light, and our brain keeping the camera rolling long after bedtime. We're sort of the cameraman and the audience at the same time.
So too with the patterns of human behavior. Whether we're prompted by the external architectural designs that house and surround us as we go about our daily routines, or the little red badges and the blinky buttons that are nudging us to act now in our digital lives, we humans remain inside of what we make. We're the progenitors of our own persuasion. But the breadth and the depth and the scale and the speed of the influence exerted by the digital and analog tools we're creating are shifting dramatically. As is who, or what — which, as you'll hear later, will inevitably bring us back to who — gets trained to prompt and persuade and change our behaviors on and offline.

So large language models are rapidly advancing, and they make it possible for machines to influence both our online behaviors and our offline behaviors across a range of interactions that impact our wellbeing. But what happens when we haven't opted in, when we don't know what behaviors are being influenced, and we can't actually track back to why we're doing the thing we're doing in the first place? So OpenAI's GPT-4 apparently actively deceived a human in a test to complete a task. And a recent Stanford study, from the Institute for Human-Centered Artificial Intelligence in 2023, noted that persuasive AI could be used for mass-scale campaigns based on suspect information, used to lobby, generate online comments, write peer-to-peer text messages, or even produce letters to editors of influential print media.

Now, since there's no Hippocratic oath or ethical mandate for behavior designers yet, how can we stay aware of who and how and what new designs and actors are persuading us to behave well? And what design choices enable us to stay willing, aware, and active participants in our designed lives?

Welcome. My name is Hildreth Englund. Welcome to Media Evolution. I'm the head of research and curation here.
And today, we are going to be exploring the futures of designing tools and contexts to influence the patterns of human behavior for our own wellbeing, for the planet's, and for that of our non-human kin. We have four amazing speakers today who are going to help us unpack some of the challenges and the opportunities in designing for thriving in our urban spaces, at work, in our institutions, and in our online lives. And we'll dig into the ethics of persuasion, which are not as dark or as light as some critics might suggest.

So we have Eva Westermark joining us, a partner at Gehl in Copenhagen, an urban planner and architect who's going to help us understand how place-based wellbeing can be equitable and sustainable for everyone, and dig into the middle spaces of behavior, the literal spaces. We have Katarina Schell, chief psychologist at AbleMind, who's going to discuss designing technology tools to help employees be well in their workplace. We have Michael Rosenberg joining us, a senior product lead at ustwo, who's going to discuss how designing persuasive products and services, including those that use AI, can be done safely and ethically. And we have Salah Westerstrand joining us, the founder and AI ethicist at Harmless, who'll discuss building more ethical, values-driven AI solutions for business and society.

So much has happened with AI and large language models, and with the ethics of how and whether we design tools to influence human behavior, since we started this collaborative foresight cycle in November. So much so that we've been a little bit deliberate today in pivoting the conversation slightly towards AI as a tool — one of the many, many, many that can influence how we behave across a range of interactions. And whether we're prompted by intrinsic or extrinsic motivations, what we'll find today is that human agency remains at the center of this conversation.
Especially as we imagine the possible, potential, and preferable futures of behavior design for wellbeing. So we're right there at the top, tippy tippy top, of these circles. This is the final seminar in our process called Collaborative Foresight, and we've dedicated a cycle to the futures of behavior design for wellbeing, thanks to the generous support from our partner at Sway Life.

Through four immersive workshops with a core contributor group of folks, including some folks in this room, we've imagined and developed and revised a series of possible, alternative, and preferable futures for Southern Sweden in 2050. The broader community participated and contributed trends and signals, and helped our core contributors explore the assumptions and the ethical implications of each future scenario. And it became clear very early on that it would be impossible to explore the futures of designing behaviors in the health and mobility and sustainability spaces — everything around our wellbeing — without also contemplating the wellbeing of the planet and its non-human inhabitants.

So today we're also releasing a book. I hope you got your copy. Did you get a copy? Can you show me? Yes. Hi. It's called The Patterns of Light and Dark. It features future scenarios envisioned by this core contributor group of designers and doctors and architects and UX researchers and city planners and academics and artists. And we have three commentaries included in the book from authors near and far, some of whom were also part of the collaborative foresight cycle.

So here's some housekeeping. This is what we're going to do today. We'll have a couple of speakers, then we'll have a Q and A where you can pepper them with your questions. Be sure to wait for a microphone before you ask so that everybody can hear you. There'll be mics running around. I'll do my best to moderate your curiosity. And then we'll stop and pause for a break.
You can get more coffee or more water to hydrate yourself, because today is a conversation about wellbeing. And then we'll come back and have some more conversation. And then we'll finish up with some connections and more coffee or water, your choice.