What kind of world do I want to live in? I think about this question a lot. For our generation, and specifically for my group of people, which is refugees, the circumstances might dismantle any vision of the future that we have. You're trying to rebuild, you're trying to make a future for yourself, and then a climate-related disaster comes and you start again. It's not about how it's affecting you now, it's about how it's affecting you your entire life. The first step to understand is that we're all a part of it. None of us are going to be left out by the crisis. We're at a stage where if we don't act now, really there won't be very much left. There are generations that will never see certain things that we grew up seeing in real life. We have to start treating this like the emergency it is. To achieve the 17 Sustainable Development Goals, we have to go from an intention to a serious commitment. Business leaders really need to rethink how they conduct their business and invest in creating systems that are climate-friendly. The action I would like to see is accountability. Structures being put in place where countries aren't just asked to do something, but they're held accountable for the decisions that they make. There has to be that strong collaboration between government, between corporations, between youth activists to drive change forward. The world I would want to live in is a world where imagining the future is not a privilege. I want to live in a world where people do not give up on hope. Hope that a positive change is possible. The fact that you're listening today means that you are willing to make a change. My name is Quentin Palfrey. I'm the president of the International Digital Accountability Council, and I'm pleased to welcome you to today's session on well-being with precision consumption. We have a terrific panel for you here today. I really look forward to this exciting conversation.
So, non-communicable diseases, also known as NCDs, such as heart disease, cancer, chronic respiratory disease and diabetes, are the leading cause of death worldwide and represent an emerging global health threat. Deaths from NCDs now exceed all communicable disease deaths combined, killing about 41 million people each year, equivalent to seven out of 10 deaths worldwide. Even worse, the COVID-19 pandemic has demonstrated the massive threat that an unhealthy population can pose to modern societies. The good news is that advancements in technology, data and personalization now offer transformational potential to tailor products and solutions based on individual biological profiles to achieve sustained well-being. So imagine a world where every consumer, every patient has access to personalized health information to help manage his or her own nutrition, physical activity, sleep, stress and disease management, enabled by precision technologies. This powerful combination of personal data, artificial intelligence and the Internet of Things can really help us to tackle this serious challenge. But new frameworks are required to navigate some of the challenges that are related to personal health information, especially equity and data sharing. So in today's session, we'll explore the role that precision technologies can play in improving well-being, and the operating and governance models that we need to navigate the challenges related to personal information, equity and data sharing. We have a terrific panel here today, and I'd like to take a moment to introduce some of my colleagues on the panel. Ali Mostashari is the co-founder and CEO of LifeNome, which is a precision health AI company that leverages biological, physiological and behavioral data to hyper-personalize needs assessments and wellness interventions across a number of industries. Paige Motes is the head of global sustainability at Dell Technologies. And Dell is, of course, a global leader in this space.
Paige is the head of corporate sustainability and is responsible for strategic vision and stakeholder engagement. Her work includes deep collaboration to advance sustainability programs, to foster innovation, and to drive engagement with Dell Technologies' global enterprise and with key external partners. And I'm particularly pleased to be here today with Stefan Verhulst, who's the chief research and development officer at the GovLab at NYU. He's been a pioneering leader in this space for many years, and I had the opportunity and the pleasure of collaborating with him on some other projects. He's the co-founder and chief research and development officer at the GovLab, and his research considers how advances in technology and science can be harnessed to create effective and collaborative forms of governance. So it's a real treat to be here with the three of you today, and later with Erica Alice Sandrini, who we will introduce as part of the Firestarters. But let's begin with you, Paige. Dell's done some great work on the future of connected living, and I want to give you an opportunity to describe what you're seeing in terms of the advancements and the evolution of new capabilities like precision technologies, and to help us understand how we're likely to see humans and machines interacting in the future in this space. Well, thank you, Quentin, and I think you said it well that the world was already headed in this direction, but I think that COVID-19 has forced a real change to the way we work, the way we interact, the way we approach key issues, especially given how it's further exposed concerns related to vulnerable populations around the globe. And if there's one silver lining we've seen, it's the emergence of a connected, intelligent world: things like 5G, advanced computer science elements like artificial intelligence, machine learning, virtual and extended reality. All of these and many more technologies are helping us do more than we previously imagined.
I think the goal here is to really leverage emerging technologies and rapidly accelerate that human-machine partnership. Primarily to drive higher levels of productivity, but also that work-life harmony: things like taking over repetitive tasks, allowing for modeling to occur in a more rapid way, practicing before doing, delivering much more at the exact point of need to solve the exact right problem. But in order to achieve this future, humans must also lead with caution. We have to think about things like data privacy, algorithmic inequality, and all of those considerations to ensure that these technologies are operating truly in service of all humans and not some humans. And so that's the place where that human-led, technology-enabled conversation needs to come into play. But the future of connected living promises extraordinary things. Data, huge advances in processing power, software, all of these things are really converging to create a new chapter in this conversation around technology-led human progress. And precision technologies are great examples of that convergence. Thank you so much, Paige. Ali, I want to bring you in here. As a technology startup that's using AI and machine learning to improve health, what are the roles that precision technology specifically can play to improve well-being? What are you seeing? Thank you so much for the time. And building on what Paige was saying, there's the issue of technology, there's the issue of inclusivity, and the two go very much hand-in-hand. But the main advances really are in the fact that we now have DNA-based technologies that allow us to understand how a person might process different nutrients, react to different kinds of stimuli, or even be able to predict the onset of chronic diseases way before they happen, with 75% to 80% accuracy if we have enough data. So we have genomics, we have microbiomes, we have advances in wearable data.
There's a lot of data right now that wasn't available 10 years ago, and that really drives the ability to pinpoint not just the needs of an individual, but also what interventions, what products and what services would contribute to the individual's health and well-being. One of the main things that has emerged in recent years is people's distrust of companies to keep their data private. So the emergence of other technologies, such as blockchain, has provided us with the ability to think about new ways of storing data. And I think Stefan is going to talk about governance models as well that, combined with these technological advances, will allow us to create the trust that's necessary for individuals to be able to get access to this data and get access to the amazing benefits that precision health has on preventative well-being. So that's a wonderful transition. It's just the perfect transition to Stefan. Stefan, I'd love to get into some of those governance models. What do we need to do here to promote responsible data stewardship, to put in guardrails in terms of governance and oversight, to make it possible for us to unlock the transformational possibilities of these new technologies while retaining a sense of trust and making sure that we're facilitating responsible data use? Yes. Thanks so much for having me. And as Ali was already referring to, we not only need to innovate in how we access data for precision insights, but we also need to innovate in how we govern the way we go about this. And I think it's important to take a step back and reflect on what is needed from a data perspective in order to actually deliver insights that are customized and that are, to a large extent, precise. And obviously, first, it needs an aggregation of data, meaning one data point won't give you the insight that is needed.
Quite often we need big data and small data together in order to provide the insight that is needed to be precise and to be customized. And that quite often means that we need to have access to data that one doesn't have. And so that means automatically that we need to set up new partnership models, which we call data collaboratives. And those data collaboratives need to be governed in a way that indeed provides trust in how the different parties and partners will either provide access or have access to the data. So that's one challenge, where we need to have new organizational governance structures. And that could be, for instance, an oversight board, it could be an ethics board, it could also be a trustee board for that matter, that really tries to oversee how the parties and the partners that provide and access data are acting. The second area, very briefly, where we need governance innovation is in the reuse of data, because we not only need access to data that others might have, we also need to reuse data that was collected for one purpose for another purpose. And that turns out to be, from a governance point of view, a challenge, because most areas in data governance start from the purpose specification principle, i.e. you can only use the data for the purpose for which it was collected. And surprise, surprise, when we talk about precision, we quite often talk about reusing the data. And that's where we need, from my point of view, a new way of going about this, which we call data stewardship. And specifically, we need a new profession, i.e. chief data stewards, that can really navigate how we reuse data in a responsible way, but also in a fit-for-purpose way, so that we don't start to reuse data for all purposes, but only for the purpose which we ultimately have specified as a reuse purpose.
And that requires a more sophisticated way of thinking about governance, which, from my point of view, requires a new profession, which we call data stewards, Quentin. That's a really great way of framing it. So I had the opportunity, the honor, to work in the White House under Barack Obama. And one of the initiatives that we were developing about a decade ago was the Blue Button initiative. And part of the goal of that was to give patients access to their own health information and try to de-silo some of the pockets of information related to an individual patient, in order to empower them to work with their providers across the lifespan to be able to manage some of the data here. And it does seem like there are governance interventions that are required to unlock the potential of these data and to allow for the data sets to be matched. At the same time, there are other threats. There's the threat of data misuse, and there's the threat that the fear of data misuse, of violations of privacy, of cybersecurity, and of general lack of coordination can undermine the willingness of patients and providers to participate in these systems. So governance is needed, I think, to make the affirmative data sharing possible and also to protect it. So, Paige, I want to bring you back in here. Tell me how you think about what we can do to architect some of these kinds of solutions and how we can overcome some of the challenges that Stefan's talking about. Well, I think, you know, developing a solution with the real problem in mind is half the battle, right? And sometimes all of us get into a perspective where we want to lead with the technology and figure out a way to retrofit that technology to the problem.
And what we've found, and a lot of folks have found, is when you really, really understand the problem, when you've really engaged on the front lines, you really are going to do a much better job of thinking through all of these pitfalls and orchestrating a solution that is much more effective. You know, and it has a lot to do with partnerships, right? Because I think that when you're trying to create a partnership from a silo, you're not going to be as inclusive in the outcome. An example of that in the public-private space, similar to what my fellow panelists are identifying: Dell has worked with local, state, and national level governments in India, along with the philanthropic wing of Tata, called Tata Trusts, to create something called Digital LifeCare. It's precision-led technology around healthcare, specifically for the non-communicable diseases that were spoken about earlier. And really understanding where the vulnerable populations are, really understanding the problems that healthcare workers and doctors and nurses were experiencing, really understanding how it's not just about screening for non-communicable diseases, but the entire continuum of care and what the hurdles are through each aspect and each phase of that: screening, referrals to specialists, diagnosis, creation of a medical intervention plan, and then management of that plan. All of that is for naught if it's hard for the individual, the patient, to manage this, you know, their diabetes or hypertension, long-term. So, you know, I think one way in which you can overcome some of these pitfalls is to really, really map out who are the most vulnerable populations, what are the most salient concerns relative to the problem statement, and work in a public, private, multi-stakeholder analysis to get to the right solution.
And what we found is when we did that (this has been going on since 2013), now tens of millions of people in India have signed up for this, and with this partnership, through the broader healthcare services offered in India, it allows for many more people to be able to leverage this capability. People that are not near the city center, but out in rural villages, that otherwise may not have had access to healthcare. But had we not gone and visited those villages and understood where the concerns were, it would be very hard to create a solution that now is at scale. Thank you, Paige. Ali, I want to bring you in on something that Paige mentioned a moment ago, which is the question of algorithmic discrimination: how we make sure that as we scale these technologies, we do so in a way that's equitable, in a way that doesn't perpetuate and deepen some of the civil rights challenges that we see in this data space. I just sort of wonder how you think about that. This is a very tough and challenging issue. One of the issues that we actually have is that for health data, we don't have representation. Basically, a lot of the health data we have is from white males within the population, and a lot of health policies are based on that. So the first point is that an AI algorithm only learns based on the data available. And if we don't have data representation from the populations that are actually at risk and have a lot more vulnerability within our data sets, it's much harder to create policies or structures or science that takes their needs into consideration. So the first thing is to make available the basic data that is required for personalization. That data needs to be representative of every single population that's within our societies. And it needs to include not just biological data or physiological data, but data on equity and access, and an understanding of all the different societal issues that individuals are confronted with.
So once we have a data set or data sets that are unified, that are representative of the population and have measured many, many different aspects of our being, we can then train the AIs on something that's more representative, something that's more equitable. Otherwise, we will end up with biases, no doubt, if we stick to the populations that we have. To give you an example, there's a project called All of Us by the U.S. government to collect data on genomics, akin to what we have with the UK Biobank in the United Kingdom. And that project is striving to get minority participation, and it's very hard to get that, right? Because people don't trust giving their data to the government. So there are so many different aspects that need to be overcome for that equitable access that it's a huge challenge. It's a very, very important challenge. Thank you, Ali. I mean, we've talked a lot about trust here today. It's one of these things that keeps coming up. And I wanted to, Stefan, bring you back in to ask you to expand a little bit on how you think about data stewardship, accountability, and oversight. What guardrails can we put in place to ensure that these data are being handled responsibly, particularly as we scale up? The organization I run, the International Digital Accountability Council, works as a watchdog to identify some of these risks and challenges and to try to empower patients and other consumers with information about how their data are being used. But at scale, how do we think about creating policy solutions that enhance trust as we go further and further into this data-rich world? Yeah, not an easy question, but obviously, I think, every day, frankly, we are getting smarter about how to do this, because we're actually learning from practice as well. And I think there are a few takeaways from at least recent efforts with regard to your question on guardrails.
And I think the first one actually goes back to Quentin, when you were talking about the Blue Button, where you identified the need to prevent misuse but also to prevent missed uses. And I think being crystal clear on what you're trying to prevent, but also what you're trying to do with the data, actually instills trust, and it also shows that access to data is not just for any purpose, but for a well-defined purpose that can benefit people. That's the first really important insight: when you define and design guardrails, you also have to do so in a way that doesn't generate opportunity costs for doing good as well. And I think that's a key takeaway. The second one is really about thinking about risk across the data lifecycle, meaning clearly we are talking about access to data, but risks exist at the collection stage. They exist at the processing stage. They clearly exist at the access and sharing stages. And then obviously, I think Ali was already referring to the risks that exist when you actually start analyzing some of the data that might not be representative. But then also there are risks in how you actually start using the data. And so having a data lifecycle approach to the risks that exist is an important one, because too often we get obsessed about one element of the data lifecycle and ignore the full data lifecycle risk equation. And so having a good risk assessment across the data lifecycle is a first start.
And that also means, by the way, that you need to understand why the data was collected in the first place and who determined that. Because, coming back to some of the discussions about equity and inclusion: who determined why the data was collected in the first place, what were the questions, and who defined the questions for which the data was collected? That in itself is already quite often an issue of equity, because too often the ones that can benefit are not part of actually defining the question for which the data was collected. And I think starting there would already instill trust in how we will use the data later on, because they were part of defining how the data was collected and why it was collected in the first place. And then the third thing, and then I will stop, Quentin, because this is obviously a multi-layered question that you have: how do we get a social license for reusing the data once it has been collected? Because, as I already mentioned, quite often we are using data that was collected for one purpose for another purpose, which quite often means that we don't really have informed consent, and not really at least an understanding among the data subjects of how the data will be reused. And that requires an additional step, from my point of view, which is a step towards acquiring social license. And here at the GovLab, for instance, we have created a citizens' assembly, which we call the Data Assembly, to really start deliberating about what people's expectations are with regard to reuse of the data, and how we can inform those re-users by people's expectations, but also co-creating the guardrails, as you described, in a way that actually instills trust, because they were part of the process in determining the conditions under which data was accessed. So these are just a few ideas, Quentin. There's a lot more, of course, to be done here, but I think we're getting smarter about how to do
this, but that doesn't mean that we have all the solutions in hand yet. Thank you so much, Stefan. So we're approaching the end of our session. I want to give a lightning-round opportunity for each of the three of you to just say one thing we need to get right as we seek to scale this up. I'll give you each a last word, and maybe we'll start with you, Paige. I think I'll build on what Stefan said, but taking it broader, to technology overall, not just data: the people that are designing these programs need to be more diverse and inclusive themselves. That will make all the outcomes better. Representation matters, absolutely. Ali, your last intervention here for the time being. Particularly as it regards the private sector, I think the shift from the traditional mindset of competition to one of collaboration for human health is extremely important. Human health, unfortunately, does not follow the logic of market competition and market share; in order to achieve it, we really need to collaborate together. There's still profit to be made through that collaboration. This idea that competition is the only way to profit is actually a kind of fallacious idea, and actually data sharing can grow the pie, can enhance the things that can be done through collaboration. So my takeaway is collaboration, as a shift in mindset with regard to private enterprises. Wonderful, thank you. And Stefan, a last word from you: what do we need to get right here? Well, I will build upon Ali: data collaboration is the name of the game, from my point of view, but it is not easy, and we really need to work towards having data collaboration become more systematic, going beyond the pilot; sustainable, really thinking about what's the cost and benefit structure; and, more importantly, responsible. If we had systematic, sustainable and responsible data collaboration, then we would actually be able to scale the efforts that we discussed today, because these are the three ingredients that quite often are missing. Thank you,
thank you. So that's it for our web stream. I want to thank all three of you, Paige, Stefan and Ali. You've given us some great perspectives. I want to thank you all for joining, and also, more importantly, for all you're doing to leverage technology and personalized data to advance well-being. This is an important piece of the work that the Forum is doing here. So for our Forum partners, please stay on and we'll have some further discussion with you. But thank you all for this great discussion.