I direct a brain technology group at the MIT Media Lab. We work on ways to map the nanostructure of the brain and to try to help repair brain disorders like epilepsy and Parkinson's disease. Over the last decade or so, what we've realized is that as we work at this really unknown frontier, where there is no guideline on how to build these tools and no textbook, we've been developing a number of methodologies for thinking about how to tackle truly unknown problems. Recently, we started teaching a class called Engineering Revolutions, which is a collaboration between the MIT Business School and the MIT School of Science. What we're trying to figure out is how to train our students to tackle these intractable problems. Are there methodologies? Are there strategies that could be brought to bear? Or is it always going to be unpredictable and serendipitous? Can we be more conscious and deliberate about the problems we want to solve? One of the prompts that we often start the class with is to talk about the 20th century. At a glance, the 20th century looks like a constant stream of great technological innovations: the airplane, the transistor, the laser, the microchip and all that came from it, landing on the moon, the internet. These innovations were not only very novel but changed everyday life for the better in a way that I think we all aspire to. And one question that a lot of us here at the forum are discussing is: what are the 21st century problems? At a glance, they look a lot tougher. Climate change, brain disorders that affect over a billion people, intractable cancers, clean energy. These problems seem very difficult, and a question worth asking is: why? Why do they look difficult?
There are a couple of possibilities. One is that all problems look difficult when you're in the middle of them, and that's quite possibly true. But there's another possibility, which is that some of these problems are fundamentally different in some way from problems that we tackled in the past and succeeded at. In my own field, brain disorders, taking a solution out of the laboratory and into the marketplace where it can help people, especially if it's a drug, can take a long time and be very expensive. The numbers for just one category, pharmaceuticals to treat brain disorders, are telling: a new drug costs almost a billion dollars to develop, takes over nine years to get into humans and be approved, and the failure rate is over 90%. With numbers like that, it's perhaps not surprising that many companies and individuals shy away from tackling these very difficult problems. So what is different? Let's investigate that hypothesis. If you look at the 20th century technologies we've been talking about, like the laser, the microchip, the moon landing and so forth, you could argue they were built on solid scientific foundations. If you know the laws of physics (mechanics, electrodynamics, quantum mechanics), you can know a lot about the fundamental barriers of what's possible. There are well-described laws; you can fit the laws of physics into less than one page if you try. The risk of a given project is always real, of course, but it's not because of a fundamental barrier in the nature of existence itself. We like to talk about different kinds of risk. There's always execution risk: maybe the project is not properly carried out. There could be market risk: maybe the customers or the competition aren't what you expected.
But in the end, the scientific risk, if you will, the fundamental risk, is limited by the fact that these are known, highly quantitative, and well-delimited laws of nature. For some of the problems we're talking about now, like climate, like medicine, like education, there is no concise and elegant short list of principles. We are always discovering new building blocks and new interactions in these complex systems. So, how can we reduce science risk? That's the topic for today. One very popular concept right now, and we've heard a bit about this at this conference with the Cancer Moonshot announcement, is the idea of the moonshot: set a huge, ambitious goal, and galvanize interest, maybe by coordinating teams, maybe by announcing a big prize. Basically, you're building a portfolio of different attacks on the problem, and the hope is that some of them are going to win. Moonshots work, though, as we'll discuss in a moment, only if they're well-posed. If you try to do a moonshot and the fundamental science risk is still there, you could end up wasting resources. So if you want to pursue a moonshot, you need either a very solid set of fundamental scientific building blocks to build from, or to make the acquisition of those fundamental rules part of the moonshot itself. Sometimes that's done, and sometimes it isn't. One recent success that inspired a lot of moonshot thinking was the XPRIZE, awarded in 2004 to the first private group to get into space without government financing or government collaboration. A team of industrialists and scientists was able to reach the 100 kilometer mark, a common definition of the boundary of space. Many of us, myself included, found this very inspiring.
However, and this is not meant to trivialize the achievement in any way, the science risk there was still somewhat low, right? We know how the laws of physics work, we know how to get into space, we know what to expect. It was a tremendous technological and engineering feat, but it was a well-posed moonshot, if you will, because the fundamentals are well understood, and so it made sense to pursue this kind of portfolio approach. Now, to see how such a prize could have been ill-posed, what if someone had tried to launch the XPRIZE in the year 1700? Back then, the laws of physics were still being worked out; calculus and mathematics were coming along but had some way to go, and aerodynamics, you could argue, had not even really begun. People in the year 1700, confronted with a moonshot, might have tried to launch hot air balloons and ended up wasting a lot of resources. You could argue that all the resources on the planet would not have gotten you into outer space in the year 1700, because the fundamental understandings were just too remote from what's needed to confront the real nature of reality. And this is not just a quaint example; you can find current examples all the time. I'll give you one. My colleague Andrew Lo, a finance professor at MIT, last year published a study analyzing the riskiness of trying to treat Alzheimer's disease. Alzheimer's, of course, is a major problem, one that in many of our countries will probably only get worse in the years to come. Being a neurodegenerative disease, it's very hard to confront; we don't even understand all the mechanisms of Alzheimer's disease. Andrew did an analysis: given the current risk, the portfolio of research you'd have to launch in order to have a good chance of a viable therapy is so large that he estimated something like $30 billion would be necessary in the initial fund.
He concluded that the private sector is unlikely to produce effective Alzheimer's therapies anytime soon. So that's an example where there's real science risk; we really need to understand more about what's happening. And Alzheimer's, being a disease that develops slowly over many years, is very hard to study precisely because of that slow unfolding. So moonshots need a plan for de-risking the science. Let's talk about a couple of ways to reduce science risk. Again, I think many of these concepts can apply to some of the things we've been talking about here at the forum, like the cancer moonshot. One idea is to take a cue from physics. In physics, there is a small number of building blocks, electrons and protons and so forth, and a small number of interactions, electrostatic interactions, gravity and so forth. You can make the list and build from it, because the list is small enough to hold in our heads while we design inventions. In many of these 21st century problems, though, there is an incredible number of building blocks and a huge number of interactions. To pick one concrete example from my own area: the human genome contains 20,000 to 30,000 genes, let's say, and they interact in very complex ways. Suppose you want to treat a disease, and you're going to screen through these genes by perturbing them, let's say, three at a time. Even if you could try a million combinations a week, going through every triple combination of genes in the genome would take on the order of 100,000 years. And that's just triples. What if four genes are interacting in a disease? What if five? For most diseases, we don't know how many genes are really interacting, because the interactions are very difficult to map.
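To make that combinatorial explosion concrete, here is a quick back-of-the-envelope check in Python. The gene count and the million-combinations-per-week screening rate are the illustrative figures from above, not real instrument specs:

```python
import math

GENES = 30_000                # upper end of the 20,000-30,000 gene estimate
COMBOS_PER_WEEK = 1_000_000   # hypothetical screening throughput

# Number of unordered triples of distinct genes: C(30000, 3)
triples = math.comb(GENES, 3)

weeks = triples / COMBOS_PER_WEEK
years = weeks / 52

print(f"{triples:.2e} triple combinations")        # about 4.5e12
print(f"about {years:,.0f} years to screen them")  # on the order of 100,000 years
```

And the quadruples are worse by another factor of roughly 7,500: `math.comb(30_000, 4)` is about 3.4e16.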
To take a step back, one issue is that in a complex system with lots of building blocks and lots of interactions, the vast majority of hypotheses we make are going to be wrong. That's just a fact of the statistics. If you have maps of the building blocks and interactions, though, which is one way of looking at the fundamental understanding question, then you can try to reduce that risk. This is difficult to do, because when you're mapping, you often have to map things you didn't even know were important. Two quick examples. In recent years there has been a huge burst of excitement about how microbes in our gut might influence our brain or our immune system, and I can guarantee that if you had talked to a neuroscientist several years ago, the vast majority of them weren't even thinking about microbes in the gut. Or, not to pick on my own field too much: the interactions between the immune system and development, aging, and cancer, and of course the interactions between us and the outside world, how social interactions shape health, and so forth. These kinds of linkages are being explored now, but the vast majority of them were not even topics of study as recently as several years ago. So mapping has to be more comprehensive than you think it needs to be. Big data is a buzzword nowadays, but I would argue that for a lot of these complex problems you need far bigger data than you even think you need. One phrase I like to use for this is the illusion of reductionism. Many of us in science will pick a small part of the system to study because it's comprehensible, but that is only an illusion of reductionism, because we might be ignoring other parts of the system that are just as important, if not more so, given the size of the system and its interactions.
So one thing that I think is powerful about systematically mapping things is that it helps us avoid the illusion of reductionism. We want to find out what is really important, even if it means going outside our comfort zone, even if it means entering realms where the human mind struggles to comprehend it all. That might mean we need better software tools or collaborative models, but it doesn't mean we should give up on trying to confront reality on its own terms. So how can we actually build such mapping tools? Building mapping tools that go beyond the information we already think is important is really difficult, and I'll give you one example of a technology our group is working on to try to confront some of these issues. In medicine, of course, one big quest is to understand how cells in the body change in disease. If you can do that, you can try to pinpoint some of those changes and use them as therapeutic targets. Can you kill off a cancer by aiming a dart at a specific target? Can you treat a brain disorder such as Alzheimer's by blocking a degenerative pathway? Tools like microscopes have been invaluable over the years for looking at structures in cells, but microscopes struggle when it comes to imaging large objects, like a tumor or a brain circuit, and mapping the pathology within. So recently we started to take some inspiration from the kind of chemical you find in baby diapers. A baby diaper contains a polymer that starts out very compact, very dense, and then, when you add water, which babies do in their own idiosyncratic way, the polymer swells and becomes larger. What we decided to do was to take tissues, like tumor biopsies or brain samples, and run this process right inside the cells.
Here, for example, is a specimen of brain tissue. We infused it with these chemicals to form the polymer, then added water to swell it, and this is that brain tissue a few hours later: we've swollen it a hundredfold in volume. Because the polymer chains are so tiny, you can move the molecules apart from each other to the point where you can identify them and map them. What we're finding now is that this is going to be very useful. For example, we can start to map brain circuits and how they change in degenerative conditions like Alzheimer's disease. We're starting to look at the structure of the genome: many of us think of the genome as a line, but in reality it's coiled up into a very compact 3D complex. If we can map how it looks in three dimensions, and how one gene might interact with another gene very far away along the sequence, that might help us understand the genome not as a line but as a 3D structure, if you will. And we're looking at a wide variety of tumors, trying to map the molecular changes in them, hoping we can find distinguishing features of tumor cells that help us target them selectively, without of course harming normal, healthy cells. Now, it's very important to point out that mapping is not everything. Mapping will suggest hypotheses, but those have to be tested for causality through perturbation and so forth. I'm not trying to pretend that maps are the entire solution, but maps are a way of freeing ourselves from biases and suggesting novel hypotheses that get us out of our local minimum and maybe enlighten us about other kinds of interactions. And beyond mapping physical systems, we might also want a way of mapping our thoughts, our ideas. So the second strategy I want to talk about today is how you can map ideas.
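As a side note on the numbers: a hundredfold swell in volume corresponds to only the cube root of that, roughly 4.6-fold, along each axis. A two-line sketch, where the 100 nm starting separation is just an illustrative figure:

```python
volume_factor = 100.0
linear_factor = volume_factor ** (1 / 3)  # cube root: ~4.64x per axis

# Illustrative: two molecules 100 nm apart before expansion
before_nm = 100.0
after_nm = before_nm * linear_factor
print(f"linear expansion ~{linear_factor:.2f}x; 100 nm -> ~{after_nm:.0f} nm")
```

That factor is enough to push spacings like this past the roughly 250 to 300 nm diffraction limit of a conventional light microscope, which is why the swelling makes previously unresolvable structures visible.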
Here I take great inspiration from an astrophysicist, Fritz Zwicky, who back in the 1930s came up with ideas about the nature of matter in the universe that are only now being confirmed. How did he do that? How did he think up these concepts long before the current technologies we have for peering into the deep recesses of space? He came up with a strategy he called morphological analysis, which is a way of systematically mapping all the ideas that could be possible, to make sure you're not missing any. Maybe this is best shown by an example. Let's try to think of every possible system that can generate energy. We could start by making a list: there's gasoline, there's the sun. But how do you know you have the complete list? What Zwicky would tell us to do is to take the space of all ideas and split it into subsets, so that the subsets sum up to equal all the possibilities. We might, for example, split it into renewable and non-renewable. We've made a little progress, and the two categories we've come up with still tile the space of possibility. And then, of course, there's a bit of a game here: can you make the most interesting splits? We can take renewable and split it into solar and non-solar, and already things are getting interesting, right? What's a non-solar renewable energy? In my group we actually practice these kinds of exercises, and we started thinking: well, as the earth turns, the moon tugs on the oceans and causes tides, and as the tides go in and out of harbors and bays, maybe you could capture that energy. The nice thing about the Zwicky method is that it forces you to leave your comfort zone and to think of new ideas that you might not have had originally. I like to call this the tiling tree method, because each row tiles the space of possibility, but the whole thing looks like a tree.
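The tiling tree can be sketched as a simple nested data structure. The categories and example projects below are illustrative, not a complete tiling:

```python
# A tiling tree: each internal node's children partition ("tile") its space.
# Leaves are concrete candidate projects. All names here are illustrative.
tree = {
    "energy sources": {
        "renewable": {
            "solar-driven": ["photovoltaics", "wind", "biomass"],
            "non-solar": ["tidal power", "geothermal"],
        },
        "non-renewable": {
            "fossil": ["gasoline", "coal", "natural gas"],
            "nuclear": ["uranium fission"],
        },
    }
}

def leaves(node):
    """Walk the tree and yield the candidate projects at the branch tips."""
    if isinstance(node, list):
        yield from node
    else:
        for child in node.values():
            yield from leaves(child)

print(sorted(leaves(tree)))
```

The discipline is in the splits, not the code: at every level you must convince yourself the children really do cover the parent, which is exactly what surfaces gaps like "non-solar renewable."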
The very tips of the tree branches are actual potential projects, things you could try or run a pilot study on. The third and final strategy I want to talk about is engineering hybrid institutions. When you're tackling these very complex problems, and you have to leave your comfort zone and bring in ideas that might be foreign to your own skills and instincts, one strategy is to avoid the limitation of being a fixed set of people, and to work with people who have different temporal and goal constraints. Universities, such as those on the left here, have certain constraints and certain advantages: they can look at long time horizons, but they don't scale very well. For-profit companies, and this is just from the forum website, can scale very well; they can reach very large deployable sizes, but maybe they're less open than academic settings and less able to collaborate in these styles. And finally non-profits, which are getting to be quite interesting: they can scale better than academics can in some ways, and they can have different time horizons than the other two categories. So one thing we're trying to do at the MIT Media Lab, where I'm a professor, is to think about new kinds of hybrid institutional designs. Are there ways to collaborate freely among startup companies, large for-profit companies, and non-profits, to make sure we never run out of the skill sets or approaches we need to bring to the table? I would like to think that at the Media Lab we're training people to be what I call collaboration architects. These are people with anti-disciplinary tendencies: they've trained in multiple fields but don't feel confined by them, and they have the social skills to connect people across different boundaries and different incentive structures, such as those we talked about on the previous slide.
They also have the ability to take a step back, connect dots, and find serendipitous connections, which I think is very important. What we've found through lots of projects within our group and our collaborative network, including robotic surgical devices, nanofabricated probes, ways of doing tissue engineering, and so forth, is that if you do take a step back and get a good grounding in certain kinds of underlying knowledge, these projects can often move quite fast, because once you have a plan and you see the principles, it's sometimes, not always, straightforward to go for the goal. So going forward, I would like to encourage all of us to think about how we confront these kinds of difficult problems in our own areas, and whether there are ways to seek out lower-risk opportunities that allow us to de-risk some of these moonshots, so that we can confront major problems in ways that are effective and efficient. Thank you very much for your time.