Welcome to our forum today on collective intelligence. As you'll see on the screen, this is a topic the forum and MIT have been preoccupied with for some time. We ran an event in 2002 at which the populist cyber-visionary Howard Rheingold spoke about what he called smart mobs, and as many of you probably know, Smart Mobs is also the title of a book he wrote. He was clearly engaging in a discourse about our topic today, and as all of you know, it's been an ongoing preoccupation for people in technology for some time. My primary task today is as a moderator and an introducer, but I will try to keep the conversation flowing, although I think our speakers will have no problem with that. As always, the first part of our presentation will involve a discussion among the panelists, and the second half of our time, if not a little more, will be devoted to questions and answers with the audience. I hope you're already prepared to ask compelling and serious questions; those question-and-answer sessions are almost always the liveliest, and in many cases the most interesting, parts of our forums. It is often somewhat embarrassing to us when people speak from the audience in ways that indicate our audience contains people at least as expert as our panelists, but that's one of the reasons the Communications Forum, I think, has been successful over the years. I want to say one other thing before I introduce our panelists, which is a simple reminder, directed partly at the panelists, though they know this message, but also at the audience. One of the deep missions of the forum is to try to speak to what we might think of as a literate citizenry. The speakers who come to the Communications Forum are often very distinguished experts in specific fields, and they don't always obey us.
But we always ask them to speak in a common literate language, to avoid the jargon of specialized disciplines insofar as that's possible. My hope is that not just our forum today but future forums will aim to honor that idea of a discourse of literate citizens. It's actually more difficult than one might think for many of us in the high-tech community to do that, not to mention the business community, which has its own special language. But I think it's a mission worth embracing, and I hope that you in the audience, as well as our panelists, will try to do that. I think my most sensible course would be to introduce all three of our speakers at once and then get on with the real work of the forum. We're very fortunate to have three very interesting speakers today. I'm going to move in the direction of the folks who are closest to me on this side of the table first. Thomas W. Malone is the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management. He's the founder and director of the MIT Center for Collective Intelligence and the author of a book that deals with collective intelligence in certain ways, The Future of Work. He has published over 75 articles, research papers, and book chapters, and is an inventor with 11 patents. Next, Alex "Sandy" Pentland is the Toshiba Professor of Media Arts and Sciences at MIT and the director of the Media Lab's Human Dynamics research program. He's the founder of more than a dozen companies, research organizations, and academic communities, and is currently the founder and faculty director of MIT's Legatum Center for Development and Entrepreneurship. Finally, Karim Lakhani is an assistant professor in the Technology and Operations Management unit at the Harvard Business School, where he studies distributed innovation systems and the movement of innovative activity to the edges of organizations and into communities.
Lakhani earned his PhD in management from MIT in 2006, so we're happy to have him return to his home ground. I'd like to begin by asking Tom Malone to kick the discourse off by giving us a kind of overview of the whole category of collective intelligence. From there we'll proceed to a conversation in which each of the panelists will describe their own work and their own research operations concerned with collective intelligence, and then we'll turn to certain other questions. Tom. Thank you. Actually, I want to start by seconding two things David just said in his introductory remarks: about taking advantage of the intelligence of the audience, and about the benefits of a literate citizenry. You didn't explicitly say it when you were talking about those things, but I think those two things are precisely what collective intelligence is about. That is, one of the goals of this field is to help us understand better how to take advantage of all the intelligence and knowledge of everybody in this room, not just those who happen to be sitting at the front of the room. And part of how that happens is through conversations among what you might call a literate citizenry. So, to give an overview of what collective intelligence is in the first place, let me start with the definition I like best for collective intelligence, which is a very broad one. I define collective intelligence as groups of individuals doing things collectively that seem intelligent. Now, the use of the word "seem" there is like the US Supreme Court's definition of pornography: you know it when you see it. It's hard to give a formal, precise definition of either intelligence or pornography, but for the most part we have a general sense of what is and isn't either one. With that broad definition, it's clear that collective intelligence has been around for a very long time.
It's been around at least as long as humans have, because human families, armies, countries, and all sorts of other groups of humans exhibit varying degrees of collective intelligence when they act together. It's also important to realize, by the way, that they sometimes exhibit collective stupidity as well. And I think one of the most important issues for this field is to recognize the differences in the conditions that lead to collective intelligence versus collective stupidity. So in that sense collective intelligence has been around for a long time, but in the last few years there have been some very interesting examples of a new kind of collective intelligence. One of my favorite examples is Google. Now, by Google here I don't just mean Google's technology, or even just Google the company. I mean the entire system of which Google is a part: the system that includes millions of people all over the world creating webpages, linking those webpages to each other, and the Google technology that harvests all that knowledge and gives us amazingly intelligent answers to the questions we type into the Google search bar. That, I think, is an amazing example of collective intelligence, intelligence that doesn't depend just on computers or just on people but on the combination of literally thousands of each. And it's an example of a kind of intelligence that never existed on our planet before; in a certain sense, there was never anything as intelligent as the Google system I just described. Now, at the opposite extreme of technology, I would say that Wikipedia is also an excellent example of collective intelligence. In the case of Wikipedia, what's amazing is not the technology itself. The technology is nice, a kind of modern, very robust wiki software tool. But what's really amazing about Wikipedia is not the technology, it's the organizational design.
What I think is really amazing about Wikipedia is that they have invented an organizational structure, an organizational design, that lets thousands of people all over the world collectively create a very large and very high-quality intellectual product with almost no centralized control, and, by the way, with almost all of those people being volunteers; they're not even being paid for what they're doing. To me, that is a truly amazing invention of an organizational design, not of a technology. So I think that Google and Wikipedia, and things like InnoCentive that you'll hear more about in a few minutes, are just the beginning. I think they're just the beginning of whole new classes of intelligent entities that we are going to see more and more of over the coming decades on our planet. And if we want to predict or understand or take advantage of all those things that are happening, I think we need to understand the possibilities at a much deeper level than we do so far. That's the goal of our Center for Collective Intelligence. In fact, the core research question that we pose for ourselves, and that I think in a sense lies at the heart of this whole field, is the following: how can people and computers be connected so that, collectively, they act more intelligently than any person, group, or computer has ever done before? How can we connect them so that they're smarter than anything that's ever existed on the planet? Now, in a way, that question is analogous to what you might think of as the core question of the field of artificial intelligence, which you could say is: how can we design computers that are as intelligent as, or maybe more intelligent than, humans?
That's certainly an interesting and important question, one to which huge amounts of resources have been devoted for decades, with some success, but I think with less success than many people in that field or outside it would have hoped for or expected. What I would suggest today is that the question I've just outlined, the core question of collective intelligence, is an important and severely understudied question. It's not a replacement for the artificial intelligence question, because it's still useful to make computers as intelligent as we can make them. But I think we've spent way too little time and way too few resources trying to understand how to take advantage of both computers and people at the same time. If you're an artificial intelligence researcher and you write a program to solve a problem, and a person helps to come up with the solution, that's kind of cheating. But why should that have to be cheating? Why shouldn't we take as much advantage as possible of the smart things humans can do, and at the same time as much advantage as possible of the smart things computers can do, and put them together in ways that let us do things that are smarter than anything either could do alone? That, I think, is the core question of the whole field of collective intelligence. And now we're about to hear some more examples from the other two panelists, and I'll tell you more later about what that might look like from our work in the center as well. Real-time management here: I can testify that Sandy was preparing his slides as we were sitting here. Are they ready? No, I finished just as you finished. But we have to get them on the screen. Real-time preparation. So I want to start off with something that is old-fashioned in the world of collective intelligence, which is that we as humans have, for millennia, gotten together in groups because groups are supposed to make us more intelligent, right? That's the ostensible motivation.
But we also know that when you get people together in groups, you get lots of problems. You get polarization. You get groupthink. You get things like that. And we know that as we make those groups larger, things get worse, to the point where any sort of large organizational bureaucracy is the butt of jokes. I mean, Dilbert is an endless source of amusement because of our apparently limitless ability to generate collective idiocy. So the first question for collective intelligence might be: how can we avoid collective idiocy and just break even? And if you've ever worked in large organizations, you know I'm not kidding. That'd be my goal. Yeah. So what I've been doing with my students, many of whom are here today, is developing something called Sensible Organizations, which tries to move us from the essentially ad hoc way we've organized ourselves and come to decisions toward something more based on data, science, and mathematical modeling. And I brought a bunch of papers that maybe we could give out. I don't know, I hate to disrupt things. But let's do that. The key thing is that in the modern organization we've been trying to manage things by looking at email and memos and org charts. But we all know that the most important communication is face-to-face. That's why you're here. That's why people fly long distances to conferences. There's something much more that happens face-to-face, and typically that something is the delicate, sensitive, contentful information; all that other stuff is just organizing those contentful discussions. So, until just recently, the stuff that really matters in every organization never made it into any memo, never made it into any digital store. It's essentially something that can't be looked at and can't be managed. The best we've been able to do is ask people survey questions and sociometric surveys.
And sometimes you put a bunch of PhDs in the corner to take notes on what happened. But essentially, most of what goes on is invisible, which means that we can't manage it, we can't organize it, and we can't tell what works and what doesn't. The thing that's changed in the last couple of years is that now you can measure this face-to-face, this missing contentful stuff, in real time, and you can potentially use it. There are many ways to do it: you can do it with cell phones, you can do it with cameras. The particular approach we've taken is to develop a badge, a name badge that you wear. It doesn't record your voice, it doesn't record your words. It does pay attention to who you're talking to and when, and whether you're nodding your head while you're talking to them: something about your body language, your tone of voice, who you're talking to, but nothing more. And it does this in some technical ways that I can get into if people are interested. But essentially what it's recording is that stuff you get when you see somebody across a hall, when you see people in an animated conversation. What does it mean to be animated? Well, using this sort of thing, you can measure that. As an example of what we've done, we've gone into organizations. This is a German bank, something we did with Tom's group. It has five different departments, and it's a group that plans ad campaigns for mortgage products in Germany. On the bottom you see all of their email traffic during a day, and on the top you see the face-to-face conversations; the thicker the line, the more conversation there was. So here they're planning, and management is talking to development and all that good stuff. And then they deployed it and, uh-oh, it didn't really work. And you see the customer service people in emergency conversation with the support people because it's all going to pieces.
Familiar? Well, this is the dynamics of that organization during an entire month. You can see the email go away on Sundays, you can see the pattern of communication between people, and you can see that the email doesn't match anything like the face-to-face pattern of communication. This is the information flow within the organization, and it's the first time people have ever been able to see all of the information flow within an organization. So what happens? Well, you can analyze this data, and it turns out that the thing that matters in a modern organization is not email, not memos, nor the face-to-face stuff alone: it's the combination of the two. The reason is that email and face-to-face trade off against each other if you're not nearby. But if you are nearby, you get something that's essentially gossip; people use email as gossip. So there are different modes of using electronic and face-to-face communication. You know this, it's intuitive; but now we can actually measure it. And what we find is that when we put the face-to-face stuff and the electronic stuff together, we can predict things with fairly good accuracy. For instance, we can tell when someone's getting overloaded by looking at these two channels, and we can do something about it, like saying: uh-oh, you're in a bad situation, let's put some more resources on it right now, not a month later when you threaten to quit. We can also measure the quality of group interaction, in terms of its perceived productivity, automatically, by looking at the pattern of interaction. Typically for this type of group, which is largely a creative group, if there are people who are bottlenecks, you've got a problem. And so you can identify those and make suggestions in real time to change that network of information flow to make the quality of group interaction better. And that, of course, results in better collective intelligence.
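The bottleneck idea can be sketched with a toy calculation: given who-talks-to-whom counts from both the face-to-face and the electronic channel, flag anyone who carries an outsized share of the total communication. The names, interaction counts, and the 30% threshold below are all hypothetical illustration data, not values from the study.

```python
from collections import defaultdict

# Hypothetical interaction counts observed over one day, per channel.
face_to_face = [("ana", "ben", 12), ("ben", "carla", 9), ("carla", "dev", 3)]
email        = [("ana", "dev", 5), ("ben", "dev", 2), ("ana", "carla", 1)]

# Total communication weight touching each person, across both channels.
load = defaultdict(int)
for u, v, w in face_to_face + email:
    load[u] += w
    load[v] += w

total = sum(w for _, _, w in face_to_face + email)

# Each person's share of all pairwise communication (shares sum to 1).
share = {p: l / (2 * total) for p, l in load.items()}

# Flag anyone carrying more than 30% of all communication, a crude
# stand-in for the bottleneck scores computed from the badge data.
bottlenecks = sorted(p for p, s in share.items() if s > 0.3)
```

In practice a graph measure such as betweenness centrality would be a better bottleneck score than raw communication share; the point here is only that the badge data reduce to a weighted network that simple statistics can be run over.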
You can bring this down in other ways too. For instance, rather than think of it as a Big Brother thing, you can think of this as a personal intelligence tool that collectively produces better results. We take the badge, which has Bluetooth, so it can talk to your phone. And what it can do is give you a sense of what's going on around you, because people are often not aware of their social context; they get so caught up in the content that they lose track of the communication pattern happening around them. So, for instance, we're building little displays that use this badge and the phone, as Tami Kim is working on, to alert people to problems like groupthink or polarization or other group-dynamic problems in real time, with the hope of allowing the participants in a conversation to better manage themselves to a better result. That's a type of collective intelligence too. One of the interesting things here is that it works for distributed groups, potentially as well as it works for face-to-face groups. And the final thing I want to end with is work that I'll just allude to: you can begin to use this technology to make more formal collective intelligence. You've all heard about prediction markets, where people essentially bet on results or have other ways of pooling their knowledge to make predictions. That's an important type of collective intelligence. But a real problem with it is what I call gossip and rumor. Basically, if you get a lot of people who have a hot tip and they all bet the same way, you bias the market. If you have lots of rumor going around, you can bias the market. That's not intelligence; that's collective ignorance, collective mistakes. But you can tell when those sorts of things are happening by looking at the pattern of communication. You can tell when there are rumors happening.
You can tell when there's gossip spreading and influencing decisions, if you have a sense of the total communication going on within your organization. And we've done some initial experiments showing that by paying attention to the pattern of communication, you can generate far more reliable collective intelligence. So that's what we're up to. Sandy, I have a quick question to clarify what you've been saying. It's hard for me to understand exactly the distinction you're making between pattern and content. That is to say, if you can study the patterns, how do you know whether it's gossip or serious information, since the patterns don't have content? To do the collective intelligence part? Well, to read your... You said that you can read your graphs, or whatever is generated by looking at the information flow, and you can see when gossip is... And what I'm asking is: how can you tell? So in cases like this, what you have to do is have some history with people. You look at the pattern of decisions people make, the pattern of bets they make over time, and you can correlate that with the pattern of communication. And what you find is that certain patterns of communication correlate with having similar opinions. This is familiar to us: there's an in-group, they all sit around talking about something, and lo and behold, they all have very similar patterns of decision-making at the end, because they've been talking about it with each other. Or there's a connector in the group, and the connector goes around and spreads a rumor. Well, suddenly all these people are voting or betting the same way, because they've been talked to by that person.
So by looking at these patterns of communication, and comparing them to the independence of the decision-making of the individuals, you can figure out who's just going off the same meme, the same bit of information, and who has genuinely independent information that needs to be aggregated in a different way. Okay, can I just add a quick perspective on Sandy's project? The analogy I like for the project Sandy just described is the microscope. Back in the 1600s, when Leeuwenhoek developed the microscope, he was able to perceive things at a much more detailed level than had ever been possible before. He could see germs and tiny animals and all kinds of things, and that enabled whole new bodies of scientific research and scientific results to be developed in biology and chemistry and other fields. I think what Sandy is talking about is something equivalent to an organizational microscope. It lets us observe organizational behavior at a much more fine-grained level than was ever possible before, and therefore, we hope, opens up many more opportunities for scientific and managerial results. What he said. So, not too long ago I would sit on the other side and always be skeptical of the people up here on this side. So I hope as we start our dialogue that you'll come back and challenge and push back at us as we take on the expounding role here. I actually just stumbled into collective intelligence and these distributed innovation systems, as I call them, as a puzzle. About 10 years ago I was working at General Electric Medical Systems doing new product development, and I had a client in Montreal who refused to buy any of our systems. He was saying, you know, we've already got everything that you're offering us. We had these newfangled digital radiology systems designed; we'd spent a lot of money acquiring companies and also developing them in-house.
And they said, I don't need anything from you. We've done this all in our community, by ourselves. And I was like, no, we're GE, we bring good things to life; this can't be the case. So they invited me to come and spend some time with them, and I spent two weeks there. And I was blown away. They were basically about 18 months to two years ahead of the engineering schedule that GE had set for itself. And this was back in the '96-'97 timeframe. They'd done this by leveraging a community of other radiologists and physicists who were interested in medical imaging, and by using the open source technology of the time. They had basically leapfrogged whatever GE was going to do. And that was a real puzzle for me, because, having done engineering and business as an undergrad and then worked for a while at a large company, this model of people self-organizing and solving tough problems that were the purview of a large centralized R&D lab just didn't make any sense to me. I sort of forgot about it and came to MIT to do a master's in technology and policy, where I was actually doing work in bioinformatics. And again, what I noticed when I was here in '97, '98 was that people were using open source and Linux. And again, I just couldn't understand it, because I was so used to spending big bucks buying workstations from Sun. And here people were using commodity hardware, rolling their own distributions, and solving really tough computing challenges by working in these communities. So I shifted the focus of my research away from bioinformatics to this puzzle of communities, communities in software, and started looking at open source communities to make sense of it: how could this model of organizing compete against what has been well established and well studied, these large centralized R&D facilities and centralized engineering groups solving similar problems?
And, you know, I think I was just very lucky to see that. When I told my computer science friends that I was going to be studying open source, they were saying, this is just a fad, this is not going anywhere, why are you wasting your time on this stuff? Right, they're going to get crushed. But what I'd seen, both in that demonstration in Montreal and then also here, was that there was something different about what was going on, that this had legs. And thankfully Mozilla took off and Linux took off and a bunch of other stuff took off, so my early bets have somewhat paid off. The first thing I was interested in, as everybody is when looking at this as an organizational form, is: why are people working for free, apparently? What motivates these individuals to participate? There's been a lot of work in this area. And what we've seen in open source, which is, I think, almost a prototypical collective intelligence form, where thousands of people collaborate and participate, often doing small chunks of work that are stitched together at a higher level, is that there is tremendous heterogeneity in motivation. So we have the RMS types, the Richard Stallman-like Free Software Foundation people, who really believe in the community and the moral right of free and open software. But then we have pragmatists who are trying to solve their own problem while working at Amazon or at Orbitz, who just want to get the problem solved. For them, it's a low-cost activity to participate in the community, and they get a lot back. So what we've seen is this tremendous heterogeneity in motivation. There isn't just one reason people participate, and the fact that these communities can be agnostic about motivation, that they don't care why as long as you do the work, makes them go a long way.
And that actually provides some hints about where these things take hold outside the traditional organization. We need communities that have a participation architecture that can attract many different types of people. That also implies some degree of consensus around intellectual property: what is the IP, and who owns the property being produced in these settings? In open source, we've got some clever hacks for solving the property problem. But as collective intelligence moves into domains beyond software, the property issues, the IP issues, will become front and center for us to grapple with. Then there are issues of governance: who has authority in these settings? Wikipedia actually provides a great example of governance, because it has a very flat infrastructure, but each Wikipedia article is a war between the deletionists and the inclusionists and everybody else. These micro-governance battles are being fought out every day on Wikipedia, and it's fascinating as a researcher to watch how they come to consensus, and often they don't. That also tells us something about how we need to think about scale. And finally, what we've seen is that the software world, the open source world, has provided inspiration for many new forms of organization emerging in different settings. So, as Tom alluded to, there's a company in Andover called InnoCentive, an offshoot of Eli Lilly. Their business is to take science problems outside the R&D labs of Fortune 500 firms and broadcast them to the entire world: they'll offer, say, $30K if you can solve this problem.
And in my studies of how this system works, what we've seen is that the people who are successful at solving these problems often report, with good statistical significance, that the problem they solved was not in their field of expertise. So they're bridging knowledge domains: a physicist solving a chemistry problem, because they break the frame, they look at it very differently. But the solutions they use are already in their back pocket; they take stuff they already know, stuff they're experts at, and apply it to a different setting. We've seen the same kind of behavior in open source, where there's a tremendous asymmetry in the costs and benefits of participation. I was a terrible programmer as an undergrad; that's why I got out of that business. But there are people who can write beautiful code overnight that might take a team three months to recreate. By enabling these people to connect with each other and letting them solve the problems, we can exploit this tremendous asymmetry in costs and benefits. We see that the person solving an InnoCentive problem typically spends about two weeks of effort on a problem that has been stuck inside these labs for two years with major teams trying to solve it. So one of the hopes of collective intelligence is to take these distributed and sticky pockets of knowledge that exist in the world and find ways to aggregate them. One of my favorite examples these days is a T-shirt company called Threadless. Threadless is an amazing business; I wish I had thought of it. It's basically an online T-shirt competition. They have a community of about half a million people who submit T-shirt designs at the rate of about 800 a week. The community votes on those designs, love it or hate it, one to five. And the community gives a demand signal: it says, I'd buy it.
And these guys come along, pick the best-scoring designs, produce about 2,500 a week of these designs, and they sell out. They're on a trajectory now to sell a million and a half T-shirts, at $20 a pop; you can do the math. In the last five years they've received 133,000 designs from 41,000 people, and there have been 80 million scoring events. So you can imagine how they've been able to completely change the notion of how a company and organization should work. We used to think that innovation and marketing and sales forecasting were purviews of the organization that belonged inside the walls of the company; we spent a lot of money hiring smart engineers to work for us. And at least in this Threadless example, they've turned that around and said: we can do much of that work in the community. Thank you. Now, I guess our last broad topic before we turn the discussion over to the audience, and I want to encourage folks to interrupt each other, or at least to add on. Do you want to mention a couple of projects? I do. Our last broad problem will be to talk about certain limitations that we see in, at least, the utopian discourse that surrounds collective intelligence. But let's begin with Tom giving some further examples of the work his shop is doing. Just really quickly: there are a bunch of projects currently underway in the Center for Collective Intelligence, but I want to talk about two of them very briefly; others may come up later in the conversation. The first is a project we call Collective Prediction. It involves Sandy and Dražen Prelec and Josh Tenenbaum and Tomaso Poggio from different parts of MIT. The idea is to come up with better ways of having groups of people predict what's going to happen.
So Sandy mentioned the idea of prediction markets, which is already a very interesting and often very effective way of predicting what's gonna happen by letting people buy and sell predictions of things like outcomes of presidential elections or sales of products or product release dates, et cetera. Those often do better than other methods like traditional market research or polls. We wanna try to generalize that to a much broader set of things that people or computers can do to help make predictions in such markets. For instance, if you're trying to predict whether a patient has skin cancer, you might wanna have someone just evaluate the picture of the mole on their skin. That's not a prediction market itself, but it's a very useful kind of service for making that particular clinical prediction. So we wanna create a bigger infrastructure in which all those things can happen, and we wanna have agents participating in that, computational agents. In many cases, it's possible for algorithms, even very simple statistical techniques, to do a better job of predicting than human experts do: many kinds of projections of sales, or even the success of students in graduate school. It turns out that simple linear regressions can often predict those things better than experts reading complete statements and letters of reference, et cetera. So what we wanna do is have agents making automatic predictions in these prediction markets, thus making the markets much more liquid and much more useful for people. In the cases where the agents are doing a good job of predicting, there's no reason for the humans to even intervene. But in a situation where a human thinks the agent is leaving out some important factor, then the human is motivated to participate in that market, and if they're right, they'll make money from doing so.
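Malone's claim here, that a simple linear regression can serve as an automated "agent" posting baseline predictions, can be made concrete with a minimal sketch. All the data, feature names, and numbers below are made up for illustration; this is not the center's actual system:

```python
import numpy as np

# Hypothetical illustration: fit a simple linear model on past cases
# (say, two numeric inputs per case, such as a test score and a grade
# average) and use it to predict the outcome of a new case.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))            # 50 past cases, 2 features
true_w = np.array([0.7, 0.3])                  # unknown "true" relationship
y = X @ true_w + rng.normal(0, 0.05, size=50)  # noisy observed outcomes

# Ordinary least squares: w = argmin ||X1 w - y||^2
X1 = np.column_stack([X, np.ones(len(X))])     # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# An automated agent could post this as its prediction in the market,
# leaving humans to trade against it only when they think it's wrong.
new_case = np.array([0.8, 0.6, 1.0])
prediction = float(new_case @ w)
```

The point of the sketch is the division of labor Malone describes: the regression supplies a cheap, liquid baseline, and a human intervenes only when they believe the model is missing a factor.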
And so that's, we think, a very natural way of not having to decide in advance what people will do and what computers will do, but letting the division of labor emerge and adjust over time, because you've got the right kind of incentives established. So that's the first project, the collective prediction project. The other one I think is interesting is a project on measuring collective intelligence. Now, the word intelligence suggests something we know quite a bit about in humans. For over a century we've been measuring human intelligence with tests like IQ tests. And the phrase collective intelligence suggests that there's a useful analogy there, and so in this project we're gonna try to exploit that more directly. Even though I said at the beginning that it's hard to define what intelligence is in any precise way, there is a precise definition in the field of psychometrics, the sub-branch of psychology that deals with measuring things. The definition of intelligence there is that thing which correlates with performance on a very wide range of tasks. It turns out to be a kind of surprising, if you think about it, but true fact that how well a person does on one intellectual task, whether it's mental arithmetic or knowing vocabulary or being able to do mental rotation of figures in your mind or a variety of different things, is a pretty good predictor, not perfect but certainly statistically correlated, of how you do on a very wide range of other tasks. It also predicts your behavior not only on laboratory tests, but in a variety of life situations: how well you do in school, how well you do in many kinds of occupations. I was surprised to learn it even predicts life expectancy, not perfectly, but at a statistically significant level.
There's no reason in principle why this had to be true, but it turns out to be true of human brains that there's a wide range of what we think of as intellectual or cognitive tasks that are correlated with each other. The question then is, is that true for groups of humans, or groups of humans and computers? Is there such a thing as psychometrically measurable intelligence for groups? That is, is it true that a group of people, or people and computers, that does well on one task will do well on average on a whole bunch of other tasks in some wide range of tasks? We don't know whether that's even true, but our project on measuring collective intelligence intends first to begin to answer that question: is there such a thing as collective intelligence in this sense? In individual psychology there is a single factor, called the g factor; little g is a measure of general intelligence. We don't even know if there is a single g factor for collective or group intelligence. If there is, we think that will be very interesting. Even if there's not, we think there'll be some subfactors or subtests, and we'd like to know what they actually are for groups. And then we're especially interested in knowing what causes those differences and how to use them to make groups more intelligent. In the case of individual humans, you can measure their intelligence and use that to predict things, but it's very hard to change intelligence. We can't really reach in and rewire the brain or do anything like that. But with groups, it's relatively easy to change a whole bunch of things: How many people are in it? What kind of people are in it? What kind of communication do they have? What incentives do they have? So we think it's a very interesting question as to what we can do to a group to increase its collective intelligence, which you might think of as its flexibility, its adaptability, its ability to do a wide range of things well, not any one single thing well.
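The psychometric question Malone poses has a concrete computational form: collect a score matrix (groups by tasks) and ask how much of its variance a single factor explains. A toy sketch with entirely synthetic scores, not data from the actual project:

```python
import numpy as np

# Synthetic data: 40 groups, each scored on 5 different tasks.
# We deliberately build in a shared latent "ability" plus task-specific
# noise, then check whether one factor shows up in the correlations.
rng = np.random.default_rng(1)
ability = rng.normal(0, 1, size=(40, 1))            # latent group ability
loadings = np.array([[0.8, 0.7, 0.9, 0.6, 0.75]])   # how much each task taps it
scores = ability @ loadings + rng.normal(0, 0.5, size=(40, 5))

# Standardize each task's scores, form the correlation matrix,
# and look at its eigenvalue spectrum.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = (z.T @ z) / len(z)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# The fraction of variance carried by the first component is a rough
# analogue of a "collective g": near 1/5 would mean no general factor.
explained = float(eigvals[0] / eigvals.sum())
```

If groups really do have a g-like factor, the first eigenvalue dominates, as it does in this constructed example; if tasks were uncorrelated, each eigenvalue would sit near 1/5 of the total.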
So those are two projects I think are particularly interesting; others may come up later. All right, now the last issue I want the three of you to sort of meditate on. It's partly triggered, I think, by a partly skeptical response I had to Sandy's project, because it struck me as I was listening, Sandy, that, boy, that sounds an awful lot like something we could describe in a different way and call a surveillance tool. And it seems to me that learning so much, who is the gossipmonger in our organization, who is the bottleneck, and generating the information that you need to really read your graphs, requires things that one might regard as invasions of privacy. So the broad question is, what are the limits of collective intelligence? The utopian discourse that surrounds it has often acted as if there are no limits to what this wonderful new thing can do, as if individual human creativity is now as old-fashioned as the printed book. And so the question is, do you folks see limitations? And is there a skeptical side to your response to the whole collective intelligence tendency? Since you mentioned the sort of privacy thing: that's actually why I didn't emphasize it; it's almost the core point of what I'm talking about. Because, like it or not, we all walk around with cell phones. Cell phones know where they are. They can also have microphones; they know when you're talking. So your cell phone company knows a lot about you, and you can talk about that being used in various nefarious ways. Similarly, most organizations, or many organizations, have name tags with little RFIDs in them. And one of the core topics we have is to look at this trade-off between privacy and advantage. And the take we have on it is, we would like to learn what it is in this information that's of advantage to the individuals, and how little of that information we have to actually pool collectively in order to get that advantage.
In other words, what are the limits, what changes to traditional privacy arrangements do you have to have to reap those benefits, and can you get away with almost no changes, right? So the vision that we have, and we don't know that you can achieve it, but given that companies are out there doing this to us already, we ought to investigate it, is: can you build feedback tools that provide personal reflection to the person, where the only thing that's shared collectively is something that's aggregate and de-identified? If you could do that, then there's at least the grounds for some grand bargain between the sensor-based society that we've already become and the privacy that we'd like to maintain. And in terms of limits to this sort of stuff, of course there are limits. I mean, this is no panacea for intelligence. This is simply saying we have individual intelligences, we're very poor at pooling them, but we know some things about the common errors we make, and if we can detect those common errors and feed them back to the individuals, then perhaps we can do a better job at the things we already know how to do, by becoming more aware of the problem states that we typically get into. Yeah, just to respond to that, I think part of what's behind your question is a kind of fairly widespread utopianism or sort of magical thinking that a lot of people have these days. For instance, a lot of people who've read the book, actually a lot of people who've heard about the book, The Wisdom of Crowds. The people who've actually read it know that Surowiecki, the author, doesn't actually say this, but a lot of people who've heard about the book think that what he says is that crowds are wise and wonderful, and if there's any problem, just throw a crowd at it and things will certainly be better. He doesn't say that.
It's certainly not true, but there is a feeling afoot in the world today that there's something magical about collective intelligence. It's not magic. It sometimes works well, sometimes it doesn't work well. I think it's actually far too complex a thing to answer with any simple set of do this, don't do that, or these three situations work and those seven don't. It's much more complicated than that. It's kind of like asking, what are universities good for? Well, they're good for a lot of different things, and it depends a lot on what you need and what you want and which university and what you're using it for and a whole bunch of things. You can't answer it any simple way, but I think part of what we wanna do is help put a more firm scientific foundation under the questions and the discussions about what it's good for and what it's not, in what situations it works and in which ones it doesn't. I think one way of summarizing it is to say, in order for it to work well, you need a way of collecting the right people and computers, and you need a way of connecting them in the right way. If you fail to do either of those things, then it's not gonna work. If you don't collect people who know anything about the answer, no matter how you connect them, you're probably not gonna get a very intelligent answer. If you've got a lot of intelligent people but they're connected in a stupid way, so each one is out for themselves and nobody's trying to solve the overall problem, then no matter how intelligent the individuals, you're probably not gonna get a very good collective answer. But there are lots and lots of examples of ways you can go wrong, and things you can do to try to go right, in both of those areas. That's actually the point of the stuff that I'm doing: to try and build tools that let you do it the right way by providing feedback to people.
Yeah, if I can just add, I don't think this is a universal solvent for most of our problems. I think it's part of a portfolio of approaches that we'll take to try to solve some difficult problems. Speaking about prediction markets, I've done some work at a major search company where they use prediction markets to help them make some decisions, so they try to accumulate the distributed knowledge of this large organization to help with managerial decision making. And what's interesting is that all the data shows that the markets are accurate and decisive over many types of questions. But the issue there is that managers don't wanna use them. Managers that my school produces, or the Sloan School produces, don't wanna use these prediction markets. Why? Because managers have been geared to be the information hub for most organizations. They're the ones that know all the answers. They're the ones that keep tabs on all the workers. And if all of a sudden, if you can imagine the situation, you're saying that your product is gonna ship by this date and there's a market out with a prediction on this, right? And the market is saying this and you're saying that. How do you explain that to the executive team, right? And so what we've seen is that there are actually limits, organizational limits, to thinking about how these types of collective intelligence systems can work. For instance, we don't yet have a course at either HBS or Sloan on community management, right? How do you actually manage a community? What does Linus Torvalds do, or what does Brian Behlendorf do, to enable the Linux community or the Apache community to work? We don't have those distinctions there yet. And when we ask the people actually in these communities about these questions, they themselves have done some reflection, but we don't yet have proper mechanisms that enable us to think about community management. Similarly, I think there are some technological issues as well.
We have a supposition that work can be modularized, that designs can be modularized, right? And that there's enough granularity in the work that in fact we can distribute the tasks amongst many people and then put them back together. It works beautifully in software, but how would that work in drug development, where there's a lot of complex work to be done, from discovery at the bench to medicinal chemistry to trials? So if you were to think about an open source pharma perspective, how would that actually take place? So there are some technological barriers for us to think about. One of the hopes we're seeing, and this is from my colleague at HBS, Carliss Baldwin, who's done a lot of work on modularity: what she talks about is that most products have an information shadow. And maybe we can take that information shadow and put it in silico, put it in computers, do a lot of the simulation work in computers, and then go back into real printing of these things and trying them out to see what happens. But again, those things are further off. It works great with T-shirts, because with T-shirts, if you see an image on the web of a T-shirt, even though it's a material good, you're basically looking at it from an information point of view and trying to make sense of it. But again, if I push the limit to drug development, it becomes more difficult. And then finally, the limitations, I think, are also legal. I think that our intellectual property infrastructure, and our perspectives about it, are a major question mark for us. We don't yet know how we're gonna allocate the rents, the profits, that might accumulate in these settings. We see the Mozilla Foundation and the Mozilla Corporation as an attempt to strike a balance between hiring people and having a community, and sharing the IP and the benefits that come from IP.
But I think there are a lot of legal issues around what legal forms of organization will come together that can take advantage of this, especially outside of the traditional closed organization. Thank you. We're on the verge of turning it over to you folks. Get your questions ready. Let me initiate this process by asking a question that is maybe half unfriendly. Not that I don't feel great affection for all three of you; I mean intellectually unfriendly. I tried to imagine myself as a member of the audience who knew nothing about collective intelligence, who had just heard the term two days ago and was really curious about it. And I think one reaction such a naive observer might have had is: my goodness, you guys are just talking about consumer applications, business applications. Is the best we can hope for from collective intelligence that we can sell more T-shirts in creative ways? What about applications of collective intelligence in realms of culture and human behavior that do not involve profit and loss, that do not involve corporations? What I'm really asking you folks to do is speak a little more broadly about this question. So I have an example. First of all, I think Wikipedia is clearly an example of that already. But one of the other projects I didn't talk about yet in our center is one that I think would fit the description you're giving. It's a project to attempt to harness the collective intelligence of thousands of people around the world to help solve the problems of global climate change. Now, if ever there was a problem for which we need to harness as much collective intelligence as possible, it seems to me that's probably one of them. And so we have a project that is trying to use several different kinds of technology to help people collectively propose and analyze plans for dealing with global climate change.
Now, in particular, what we will probably focus on is government policies for dealing with climate change. Markets are pretty good at allocating resources and encouraging innovation in areas where the right incentives are operating. But governments are not necessarily good at figuring out what incentives to establish in the first place. And in particular, here's a case where at the moment we don't have very good incentives at all to encourage markets to come up with innovative new ways of reducing carbon emissions and so forth. So the core idea in this project is to use a combination of three different technologies. One is computer simulation technology, but in this case, massively multi-user computer simulations. So instead of just having a few people, experts at a place like MIT, go off and come up with some very elaborate and very good simulations and then tell the rest of the world what they mean, it sure would be nice if anybody in the world who was interested could look at the simulation and ask, why is this thing here? And somebody should either be able to answer that, or the questioner should be able to try doing it a different way and see what happens. Now, there's no guarantee that everything everybody does would be sensible, but one of the other things we wanna do is use technologies for collective decision making, by which I mean things like voting and perhaps even prediction markets and other things like that, to let people vote on the assumptions or the policies that seem most promising or most accurate. So the stupid things will stay at the bottom, and the good things will more or less filter to the top of the lists that people have voted on.
And then finally, we wanna use technology for what's called argumentation, so that instead of just having long-winded, confused mailing lists of people going back and forth, or even Wikipedia-like wars, we can put some more structure on that. For each issue, whether that's what the value of some parameter should be, like the percentage of carbon emitted at the surface that ends up in the upper atmosphere, or a policy question like what the carbon tax should be, zero percent, three percent, five percent, you can have a series of positions: it should be zero, three, or five percent. And for each position, you can have a series of arguments for and against: this is the evidence why I think it is this; this is the reason why that's wrong, it should be something else. So by structuring the discussions in that way, we think we can help make them much more effective and less chaotic. So that's one example. There are some great examples. The Sunlight Foundation in DC has set up Open Congress to really open up what's happening on Capitol Hill and is trying to enable citizens to both observe what their representatives are doing and give feedback. What was the name? The Sunlight Foundation's Open Congress. The Sunlight Foundation. And they're really pushing the edges in thinking about civil society and the role of the citizenry in feeding back into Congress and Capitol Hill. Another great example, in the arts, is ccMixter: the whole Creative Commons world that has taken off, where people are licensing their creative expressions, whether that be music or writing, in a way that allows somebody to take that and remix it. And ccMixter is a great example of people taking artifacts from that domain and making it happen. So we're definitely seeing that.
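The issue, position, and argument structure described for the climate project maps naturally onto a small tree with vote tallies. A hypothetical sketch, where the class and field names are illustrative rather than anything from the actual project:

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    supports: bool          # True = argument for the position, False = against
    votes: int = 0

@dataclass
class Position:
    text: str
    votes: int = 0
    arguments: list = field(default_factory=list)

@dataclass
class Issue:
    question: str
    positions: list = field(default_factory=list)

    def top_position(self):
        # Promising positions filter to the top; weak ones sink.
        return max(self.positions, key=lambda p: p.votes)

issue = Issue("What should the carbon tax be?")
for text, votes in [("0 percent", 4), ("3 percent", 11), ("5 percent", 7)]:
    issue.positions.append(Position(text, votes))
issue.positions[1].arguments.append(
    Argument("A modest tax shifts incentives without shocking prices", True, 9))
best = issue.top_position()
```

The design choice here is the one Malone names: arguments attach to positions, positions attach to issues, and simple vote counts replace unstructured back-and-forth threads.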
And certainly, to some degree, even though it can be very crass, YouTube and the remixing that goes on within YouTube, and how people get inspired by each other, is a great harbinger, I think, of what's potential and what's available for us. So I have sort of three things I'd like to mention. One is at the very finest grain. We've looked at applications of this sort of technology for detecting depression. It turns out you can detect depression fairly accurately by looking at people's pattern of socialization and interaction. And the problem with depression is you're not aware that you're depressed. You just think you're a bad person. I'm serious. This is almost the number one killer in our society. It's the number one cause of lost days of work. And the problem is curable, but it's not detected. So by looking at that sort of pattern of behavior, there's hope for actually providing a reflective aid to people to be able to cure this. This is a really interesting thing. So we've done some clinical trials already that are very promising. In a larger arena, it turns out that you can use the same sort of technology I talked about to detect what I would call societal discord. So for instance, Nathan Eagle, a student who worked with me, recently looked at patterns of cell phone communication within the UK. And it turns out you can very, very accurately predict what the UK rates as social integration in their different councils, those sort of town centers. So you can tell what the UK government rates as social integration, a strong measure of social health, by looking at the pattern of communication. The correlation coefficient there is something like 0.75. It's incredibly strong. And the final thing I wanted to mention is, you mentioned the Legatum Center. It's a brand new thing here at MIT, just starting off. It's something I created with Iqbal Quadir, who's sitting in the back there, I think. Are you still here, Iqbal?
Oh, maybe he stepped out. Well, whatever. It's actually intended to be a collective intelligence endeavor. So its goal is to bring change agents from the developing world to MIT to think about solutions to the problems of their countries, to share those solutions with other people, and then to go back and maintain a network among them to propagate the best solutions among these change agents in their countries. So that's precisely the sort of collective intelligence thing that Tom was talking about: it's experimentation, it's communication, it's selection of the best solutions and propagation of those solutions. We just got a very generous gift, $50 million. We're going to have 300 such people here in the next 10 years. And we hope to come up with some really good solutions to poverty, health, et cetera, through that. It's now the audience's turn. I'm going to stand up so I can see you better. May I ask also that when you ask your questions, you come to these microphones on the side and identify yourself for the recording, for the permanent record that we keep of this event. Since I don't see anyone jumping up... there, good. Is this on? Is this on here? Yes, go. Art Hutchinson with the Cartigic Group, a consulting firm. Dr. Malone, you made a reference I thought was intriguing to the advent of the microscope. It got me thinking down a line that was touched on a little bit later, that the science of the small ultimately led to the Heisenberg uncertainty principle and measurement uncertainty, the measurement changing the behavior. So my question is a very general one, which is: to what degree do we understand now how technologies like the sociometric badge affect behavior, and what happens at the boundaries? I was immediately thinking about the interactions in business that take place on the golf course and in the bar and after hours and on weekends that are immensely important. But I'm going to take my badge off now. Thank you very much. It's an interesting question.
What I think you're saying is that these technologies are not just microscopes that let us see what's happening and what would have happened anyway. They are also, or at least in many cases would be, interventions that change what's happening. I think that's an excellent point. And I think that one of the questions it raises is, how can we use them to change things in ways we think are good? How can we use them in ways that are not invasions of privacy, but that people like and that make things more efficient, et cetera? I think that's exactly right. The reality is that this type of thing, regardless of what I do, is happening, because more and more of our life is being monitored and more and more of it is being mined. And we have to understand where the good is and where the bad is. The particular case you mentioned is interesting, because the sorts of things that we measure turn out to be very correlated with things that we know as charisma, and things we know as Asperger's and autistic-spectrum behavior. People are very different in their ability to be sensitive to their social environment and to act in an appropriate manner. So one of the interests in this type of technology is to give people a reflective aid that lets them train themselves to be more competent at this type of thing. So you might take off your badge, but you might do it only after you've spent quite a bit of time buffing up your ability to be socially appropriate. And that has, as I said, sort of a medical side to it. But it has a big ramification for society, because one of the things it does is make us aware of what I would call the mechanism of charisma. Why are some people more persuasive? It's not because they're smarter. It's because their style, the way they present things, is in many cases so compelling. I personally would like to build something that immunizes people against that. I've been in enough situations, boards of directors, where there are a few charismatic individuals.
And it just is a disaster. I think we all need a little badge that starts flashing red when our brains start getting manipulated. Over here first. Left. Hello? Hello. Hi. I have two unrelated questions, both of which start with the letter P. The first one is physics. I remember about 10 or 12 years ago, Ralph Abraham, I think it was, one of the famous creators of chaos theory at Santa Cruz, wrote a book on history where he looked at the long bifurcations and waves of history. And I was wondering how chaos is affecting your various studies; is that part of what you're doing? My second one is pedagogy. How do you train people, or how do you teach people, to actually become citizens who can do this work, handle this high-end material, and work with each other? So those are my two questions. So on pedagogy, there's a great quote about Wikipedia, which is that it's not supposed to work in theory, but it works in practice. And I think what we're going to see is that the practice will lead the theory and the pedagogy: we're going to learn from people actually doing it, and from there try to extract out what's going on. So I think we're still just now cutting the edge of what the pedagogy is around how to behave in these things. I'd like to encourage members of the audience who know about collective intelligence projects of various kinds, especially those that are not corporate; I'd be happy to hear from some of you. I know there are some things going on in the classroom that make use of wikis, new ways of empowering students by having them create wikis. And I don't know a lot about that sort of work, but I know it's already relatively common in universities.
With regard to pedagogy, I agree with what he just said. I think that we have, for the last few centuries, bought into the Enlightenment vision of an individual mind that's separate from all the others, and that we are in conscious control and rational. In fact, I think we're much more creatures of our social environment, and the collective opinions and interactions determine what we think and what we do far more than we like to admit. And we have very little theory for that, in part because it just hasn't been what we focused on. We focused on the interior and not the connections. And so maybe one of the good things that will come from these new ideas is to focus more on the connections between people and how those influence what we think and how we act, and we will develop a pedagogy around it and a greater sensitivity to it. If you want a response to the first part of your question, the chaos part: I think the connection you're making is that chaos theory, at least in part, deals with emergent behavior. It deals with coming up with theories of what happens when you have a lot of more or less independent things, or at least mostly independent things, that interact, often only locally, but out of which emerges some kind of coherent behavior. The way I just described that also applies to many interesting examples of collective intelligence. So collective intelligence, in that sense, is an example of a kind of phenomenon that chaos theory has analyzed. I think collective intelligence, by using the word intelligence, focuses more on what you might think of as the cognitive aspects of things, the intelligent aspects of things. Sir, over there. Hi, my name is Rob Dardy. I'm kind of an entrepreneur slash bohemian. I have a question for you. It has to do with social constructs as electronic communications have expanded.
The ability to pass information to large audiences has become a lot easier. But with that, an important barometer in social interaction has gone away, and that's accountability. Within a social structure, you're accountable for the things that you say, because the people you're saying them to know you, and your interactions with people are all based on the social recognition of you as an entity. The internet's created a lot more anonymity. So when you're looking at something like this, from the correlation of intelligence between people, and especially between people and computers, without accountability being a factor, how is it that you can see ways to keep mass stupidity from happening? How can you look at ways of recognizing when those things happen? I think a lot of us are feeling very jaded in a political way. Everything that happens, people just kind of say anything, and it's supported by something that's listed on the internet. So how do you factor that into the equation? I think that's a double-edged sword, in the sense that while we see these concerns around the lack of accountability, we also see that, collectively, there have been some great episodes of people figuring out who is breaking the rules, so that they're brought to account. A good example that comes to my mind, again just knowing the software space, is when the code for Half-Life 2 got stolen from the servers. Half-Life was a massively multiplayer game; it was a huge community. And it was the community effort, people working together who said that this was actually bad for the community, that was able to help the police and the FBI figure out where the theft happened and who the person, the group, behind the theft was. So on the one hand, you could hide in the lack of accountability and get away with it.
But if you apply the same type of principles to holding things to account, I think it can be used as a way of achieving accountability as well. But I think it vacillates back and forth, and it's very much case dependent, from my point of view. I'd say, if I'm understanding the question, it comes from what Mitch Resnick here in the Media Lab would call a centralized mindset. In the centralized mindset, which many of us have even unconsciously, if there is a problem, the way to solve it is to put someone in charge, to make someone responsible. And that actually works pretty well in a lot of situations. It's kind of the basis of a lot of hierarchical organizations. But I think a key point of what we're talking about today is that there are limits to the centralized mindset, or to its applicability. There are a lot of situations, not all by any means, but a lot, where good things can happen without anyone being in control, without anyone being accountable, without anyone being responsible. Wikipedia is a pretty good example of how good things can happen with very little accountability. Most of the people who do things there are really not accountable at all in the sense you mean, and yet very good things happen. In a sense, free markets are an example of that. There's no one who's really accountable for how cotton gets distributed among people in the world. But somehow markets do a pretty good job of doing that. So I think that, in a sense, the question comes from a point of view which is limited. Let me add one other thing, which I think plays off of this. When you asked your question, it made me think about what the lack of anonymity, or rather what centralization, sometimes does. For example, the way credit card fraud is detected is by looking for unusual behavior.
So they're collecting bits of information, concentrating them, and characterizing a stereotype of you, to be able to say, oh, that's an unusual one, let's go get it. And that's a pretty scary thing. It might have its good parts, because it keeps people from stealing things, perhaps, but there is less anonymity than it looks like there is, right? There are all these traces of behavior, and they have good things and bad things, and I think people insufficiently appreciate how much of that is in the real world now. Two examples that I know of, on opposite sides, are very similar to that. In India they have a very large terrorist problem, thousands of terrorists per year by their definition, and most of them are caught through cell phone records. Now, the cell phones are not actually registered to the person, they're prepaid cards, but by looking at the pattern of interaction and location you can figure out who people are, and they claim 80 to 90 percent of perpetrators are caught within a week or so of a terrorist incident that way. On the other side, people are looking at detecting outbreaks of things like SARS using exactly the same methodology. When you suddenly see everybody in an apartment building not going to work, it's time to get out the medical police and go see what's happening, because that's not supposed to happen, and that could be a really important thing for an entire society. So this notion of centralization is a really complex one. It has good sides and bad sides, and it's often the centralization that's the scary part, at least in my thinking about things. But Alex, what's implied in what you're saying is a society so full of surveillance at every level that we would even know whether the residents of apartment buildings are reporting to their work on time. If this is what we need to do to have collective intelligence, I vote no.
Well, the question then is, if that ups the odds that your children die by 30 percent, are you still willing to do it? I mean, it's a realistic thing. We're not screwing around here, right? I guess I would quarrel with, or at least I would want to question, the premise here. I mean, parents can tell when their children are ill. Neighbors can tell when their children are ill. Influenza killed a lot of people regardless of what the parents did. Maybe so, but do we really want to say, okay, the government, some centralized agency, ought to be the locus of all this? That's the centralized thinking again, right? Well, someone has to look at the data, though, right? No, that's not true. That's absolutely wrong. You don't have to concentrate the data. Absolutely not true. That's the centralized thinking. If everybody locally notices that their neighbors are not doing something, that's not a centralized thing. It's a collective recognition. It's a collective recognition. That's the type of thinking that we need to move to, to avoid these big brother problems. Because the big brother stuff is really scary. It certainly is. It's less scary in this country than if you look back in history or at other societies, but it's really scary. So we have to find ways to not have that. That's my point. Just to build on that a little bit, I think one of the things many people worry most about in issues of privacy is not actually just loss of privacy. It's asymmetric loss of privacy. If you know about me, but I don't know about you, then I'm definitely in a worse position. But if you know about me, and I know about you, and everybody knows about everybody, then many of the things we worry about with invasion of privacy aren't actually problems after all. I grew up in a small town where most people knew a lot of things about almost everybody else. There are good things and bad things about it, but it's certainly not any kind of Orwellian nightmare.
It's just a different attitude about privacy. So I think many of us who grew up in urban or suburban environments with a certain set of expectations about privacy have kind of limited intuitions about what it's like to have a symmetric lack of privacy. There's certainly a great literature, though, Tom, of small-town authors who write about the horrors of being confined in a small-town environment in which everybody knows what you're doing. So there are good and bad things. I would say, read Winesburg, Ohio. I grew up in a small town, but I'm glad I don't live there. So there are nice things about small towns too. But the thing that we're doing here is saying there is a real need to re-examine the basic contract and expectations around privacy and the commons. There are some great things you can get from pooling information. There are some terrible things. Some of them feel good to humans. Some of them feel very inhuman. And, at least for my agenda, I want to explore that space so we can figure out which things still feel like human society but give us some of the protection that we might want to have, or some of the efficiency. I think it may be possible to do things about disease outbreaks, about global warming, about many things in this sort of cooperative way that's not centralized, but we have to work that out. Yeah, I think what David is worried about is that the tendency to centralize is so ingrained in most of us that decentralization is actually very counterintuitive. We have this notion that if we have a problem, let's put our man on it, right? And let him go after it. Or let her go after it. I think the intuition most of the time is to centralize, and decentralization is not common sense for us yet. So there's only one Jimmy Wales who came up with the notion of Wikipedia, right? And that's sort of taken off, but the technology for wikis has been around for a long time.
And so again, I think it'll take some time before a generation grows up thinking that the better way to organize is to be decentralized. And it won't always be the case. Yes. But I think, in fact, in my book, The Future of Work, I make a very detailed argument for why the market share of decentralization versus centralization is going to increase. So, Kevin, identify yourself. My name is Kevin Driscoll. I'm in the Comparative Media Studies program. I would say I am not alone in being surprised at the number of people who choose to write and edit encyclopedia articles for fun. What makes it surprising that Wikipedia works in practice is that initial expectation that it wouldn't work in theory. So I come to this. You might be one of the first ones to be that way. Sorry? You might be one of the first ones to be surprised, given the older view of why people would be doing this anyway. No, I think he said he had the older view originally. Did you have the older view or no? I had the older view. You did, okay. And I would say that Jimmy Wales had the older view when he did Nupedia, the expert-written version of his earlier encyclopedia project. So I come to this forum with an education background. And there I think that free and open education, a collective development of curriculum materials, seems like in theory it should work, because the work is already happening. The patterns of collective development are already in place in teachers' lounges and across different schools in the nation. Yet in practice it's having a lot of trouble. So Sandy mentioned something that triggered some thinking for me, which is that in that community, face-to-face is how you develop trust. And so it's very easy to collaborate with other people in your department or other people in your institution, because you have some layer, some foundation of trust.
I think in conversation, most teachers would say that they would be very willing to participate in a collaborative development process. In practice, however, it's hard for them to move beyond that expectation of face-to-face trust. So I'm interested to know if you have examples of similar communities that are based on that kind of interpersonal trust, yet share the values of collective development, and how they've been able to push past that problem. So let me give what I think is a very salient example for learning. I had a master's student last year who was interested in education and concerned, as you are, about curriculum and so forth, and who was from central India and wanted to do something there. In central India most of the schools that are anything like effective are private. But parents have no way of assessing the value of the curriculum, and there are very limited resources for curriculum development. So what he's done is set up a standards body that takes the best-practice schools in the area, because curriculum is always local, and spreads that curriculum and best practice to other schools. It gives the schools a certification that they can present to the parents as a mark of reliability. But it also introduces a competition between the schools to produce the best curriculum and then to spread it, and the hope is that that will ratchet up the quality, and the uniformity of quality, of the curriculum over time. So it's what you said, it's collaborative development, but at a different level of granularity, right? Can I ask you to clarify that for just one minute? Is the curriculum leaving the walls of the private school, or is it remaining its intellectual property? No, no, it leaves the walls. What they are is a transport and competition mechanism. They're the machine that allows sort of genetic material to spread to other schools, and the more successful ones to prosper.
That's essentially what they've done. Now, it's early days, and we don't know how successful they'll be there, but it's very precisely a collective intelligence thing, and it's around education and curriculum, just at a different level of granularity than the thing you described. The other side of what you said was trust. I think trust is exactly it. Or perhaps, more generally, social integration, something of that sort. For instance, when we looked at all the research groups here in the Media Lab, the highest-performing groups were the ones that had the best measures of social integration, and you could tell the social integration from their pattern of cell phone usage. So, cell phone usage. Yeah. People behave differently when they spend time with each other, when they trust each other, when they work cooperatively with each other. We leave these traces of behavior everywhere, and they relate to some of the things we really care about. Like, how do we relate to our work group, and how productive are we as a consequence of having a better level of social integration? You say you can identify, or at least you claim, and I accept your claim, that you can identify the healthier or more productive groups as against the less productive groups. My question is, is there anything in the apparatus and the technology or the strategies you're developing that would let you explain why that's true? Sure. Yeah, what are the differences? Well, the differences have to do with, as you said, essentially trust. With trust and willingness to engage other people, your tendency to engage other people within your group.
So, for instance, just to be very cartoonish about it, if you only talk to the same three people and that's all you ever talk to, there's not a lot of information and critique and so forth in your group, and you're not going to have a very productive, satisfying experience along many dimensions. On the other hand, if you have a lot of friends, if you feel open, if you feel trust with the people that you work with, there'll be a much more diverse set of interactions, the pattern of discussion will be broader, and you can see that, and that's indicative. It's not necessarily one-to-one causal. I'm not claiming that, but it's characteristic of groups that have trust, openness, high social integration. So I was going to generalize your question a little bit. I think trust is important, but there's actually a bigger thing of which it's a part that's even more important, or more useful in thinking about these things in many cases, and that is motivation. Basically, what would motivate the teacher to contribute to this, or what would motivate the person to contribute to Wikipedia, et cetera? Trust and social connections to other people are one thing that sometimes motivates people to do things, but there are others too. In Wikipedia, you might just like the actual act of contributing to an encyclopedia; whether you trust the other people there or not, you're still motivated to do it. Economists certainly tell us that a lot of people are motivated by money, and that's true. So even if you don't trust someone else deeply, if you at least trust them enough to do a transaction with them, that may be enough motivation to get you to do something. I have another example, kind of like yours, of a failed example of collective intelligence. It's one that we were involved in here. It's not a complete failure, but in a certain sense it was a failed example.
We were involved in a project to write a business book, Wikipedia-style. It was a joint project with MIT Sloan, the Wharton School, Pearson Publishing, and a consulting firm called Shared Insights. We were inspired by the idea of Wikipedia and said, why can't you write a whole book that way? The name of the book is We Are Smarter Than Me, which is a kind of attractive idea, and the project got quite a bit of publicity and enlisted over 4,000 people on the website. People who registered and contributed in some way, or at least potentially contributed in some way, to the book. So that all sounds good. The part that didn't work was that even though 4,000 people registered, only a few dozen ever actually contributed anything, and frankly most of what they contributed wasn't very useful. So in the end what happened was that a team of professional writers was hired to basically write the book, drawing in part on some of the things that people had contributed to the site. So a book was written, and it's about to be published, so in a certain sense it succeeded. But as an experiment in having a community write a book, Wikipedia-style, it didn't really work. The question was, why do you think you had 4,000 people and then so few ended up delivering? I'm glad you asked that, because that was actually the point of telling the story. The thing where I think we failed was in motivating them. 4,000 people were interested enough when they read about this project to try to see what was happening, and in order to see anything you had to register on the site, so they went and looked, but they weren't motivated enough to actually spend time contributing. Now, there were a few people who were motivated enough, but many of them probably didn't have the knowledge or the writing ability or the insight to actually write a business book that would be very interesting. Some did, but I think many didn't.
I think of the 4,000 people who registered, there were probably plenty who had the ability to write a business book if they had been motivated enough to spend the time actually doing it, but I think we failed at that first step of collecting the right people, that is, people with the ability and with enough motivation to do what needed to be done. If we had gotten past that point, there was a second step which we also hadn't really figured out, which was how to manage the interdependencies between the different parts of the book. That's relatively easy in the case of Wikipedia, because the interdependencies are, for the most part, only within a single article. The interdependencies between articles are much weaker: things like style and philosophy. If you have a whole book, a business book, there's a much higher expectation of integration and consistency and flow between the different parts. We had some ideas about how to manage that, but I don't think we had worked them out well enough, and I think we would have had to do more work to get that to work even if we had motivated enough people. Yeah, and to be honest, Tom, I think you definitely have this motivation problem of why somebody would participate in the first place: what would motivate that teacher to disclose their teaching plans for their science curriculum or something like that? And if there's no sense of community amongst teachers that they could identify with, the way in open source people say, I'm a hacker, and identify with that community, that matters. Secondly, also important, and I think actually to some degree a substitute for trust, is evidence of performance. In the open source world, we have the village idiot in each community, which is the compiler. It tells you how good your program is, how well it works, whether it compiles or not, what the problems are, right?
And the reason we have such heated debates in Wikipedia about articles is that there's no objective arbiter of quality. In software we can do it, right? But in other settings we don't have those things yet. So having evidence of performance becomes a substitute for trust. And if we're going to be in a distributed world with many people participating, many strangers participating, and we don't have the time to build trust in the traditional sense, we have to think about what the substitutes are, and what that means for us along the way. And then Tom's early work on interdependence actually comes back in as well, because how closely tied my work is with yours, and how much my work depends on your work, is also very important. My name is Devin Gupta. I'm a first-year student here at Sloan. And actually my question is a nice segue from the book you referred to. It's related to this idea of collective versus individual intelligence. If we think about the internet as a form of collective intelligence, with the ability to search vast troves of data and access it quickly, what's the impact of collective intelligence on individual intelligence? Are we collectively becoming dumber because we need to recall less? How is that going to impact us as individuals as we think about what defines intelligence in the future? Well, of course it all depends on your definition of intelligence. Most people would not define intelligence as the number of facts you know. So the fact that you don't have to remember all those facts, if you can easily find them on Google, doesn't mean you're less intelligent. It doesn't mean you alone are more intelligent either, but it does mean, and I think this is a key point, that the combination of you and Google is much more intelligent than you alone were before. So I think that's the main answer I would give to your question.
I mean, some people worry that if you give kids calculators and they forget how to do arithmetic, does that mean they're losing something really important? I'm not a big fan of that argument. You need to understand how arithmetic works, but memorizing every single multiplication fact, if you're genuinely assured of having a calculator with you all the time, probably isn't worth the effort. Tom, suppose you carried it further and asked about, say, grammar checks and spelling checks. Yeah, I'd say the same thing applies. If there's a real risk that you're going to need to produce the behavior at some time when you don't have the mechanical prosthesis, then maybe you'd better learn to produce the behavior by yourself. But if you're genuinely assured that at all the times you really care about producing the behavior you're going to have the mechanical prosthesis with you, why bother to learn to do it yourself? I actually think the question illustrates the sort of brainwashing we've had about the character of intelligence, that it's an individual characteristic. People don't have all the facts in their head. One of the major phenomena we have is called transactive memory, which is that you know who to ask to find things out. That's a major store of the things that you know. It's called tacit knowledge in some cases. It's actually the thing that makes organizations, families, and other sorts of things work. It's not that it's all in your head. It's that you know where to go find it. And the way you find it is usually other people. You want a quote for that? Sure. Herb Simon, as many of you may know, was a genius who won the Nobel Prize in Economics and wrote seminal work in half a dozen other fields, including psychology and sociology and organization theory. He once said, at least reportedly, most of what I know is in the heads of my friends. It's absolutely true.
The rhetoric around intelligence and competence, however, is not that. It's that it has to be between your ears. And that's not true at all. It's certainly not true of any sort of high-functioning manager or other socially connected person. So I think there's a misunderstanding of what it is to be intelligent. A lot of it is this transactive stuff. Also, there are some really interesting things going on in the field of psychology now. I thought I'd bring this up. There was a study in Science recently that compared how good people were at making decisions in complicated situations under two different conditions. In one condition, you were allowed to think about it and then make the decision. In the other, you were not allowed to think about it. You just had to make the decision. If you were not allowed to think about it, you did very much better than if you had to think about it. What class of decisions? Complicated purchasing behavior, selecting items that have many, many different features. It clearly depends on the kind of behavior that you're talking about. Well, they looked at different sorts of decisions, and for simple things, thinking about it was clearly better, right? As the thing got more complicated, your intuition, your recognition of what feels right, carried the day rather dramatically. See, that's the model for professors giving grades. They shouldn't think; they should just do it, from the first impression. Right. Well, that's also how students grade professors, right? The first 30 seconds of your appearance, not what you said, correlates extremely highly with your final rating in the course. So I always wear a jacket for the first 60 seconds just to be safe. And then I take it off.
So just to summarize, it sounds like you're saying that in a networked environment the value of recall is near zero. But there are other forms of intelligence, and I guess intelligence itself is being redefined, if it was ever characterized by recall. Thanks. I don't think it's being redefined. I think it's always been, as Sandy said, a collective, transactive-memory-based endeavor. It's just that we can see it more clearly in this kind of connected world. Yeah, I think that's right. I wouldn't say redefined, but new forms of it are possible now in this connected world. Federico Lucifredi, from Harvard and Novell. When I was looking at Professor Pentland's data, I was obviously wondering about the Big Brother implications. But there is also another side that I don't think was addressed, which is that once you start making decisions on that sort of data, there are tangible actions or rewards attached. What prevents people from gaming the system? If you're using an unobtrusive way to gather this data, like the tags that you've been using in your particular experiment, then, to use the Dilbert metaphor, what prevents Wally from going around humming to everyone in sight to be flagged as the greatest communicator in the company, immediately subject to rewards or a lesser workload? This is a very managerial question. But obviously, once you're attaching decisions to this, people are going to start to look at how they can game the system. People already game the system, right? People send out lots of emails so that they can look like they're paying attention. People walk around regularly and shake hands and act very positively so that they give a different sort of impression. This is really just observing those behaviors and then relating back to people the consequences of those behaviors. So if you do X, then that has these consequences. So you should think about it. You can still choose to game it.
But if you put all that effort into it, why don't you do something that actually turns out to feel good and be productive? You have to tie it to performance. In the end, you can observe these patterns of interactions, but if they're not tied to some observable outcome, good or bad, then all you have are patterns. And I think Sandy's work is going to help us look at the behavioral patterns and then tie them back to performance. I think that's actually a good example of a bad way of using some of these things. One bad way of using the data Sandy is talking about would be to say, we're going to reward you for intermediate measures, like how many people you talk to. A better way, I think, is to say, we're going to reward you for how many products you sell. But by the way, if you'd like some coaching, we've noticed that the people who sell the most are the ones who have the most contacts with other people, and so it might well help you to increase the number of people you talk to. We're not saying we're going to pay you just for talking to people. We're still going to pay you for selling products. But here's some advice that might help you do that better. That's this notion of a reflective aid. It's very difficult to know how you come across to other people. It's very difficult to remember how you behave, because you're so concentrated on the facts and the day-to-day requirements of what you do. So to be able to reflect on how you actually behave, how you actually come across to other people, is useful. And the typical thing we've done is we've also allowed people to anonymize their data and then compare themselves to other people who are more productive, happier. Again, as a reflective aid. You behave this way; people who have higher job satisfaction are a little bit different, they do this. We're not going to tell you what to do. It's your choice. I'm Michael, and I'm unaffiliated.
You haven't talked much about managing and rating contributors, and I wondered what you felt about that, whether it's a good thing or a bad thing, and whether it should be done by moderators or contributors or the consumers. And then the second question is about systems figuring out what they should know but don't. For example, in Wikipedia, you look at different entries on similar people, and some are short and some are very long, obviously because someone took the time or was interested. The system should be able to figure out, hey, I should know this about this person. It could perhaps even go and figure out who might be likely to know the answers to these questions. Do you know if anything's been done in that kind of thing? So the first question was, what should happen about rating contributors? This is in the context of Wikipedia, is that right? Yeah. So frequent contributors to Wikipedia develop a real reputation among other frequent contributors. That is, I believe, clearly a motivator for some people. Karim could tell us more about that. I think there are mixed feelings about rating people, because as soon as you start to rate, we're back in this question of gaming. If what matters is ratings, and how frequently you contribute or how frequently you post on the email list, then that's going to skew the behavior very differently. I know that in certain open source communities, people can easily look at an email list, you can easily look at CVS commits, and say, this person's obviously doing much more in the community than this person at the bottom. But in fact, when you look closely at my pet interest, the patterns of innovation and novelty and where novelty is coming from, much of the novelty in open source, and we now have some evidence in Wikipedia as well, is in fact coming from peripheral players, people doing one thing and that having a big impact.
And then the frequent contributors are polishing it, integrating it, and so forth. But the germ of that one idea came from that one guy who did something and then went away, or one gal who did that and went away. But if you were to start creating incentives, once you have ratings, you can have incentives, you can have rewards, and you can see where that leads you. It's just going to select on numerical behavior, not on what matters to the community. So I think there is a tension here. I think underlying your question there's something about authority. Who has the authority to say, if I make this edit in Wikipedia, that this should stand over some yahoo who knows nothing about innovation? These debates about authority are again at the heart of Wikipedia, which they're all trying to figure out: do we look at credentials or not? The fact that somebody's a professor at MIT, does that count more than an 18-year-old kid in Transylvania? And those things, again, are confounded when you start to think about ratings. I don't know that I was quite thinking about it that way. I was thinking more that if you post something and it's out of left field, perhaps the group should be able to look at it and almost close it out, so it's on the contribution, not on the person. Mechanisms that are cooperative, right, for rating. And that cures some problems, but at least my impression, and probably you know more, is that those things tend to have a short time horizon. You know, the first time somebody said F equals ma, that was really from left field. But it's pretty good, right? So things that are really novel tend to get destroyed by cooperative ratings. Peer review at NSF is a perfectly good example. If it's really innovative, NSF won't fund it. If it's part of the ongoing discussion, you've got a good chance.
But do you really want to limit yourself to funding just the ongoing discussion? Don't you want those radically new things? And those often take decades to prove themselves out. So I don't know that there's a good way to solve this; nobody's come up with one.

There was a second part to your question. Yeah, that was about systems figuring out things that they should know but don't know.

Well, Wikipedia does that by having pages that say, this is a stub, please contribute. So they do that to some degree already. Again, it's very much interest-based, in the sense that nobody is paying the volunteers to contribute to Wikipedia, right? Wikipedia is driven by the idiosyncratic self-interest of the volunteers who are participating. And this is the same sort of thing as with Linux: there was always this complaint that Linux never scaled to more than four or eight processors, and everybody said, well, that's a problem with Linux. No, it's because most of the developers at the time didn't have supercomputers in their basements to try to scale Linux up. When larger firms showed up with the resources, Linux scaled fine, in the same kind of distributed fashion. The saying in these systems is always: if you have a problem, if you have a bug, fix it yourself. We invite you to come and fix it yourself. That's the best way to participate.

Hi, my name's Steven Bushkin, and I'm with Latitude; we're a consulting firm. We work with media companies on strategy and content. And I'm interested in a couple of things.
The first is whether any of you have dealt with the difficulty of balancing the individual visionary, which we often see in media, against the needs of the mass audience and the collective wisdom and desires of that audience; in particular, the familiar idea of artistic vision versus a more collective vision, whether in the arts or in business. That's my first question, and I'll hold my second until you dive into that.

I think you're asking whether a group can have an artistic vision, or whether that's something that's the property of the lone artist. Is that what you're asking?

That's part of it. Or, how does the field of collective intelligence start to deal with the fact that, whether in business or in art or any other field, there are people who do have a strong vision that is artistically or otherwise well manifested, and who then have that pulled back by, quote, the masses or the audience in some larger way? How do you deal with that constant tussle?

So I have some thoughts about it. One question people sometimes ask, which is related to your question though not exactly the same, is: can a group ever really write a good novel or a good piece of literature? Isn't that necessarily something that can only be done by a single person? In part, I think the rationale behind the question is that great literature requires deep integration. The kinds of things we didn't figure out how to do on the We Are Smarter Than Me book are the kinds of things that a single individual, while it's not easy, can often do much more easily than a whole group of people, because one person can communicate with themselves better than people can communicate with each other. So part of the answer I give is that it's certainly possible to create great literature in a group.
In fact, probably the best example of that is the bestselling book of all time, the Bible, which was written by a group, and written in part as a record of stories that were repeated by people. Legends in general are in some sense literature written by a group. Part of how the group works is that things keep getting repeated and repeated, and some parts stick and other parts don't. Eventually, over time, the legend comes to acquire a kind of resonance and polish that I believe comes in part from its being tailored and refined over many, many retellings by a whole group of people. So I think that's an existence proof that it's possible for a group to write great literature.

There's another question, though: is it possible now to write great literature with groups in some new way? Is there something about the new technology available to us, and the way it could be harnessed, such that groups of people whose minds are closely coordinated through the technology could come up with artistically insightful, elegant, beautiful, and coherent products that wouldn't have been possible before? I think that's an interesting question to which we don't know the answer, but the answer is certainly not no, or at least not obviously no.

I'm always struck by my perception that everything people do is a dialogue with other people. When you said great art, that implies an audience that appreciates it. In fact, art is a dialogue with the people who critique it and respond to it. So it is, in one sense at least, a collective artifact. It's a resonance within a community, not the individual or the thing itself. And then, of course, there's the classic example, which is weak in some ways: the enterprise of science, which is a collective thing and which is, in fact, a story.
I mean, all this business about truth and so on. Science is a story that resonates, and its critique relates to how well it worked out in the real world. But in fact, it's a story, and it's clearly a collective one.

I would just add that when you look at the history of many artistic milieus that have popped up, there have always been groups of people working together. When you look at French Impressionism or other movements, people were always trading techniques and perspectives, and the artists came out of this setting of people freely revealing their knowledge back to each other. I think there's a question in here about scale: typically those groups have been six to ten, maybe twelve or fourteen people. But now, can you scale that up to 100, to 1,000, to 10,000? And where does the individual artist fit in a setting of 1,000? Will the 1,000 swamp out the one person? I don't have an answer for that. But certainly at the small scale, we've always seen that rarely does an individual come out of nowhere. There's always this setting of people working together, trying to make sense of the world, and coming up with a new perspective.

Does that apply to entrepreneurship, or anything beyond classic art? My second question is whether you, or anybody in the field that you know of, has dealt with how collective intelligence manifests in different kinds of activities, which might include gathering intelligence, as Wikipedia does, and posting it; making decisions; and executing decisions. One would imagine that collective intelligence manifests differently, and more or less successfully, in those different kinds of endeavors in any kind of organization. I don't know if you've dealt with that or not.

I'm not sure I understand the question.
Something like Wikipedia, which is a gatherer of information, has a different manifestation of collective intelligence than a business that needs to make a decision and act on it. Like a prediction market. For instance. And I don't know if there are learnings already about how collective intelligence works, and works better, in those different kinds of areas.

Let me encourage brevity. One last question after this answer is done, and then we have to stop.

My sense is that the question is too complicated to answer in any simple way based on anything we know so far. Yeah, I think we're still early with our research. Tom's center might have some answers for us in a while.

Hi, good evening. Marshall Vale; I used to manage a major MIT open source software project. First, a comment on the earlier question about collective intelligence in education, whether it's being used pedagogically. My wife teaches a freshman seminar here at MIT on how students can use wikis and blogs together to improve their academic experience. But I would say that's still early on.

To Dr. Malone: I went to the HBS talk by the gentleman from Shared Insights who spoke about the We Are Smarter Than Me book, and his list of ahas about it was very reflective of my experience in managing open source software. It was interesting to see the list of assumptions about how collective intelligence would just solve some of the problems that the open source community on the software side had to deal with, such as usually needing a core group of people to push most of the work forward, and someone to champion it. There are interesting lessons there that we should, probably through Karim's research, pull out, to make sure it isn't just assumed that you get the community together and then, you know, step three: profit.
But one thing I was very curious about is how we motivate people to contribute, and the networks you represented with the social badges. One of the things that's successful about open source is that the accountability is very open. We're leaving breadcrumbs, as you commented; in an open source environment, it's easy to look back and see who the good performers are. As a manager in my current role, the research you were doing with the bank on social networking, seeing that flow of information and having it as an aid for performance, would be great. But I was very curious whether, in your experiments, you turned that information around to the entire organization, made it open, and let the members of the organization try to reflect on it and address it at the community level, as opposed to the managerial level.

The way we've proceeded, which works best from a number of points of view, one of which is buy-in of the participants, is that people get to reflect on their own information first, and then they're able to contribute it in an anonymous way to group information that everybody has available, okay? Including that you can see what your manager does, or what typical managers do. And it's a complicated thing; I don't claim to have it all worked out. But people find reflective aids sufficiently valuable that they're willing to participate. If you can see yourself as others see you, if you can reflect on what you do relative to others, that's something that excites a lot of people, once you think about it. So providing that as the base service, and then allowing people to opt in to something more general, seems to be a way to move forward. And then there are a lot of questions about exactly how you anonymize it and what you measure.
One thing that is perhaps very interesting: earlier this spring at the CIO Symposium here at MIT, there was a topic around personnel and hiring in IT organizations, and a trend toward hiring people who are more aware of their social connections. I've certainly had to deal with plenty of people who have no ability to self-reflect, or don't want to. Perhaps, if these measurements are in place, you're going to give more weight to people who are more self-reflective, or more aware of their social connections, as potentially higher performers. There are interesting implications there, since corporations are usually focused just on results and don't care how they get there.

Well, you can think of it as a training aid too, right? You say to people: people who are aware of these things have better success over the long term, or are happier, or things like that; here is what seems to be your situation, if you want to think about that a bit. Interestingly, by looking at how people talk to each other during interviews, not what they said, but that first minute or so of interaction, you can predict quite accurately who will be hired and who won't be. It comes across very quickly, and it's independent of qualifications, what they said, et cetera.

Thanks. We've come to the end. Thank you very much.