I'm honored to chair Professor John Naughton. I am Sugata Mitra from Newcastle University, and Newcastle, as you may know, is well known for thriving in a colder and more challenging climate. John Naughton is Professor of the Public Understanding of Technology at the Open University. He is a fellow and Vice-President-elect of Wolfson College, Cambridge, and an academic advisor to the Arcadia Project at Cambridge University Library. What I know him best for is as the Observer's internet columnist, and I'm one of his readers. He's co-founder of Cambridge Visual Networks, a Cambridge-based startup. He has a new book coming out in January, to be called From Gutenberg to Zuckerberg: What You Really Need to Know About the Internet, so I'm quite looking forward to that one. Well, I'm not going to take away more time from Professor Naughton. Please. Hello, folks. Thank you for the invitation. During Britain's recent outbreak of recreational looting, one thing that struck me was the way in which the establishment seemed to be taken aback by the looters' alleged use of social networking technology. We were told by the mass media, and the news was enthusiastically, if mindlessly, relayed by government ministers, that a new and sinister force was loose in our society. It was called Blackberry Messenger. And as I listened to this establishment and media cant, the question that was uppermost in my mind was: where have these people been all these years? Because apart altogether from the irony of seeing a device, the Blackberry, which had once been the ultimate badge of corporate status and prestige, now apparently metamorphosed into a working tool for hooligans, there was the fact that PIN messaging on Blackberries is not only a long-established technology, but is, or at least was, one of the reasons why governments across the world used to want Blackberries for their officials. 
And you will remember, I guess, that after Barack Obama became president, there was a good deal of comical speculation in the media about the fact that the new president passionately wanted to hold on to his Blackberry, despite the concerns of his security advisors about the wisdom of entrusting presidential emails to a wireless device. But behind all these amusing observations there was a more serious question. Why is our society apparently always playing catch-up with technological innovation? Why are our institutions (companies, governments, schools, universities) habitually ambushed and astonished by new technologies, particularly communications technologies? Why are technological innovations apparently so difficult to predict? And what might we be able to do about that? Now, at one level, the answer to this question is easy. The future is unknowable, period. When Harold Macmillan was prime minister, he used to go grouse shooting in August. And once an enterprising journalist caught up with him and asked him a question: Prime Minister, what keeps you awake at night? "Events, dear boy, events" was Supermac's famous reply. You never know what's around the corner. Now, that might be true of politics, but it can't be entirely true of technology. Because, after all, no major technology appears completely out of the blue. It takes about 25 years, in fact, for a technology to become mainstream. The PC, just to take an obvious example, dates from 1975, or 1977 if you take the Apple II as the first usable manifestation of the technology. The TCP/IP internet dates from 1983, and the ARPANET goes back to the late 1960s. SMS dates from the early 1990s. Facebook's been around since 2004, and even Twitter has been around since 2006. So if governments and institutions in recent years have been taken by surprise by what's going on, then they clearly haven't been paying much attention. Blackberry Messenger, BBM, provides an interesting example of this lack of attention. 
As the looting and disorder spread to the point where it could not be ignored even in the most expensive parts of Tuscany, I'm willing to bet that nine out of ten government ministers and media commentators could not tell you the difference between BBM and SMS. No surprise there. But actually, many tech-savvy observers overlooked BBM too, so obsessed were they with the iPhone-Android battle, the decline of Nokia, and the apparent eclipse of Blackberry in the smartphone market. What most observers failed to notice is that one important segment of the population, that is to say teenagers and university students, were buying Blackberries for one reason and one reason only: Messenger. Why? Because, unlike SMS, it is not only fast and reliable but also free. BBM is now used by more than 39 million people worldwide, with usage up six times over the past 12 months, making it a real threat to operators' text messaging revenues in some markets. Which is why, by the way, we didn't hear any bleating from Vodafone or O2 about the clueless proposition by the government that we might close down Blackberry Messenger. So some of the problems that we have with technological forecasting have nothing much to do with technology per se and quite a lot to do with our incapacity to notice what's going on. And that's what brought William Gibson to my mind. Gibson, you will recall, is best known as the guy who invented the term cyberspace to describe the world behind the screen. He's a stimulating novelist who has become a kind of sage of technology, even though he appears to take a pretty dim view of it. On Twitter, for example, he tweets as @GreatDismal. He's also famous for an offhand remark which, to my mind, is one of the smartest things anybody's ever said about technology. The future, said William Gibson, is already here. It's just not evenly distributed. 
But if the future has already arrived, it's only discovered if you know where to look and if you're willing to pay attention. So the question I draw from William Gibson's aphorism is this: why don't we pay attention to what's going on? And here, folks, we dive into very deep waters. The recent history of communications technology is littered with instructive case studies of institutions and industries which failed to pay attention. The music industry is perhaps the best known. In 1980, music went digital with the launch of the compact disc. The first CD players appeared on the British market in 1982. So from then on, the recorded music business looked like this. The companies owned recording studios where singers and bands went to make noise. That noise was then digitized, so the output of the studio was a stream of ones and zeros. In consumers' homes stood those shiny and also expensive CD players, which were essentially machines for converting bit streams back into noise. After the launch of the compact disc, the problem for the recording industry was how to get those bit streams from its studios to the consumer. It did this by making plastic discs, pressing the bit streams onto them, putting on a label, and packing them in transparent plastic boxes which always broke. The boxes were then packed into bigger boxes and then into crates, which were loaded onto trucks which took them to distribution centers. Then other trucks collected the crates and delivered them to the retail shops, which took the discs out of the boxes and displayed the boxes. Customers would then come into the shop, browse the boxes, and eventually choose one, which was brought to the counter and reunited with its plastic disc upon payment of a fee. The customer then went home, inserted the disc into his or her CD player, and lo, noise flooded the room. So what the industry did was, to use Nicholas Negroponte's famous distinction, to ship atoms in order to ship bits. 
This was expensive and inefficient. Upwards of half the cost of a CD was accounted for by distribution costs. But in 1982, there probably wasn't any other way of getting bit streams from recording studio to customer. And then in 1983, along comes the internet, which is, after all, a global machine for shipping bit streams at very little cost. So you'd have thought that to the record industry this new network would look like manna from heaven. I mean to say, a technology for eliminating 50% of your distribution costs at a stroke. If that's what you thought, you'd have thought wrong. Of course, in the early years of the CD, that's the mid-1980s, the industry would have been justified in rejecting the internet as a viable way of distributing its product, because at that stage the net was effectively confined to the worlds of academia and research laboratories. But from the 1990s onwards, and especially as domestic broadband connections increased, the network came to look increasingly like a viable distribution system. And yet the record industry effectively ignored it. Not only that, it persisted with a distribution system whose economics meant denying customers what they wanted. CD technology meant that single tracks were economically unviable, and so the industry tried to force its customers to buy albums. Yet the demand for singles, or tracks, never really went away, as we discovered when Napster, the first great file-sharing system, provided an easy way to get them. The MBA students of the future will pore over the file-sharing phenomenon and the record labels' response to the net as a case study in corporate psychosis. What other industry would ignore a technology that promised to halve its operating costs? What other industry would turn its back on what many of its best customers wanted most? What industry would refuse a way of providing immeasurably better service to its customers? The explanations are doubtless complex. 
Some of them have to do with the way the industry was structured, and in particular how its executives were incentivized: their bonuses were geared to shipping atoms, not bits. Other explanations center on the fact that the senior management of record companies was heavily freighted with accountants and lawyers. But whatever the reason, the fact is that in the 1990s the record industry looked the future in the eye and blinked. And it took a visionary from the computer industry named Steve Jobs to realize the potential of what somebody once called the celestial jukebox. A second case study is provided by an industry I happen to know rather well: newspapers. I've been a newspaper columnist as well as an academic all my working life, so I have what my famous countryman Conor Cruise O'Brien once called a foot in both graves. Newspapers, like most corporate enterprises, are value chains. That is to say, they link unprofitable activities, cost centers, with profitable ones in such a way that revenues from the latter exceed the costs of the former. In the case of newspapers, journalism, that's to say the gathering, collating, editing, and publishing of news and opinion, is an expensive, troublesome, loss-making activity. But in order to attain the readerships and circulations that make advertising profitable, one needed all that expensive, loss-making journalism. It was the thing that attracted readers and sold newspapers, providing sales revenues and supporting the rates that one could charge advertisers. Now, when the web first appeared, newspaper editors and proprietors thought that the main threat it posed would come from online news distributed for free. They were wrong. It turned out that the bigger threat was to the profitable part of their value chain: classified advertisements. As Evans and Wurster point out in their book Blown to Bits, what the internet does best is dissolve value chains. 
The first signs of this came in 1995, when an entrepreneur named Craig Newmark started an email list featuring events in San Francisco. A year later, the email service morphed into a website, craigslist.org, which allowed people to post free classified ads while charging for job and property listings. From its original base in San Francisco, Craigslist steadily expanded to cater for an increasing number of cities, initially in the continental United States but later in many other countries. Whenever it opened a site for a new location, the impact of Craigslist on newspaper revenues was dramatic. By 2004, for example, it was reckoned that it was siphoning off between $50 million and $65 million in advertising revenues from print publications in the San Francisco Bay Area alone. "In the Bay Area," wrote Bob Cauthorn, a former media executive at the San Francisco Chronicle, "if you're looking for a job, a house, an apartment, anything to put in the house or apartment, basically if you want anything a classified market can provide, you don't need to go anywhere but Craigslist. Indeed, if you want to be sure you're seen at all, you must use Craigslist, and you can pretty safely ignore the print newspapers." Unquote. And this is a former print executive. Now, of course, the success of Craigslist was partly due to the fact that personal listings were free, as was access to the site. But it was more to do with the fact that classified advertising simply works much better on the web. Instead of having to wade through pages of densely printed ads in nine-point type, looking for that used car or that chic studio apartment in a certain neighborhood, all you have to do is type a query into a search box and, bingo, there's a list of possibilities. Given that, the surprising thing about the web is not that it so radically undermined the economics of print newspapers, but that they didn't see it coming. They failed to perceive the real threat to their value chain. 
My last case study is Wikipedia. Once upon a time, an encyclopedia was a shelf full of bound printed volumes. For most of my life, the gold standard in the encyclopedia business was the Encyclopaedia Britannica, which in its 15th (1974) edition ran to 30 volumes and required its own special bookcase. A few months ago, while working on my book in the wonderful reading room of the University Library in Cambridge, I found myself suddenly needing to check a date. In the normal course of events, I would open my laptop and go onto Google. But on a whim, I decided to do something different. There must be a copy of Britannica here, I thought, so why don't I use that? So I left my laptop unopened and went looking. I found what I was seeking sitting in a long, serried row of brown volumes on a shelf in the corner of the reading room. And as I took one of them down, I had one of those quiet moments of revelation that James Joyce called an epiphany. It was clear, first of all, that this edition of Britannica hadn't been used much. Indeed, one could tell from the condition of the volumes that most of them had more or less never been opened. The biggest shock for me came from realizing how dated the entries were and how narrow the coverage was. And suddenly I realized how much the world of encyclopedic knowledge has changed, and how much Britannica belongs to the past. The prime agent of this change is a development that nobody, except perhaps Douglas Adams, would have predicted: an online encyclopedia, collectively produced and edited by amateurs, who have created what is effectively the greatest reference work the world has yet produced. I've just checked the Wikipedia site. It is reporting that the English-language edition currently has 3.71 million articles. The German edition has 1.28 million. There are 1.15 million in the French edition, 0.84 million in Italian, and roughly the same number in Polish. 
There are editions in Chinese, Japanese, Portuguese, Dutch, Russian, Swedish, and even Esperanto. Wikipedia has about 360 million readers at the moment and is the seventh most visited site on the web, behind Google, Facebook, YouTube, Yahoo, Baidu, and Windows Live. All of these are internet properties run by vast corporations with annual budgets running into billions of dollars. In contrast, the Wikimedia Foundation, which is Wikipedia's parent organization, had about 40 employees in mid-2010 and an annual budget of less than a million dollars. I don't need to tell you, this audience, that Wikipedia's coverage is astonishing in breadth, depth, timeliness, and sometimes triviality. There are, for example, detailed articles on TV reality shows, extensive entries on the Nintendo Pokémon video-game franchise, and on the Barbie fashion doll created by the Mattel Corporation. But at the same time, there are detailed, well-informed, up-to-date entries on the internet protocol suite, the economist and philosopher Friedrich Hayek, the Bose-Einstein condensate, chaos theory, Jacques Lacan, the celebrated French psychoanalyst, Lloyd George, Ludwig Wittgenstein, and so on. The list is, or appears to be, endless. And if you search for a topic on which Wikipedia does not at present have an entry, you're presented with an invitation to create one. Now, this fact alone, that anyone can create or edit a page, is what usually gets the skeptical juices flowing, especially in academia. For most people, it suggests a recipe for chaos. It reminds me of the bumblebee, which, according to the laws of aerodynamics, ought not to be able to fly. But fly it does. The same, it might be said, of Wikipedia: it ought not to work, and yet it does. 
Over the years since it was founded, the site has grown not just in size but also in intellectual stature, to the point where it's become indispensable for many routine purposes and essential for some important ones. Could I just have a show of hands here of people who have not cited Wikipedia in something they've written? Anybody here? There are a few, yeah, a few, okay. Thank you. But the thing that often strikes me about it is what happens whenever a major catastrophe strikes in the world. For example, on the 6th of April 2009, a large earthquake struck central Italy at 01:32 GMT. By 02:20 GMT, there was a Wikipedia entry about it, which was endlessly updated during the succeeding hours and days. The same thing happened with the Indian Ocean tsunami of 26 December 2004 and with the Japanese earthquake and tsunami of 11 March 2011. In each of these cases, where conventional news organizations struggled to cope, Wikipedia turned out to be the best source of information and explanation through the day. However one looks at it, Wikipedia is an astonishing phenomenon. Three aspects of it stand out for me. The first is the astonishing amount of collective human effort that it represents. In his book Cognitive Surplus, the cultural commentator Clay Shirky makes a stab at quantifying it. Suppose, Shirky writes, we consider the total amount of time people have spent on it as a kind of unit: every edit made to every article, and every argument about those edits, for every language that Wikipedia exists in. That would represent something like 100 million hours of human thought. It's a back-of-the-envelope calculation, he says, but it's the right order of magnitude. The intriguing thing for me about this massive collective investment of time and effort is that it is voluntary. People create and edit Wikipedia for motives that have nothing to do with the more conventional motivations of human action. This is what Yochai Benkler called social production. 
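A back-of-the-envelope calculation of the kind Shirky describes can be sketched in a few lines. The input figures below are illustrative assumptions of mine, not Shirky's actual inputs; the point is only that plausible values land in the 100-million-hour order of magnitude.

```python
# Back-of-the-envelope sketch in the spirit of Shirky's estimate.
# Both figures below are illustrative assumptions, not Shirky's data.

edits = 400_000_000      # assumed total edits across all language editions
minutes_per_edit = 15    # assumed average effort per edit, arguments included

total_hours = edits * minutes_per_edit / 60
print(f"{total_hours:.0e} human-hours")  # on the order of 1e8, i.e. ~100 million
```

Vary the assumed inputs by a factor of two or three and the answer stays in the same order of magnitude, which is exactly what makes the estimate robust.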
That is to say, creative activity that lies outside the marketplace. And in that sense, it's another instance of the rise of user-generated content that has been enabled by the web. But there is one important difference. Other kinds of user-generated content, for example blogging, publishing photographs on Flickr, or posting home-produced movies on YouTube, are mostly motivated by desires for individual self-expression. Whereas the effort that goes into Wikipedia is directed at a collective goal: that of creating the greatest work of reference that has ever existed. The second thing that's fascinating about Wikipedia is why such an ostensibly uncontrolled and free enterprise has not degenerated into chaos. The answer lies in a combination of social conventions and technology. The site has remarkably few formal rules, but it does have a philosophical ethos that clearly translates into norms that govern behavior. This ethos includes a commitment to transparency about editorial decision-making, a guiding principle that contributors and editors should strive to adopt what Wikipedia calls a neutral point of view, a commitment to the notion that there should be no gatekeepers as far as content is concerned, and a pragmatic design philosophy that emphasizes the value of creating something that works now and assumes that problems can be dealt with as they arise. Without the right technology, however, this philosophical ethos might have been aspirational rather than operational. What made it practical was the adoption of the wiki software developed by an American programmer, Ward Cunningham, in 1994-95. Cunningham's great idea was to invent a way that enabled web pages, which had hitherto been read-only objects published on servers, to be edited on the fly by readers in their web browsers. And the adoption of this technology by Wikipedia's founders in 2001 was arguably the most important decision they ever made. Why? 
Because it enabled the site to implement its philosophical ethos, as it were. Editorial transparency was ensured by the fact that the software automatically created and maintained a discussion page alongside every main page. This allowed people to explain their edits and allowed those who disagreed to argue back in public. Controversial edits made without any corresponding explanation on the discussion page could be reversed, on the grounds that the absence of an explanation provided prima facie evidence for being suspicious of the changes. The discussion page provided a channel for debate that also introduced potential contributors to the social norms governing editing, and thus may have eased their transition from being mere readers to becoming editors. And wiki software also provided an insurance policy against what many of the skeptics thought would be Wikipedia's greatest weakness: its vulnerability to vandalism. Because the program keeps track of every single change and logs all activity on the site, it made it trivial to undo damage by simply reverting to the previous version of the defaced page. In that sense, the technology dramatically lowered the cost of dealing with vandalism and boosted the self-correcting capacity of the site. Now, in their different ways, these three case studies I've talked about, online music, newspapers, and Wikipedia, illustrate how we are so often blindsided by our conventions, mindsets, paradigms, and business models. In each of these cases, the technological future had indeed arrived, but those most affected by it couldn't see that. There are none so blind, the biblical saying goes, as those who will not see. Record company executives failed to recognize that the internet rendered compact discs and the business models built around them obsolete. And when they eventually did recognize it, they saw it as a mortal threat rather than as the commercial opportunity of a generation. 
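The mechanism that makes vandalism cheap to undo can be illustrated with a minimal sketch. This is not MediaWiki's actual implementation, just a toy model of the underlying idea: every revision of a page is kept, so repairing damage is one operation, re-publishing an earlier version.

```python
# Toy model (not MediaWiki's real code) of why wiki vandalism is
# cheap to repair: the revision history keeps every version, so a
# revert is just re-instating an earlier revision as a new one.

class WikiPage:
    def __init__(self, text):
        self.history = [text]          # revision log: nothing is ever destroyed

    @property
    def current(self):
        return self.history[-1]        # readers see the latest revision

    def edit(self, new_text):
        self.history.append(new_text)  # edits append rather than overwrite

    def revert(self, steps=1):
        # Re-publish an earlier revision as a new one, so the
        # vandalism itself also remains visible in the audit trail.
        self.edit(self.history[-1 - steps])

page = WikiPage("Bumblebees can fly.")
page.edit("BUMBLEBEES ARE A HOAX!!!")  # vandalism arrives
page.revert()                          # one cheap operation repairs it
print(page.current)                    # Bumblebees can fly.
```

Note the asymmetry this creates: defacing a page and repairing it each cost one edit, so defenders are never worse off than attackers, and the full log makes every change attributable.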
Newspaper journalists and managers spent so much time and attention focusing on the strengths and weaknesses of online news that they failed to see that the real attack was coming from a completely different direction. And a mindset which couldn't conceive of a non-hierarchical way of creating an authoritative reference work couldn't take Wikipedia seriously until it was too late. Now, there's an obvious objection to this line of thought. It's too smug, based as it is on the fact that hindsight is the only exact science. Life, you say, can only be understood backwards, but it has to be lived forwards. And it's easy to see with the benefit of hindsight how misguided the record labels were when they rejected the potential of the net and left the gap open first for Napster and then for iTunes. The problem with decision-making, you say, in the real world is that it has to be done in the face of an unknowable future. And that's true. So let's go at this problem from another perspective, by recognizing that even if we are more perceptive than the music executives and newspaper proprietors of the 1990s, we still have to recognize that making predictions about technology is fiendishly difficult. There are two main reasons for this. The first is that most major technological developments are, in fact, the outcome of very complex combinations of factors. In that sense, the popular image of technological progress as the output of a gifted inventor, like, say, Thomas Edison, is a very poor guide to contemporary technology. The writer who has expressed this best in recent years is the economist and complexity theorist Brian Arthur. In his 2009 book, The Nature of Technology: What It Is and How It Evolves, he sets out the view that a technology is what he calls an assemblage of practices and components, and that the process by which technology develops is what he calls combinatorial evolution. 
What he means by that is that novel technologies evolve through combinations of existing technologies. If you open up a jet engine, he writes, you find components inside: compressors, turbines, combustion systems. If you open up other technologies that existed before the jet engine, you find some of the same components. Inside electrical power generating systems of the early 20th century were turbines and combustion systems. Inside industrial blower units of the same period were compressors. Technologies inherit parts from the technologies that preceded them. So putting such parts together, combining them, must have a great deal to do with how technologies come into being. Technologies somehow must come into being as fresh combinations of what already exists, unquote. Among other things, Arthur's theory of technology explains why the pace of technological change is accelerating. In earlier times, there were simply fewer practices and fewer components to combine. But now we have a seemingly infinite number of technologies to work with, and they can be combined in a very large number of ways. And every time a new technology emerges, the possible or potential combinations multiply in a highly nonlinear, unpredictable way. If this is true for jet engines, nasty, heavy, lumbering physical monsters, then it's true in spades for information technologies, and especially for software, which after all exists only as weightless bits. And if you add in the fact that many years ago we developed a network, the internet, whose main purpose in life is to enable combinations of and interactions between software entities, then you can see why the process of combinatorial evolution postulated by Professor Arthur is so powerful and so terribly unpredictable in the IT area, which is why technological forecasting is such a nightmare. The second complication is that, at least in the case of IT products and services, users rule, OK. 
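The nonlinearity Arthur describes is easy to see with a little counting. If, for illustration, we model a "new technology" as any combination of two or more existing components (a crude simplification of Arthur's richer account), then the number of possible combinations grows exponentially with the number of components available:

```python
# Illustrative counting sketch of combinatorial evolution: treating a
# candidate technology as any set of 2+ existing components (a crude
# simplification of Arthur's account), the design space explodes.

from math import comb

def possible_combinations(n_components, min_size=2):
    """Distinct combinations of at least `min_size` of n components."""
    return sum(comb(n_components, k)
               for k in range(min_size, n_components + 1))

for n in (5, 10, 20, 40):
    print(n, possible_combinations(n))
# 5 components yield 26 combinations; 40 yield over a trillion.
```

Each new component roughly doubles the space of possible combinations, which is one way of seeing why forecasting gets harder, not easier, as the stock of existing technologies grows.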
By that I mean that, in the end, it's users who determine the fate of technologies. The intrinsic merits of a particular service or program or device are not what will determine whether it thrives. It can be the coolest piece of technology ever and still bomb. What matters is whether people adopt and use it. That's not to say that a technology which isn't adopted is always a waste of effort and resources, because it may be combined in some unexpected way in the future to form a future technology. So in that sense, for example, the Apple Newton, which was originally deemed a failure and indeed terminated by Steve Jobs when he returned to Apple in 1997, probably shaped the thinking that eventually produced the iPad. But in general, if a technology isn't adopted by users, then it might as well not have existed, and users will only adopt things that meet their needs. Now, does this eternal truth, that users will only adopt things that meet their needs, make technological prediction easier? And the answer is, sometimes it does. For example, from the outset it was absolutely clear that the mobile phone would be a world-shaping product, because it satisfied a deep-seated human need for communication. And besides, there was always something weird and primitive about the notion that telephones had to be tethered to the wall, like goats. And of course, mobile phones had been in Star Trek for generations. Facebook took off because it satisfied the deep-seated need of American undergraduates to get laid. Napster grew like wildfire because people have a deep love of music and Napster made songs instantly and freely available. iTunes grew for much the same reasons, except that it levied a charge. The Sony Walkman and later the Apple iPod met a deep-seated need of teenagers to be antisocial. Blogger exploded because it provided non-technical users with an outlet for their ideas, thoughts, and ravings. 
Flickr likewise with photographs, and YouTube with video and movie clips. So it's tempting, therefore, to say that the most astute predictions about consumer technology will be based on assessments of the extent to which a particular technology meets a human need. And just when I had reached that conclusion, along came Twitter. And what human need, I wondered, could conceivably be met by a service that allowed you a measly 140 characters? Well, now we know. And so does our esteemed, clueless Prime Minister, which is why he'd like to shut it down. If you have been, thank you for listening. Well, we have a bit of time. We have about 15 minutes for questions. And I'm sure your heads are buzzing with those three examples and what followed. So is there a comment or a question? We have a microphone here. James Clay from Gloucestershire College. I think one of the things about Wikipedia is that in the past it was a useful tool, a community-created tool. But the growth and the power of the Wikipedians, as they call themselves, who are very male, and it's a very male-dominated process, means that if you try to go in there and create new articles or add corrections or additions, the Wikipedians, in their power, and Jimmy Wales, the bloke behind Wikipedia, recognizes this as well, go in and kind of revert everything back. It can be very difficult for new people to come in. Does this mean that the age of Wikipedia is over, and where do we go from here? Thank you for the question. First of all, that's not my experience of Wikipedia, but I've heard the criticism. I mean, the short answer is that nothing lasts forever. And the biggest mistake that we have made in relation to the kind of developments that we've seen in this area has always been to assume that the thing that worries us or obsesses us now, and seems huge, is going to be the thing we have to worry about forever. 
I remember the time when Microsoft was a serious threat. I think Google is a serious threat now, but will I think that in five years' time? I don't know. The same is true of Facebook. Wikipedia might well turn out to be transient, too. It's impossible to say. Mark Johnson from the University of Bolton. I take the theme of what you've just said to be the importance of listening, and the importance, obviously, of the organisations in the case studies listening. But there's something that worries me about technology and listening, because there's a level of authenticity that you have to achieve if you're really going to think about what's happening and read the world. And actually, when we look at our students, and sometimes when we look at our own behaviours with our technology, sometimes it's difficult to reach that same level of authenticity and deep listening that's necessary to make the right decisions and the right choices. So my question is: is social technology potentially a threat to the way that we listen, and particularly the way that we come together, because there's something that happens when we come together in very convivial settings, perhaps like this conference? I think those questions are kind of unanswerable, in a way, because the thing that upsets me a lot about the public discussion of all of this stuff is the assumption that somehow it's possible to reach a conclusion about whether this thing is good or whether this thing is bad. And the truth of the matter, in almost all of these things, is that it is both good and bad. And that's the ultimate reality about it. It's a bit like saying, is air good or bad? Whether it is good or bad is beside the point. The point is what it implies, what it does to our societies, what it does sometimes to our brains, or whatever. We're stuck with it. 
And so I find these discussions that often go on in the Daily Mail and elsewhere really annoying, about whether this is a good thing or a bad thing or whatever. Technology always gives and technology always takes away. And the real question is: how do we adjust to the realities that we have created? And those realities are very complex. And in some respects, it's pretty clear that we are sleepwalking into nightmares, absolutely. And the most distressing thing about it is how hard that is to explain to anybody, except for people like the people in this room who understand this stuff. But try and get a public discussion, for example, about the really warped way in which people are trading their identities for, basically, gewgaws; the way they've been turned by Facebook and by other agencies into essentially the modern equivalent of sharecroppers; all that kind of stuff. The way in which people will give away their personal details in much the way that, in the old children's history books about colonialism, you had native chieftains who handed over mineral rights in return for some baubles from some cunning imperialist. All of that stuff is happening, but it's very hard to explain to anybody. Thank you. Tom Franklin, Franklin Consulting, and perhaps a slightly cheeky question. But given what you've been saying, either what is the next unmet need that technology is going to address, the next deep personal need, or what's the next big thing or few big things? Well, I've got a personal deep need, which is that I want the technology to invent plausible excuses for me. But the honest answer, as you well know, is that if you and I knew that, we'd be out there making it. I have no idea. I mean, the terrible thing about the future is that it is unpredictable. And long may it continue to be. Thank you. Diana Laurillard. I can't really see you, John, with the light like that.
John, I've been trying to apply what you've been talking about to our business of education, of teaching and learning. And one of the difficulties we have in teaching and learning is that it's quite a complex transaction. There are quite complex human needs going on there, if indeed you can even call them human needs. I mean, there may be a human need to teach. And insofar as there is, it's fairly simply satisfied by things like PowerPoint or websites. Just give it, you know: tell the story of my subject, and that's enough. There may be a human need to learn things we cannot know of until we have learned them, which is what formal education is all about. But that's a pretty complex technology. It certainly isn't Wikipedia; that doesn't do it. So what education has done is to appropriate everybody else's technologies for all the different facets that we need within the teaching and learning transaction. So is that our fate? I mean, from the kinds of analogies you've been drawing, is there a human need here for which there is a technology that can fulfil it? Or do we have to keep appropriating everybody else's? What's the implication for us in education? I wish I knew the answer. I mean, you and I have been in this racket a long time. And you will remember very well Tim O'Shea, one of our former colleagues, now a very eminent gentleman indeed and a fully paid-up member of the establishment, but he was once a very interesting researcher and a colleague of Diana's. And he used to annoy conferences like this by saying that the only piece of educational technology known for sure to work was the school bus. And much though I hate to admit it, I thought it was a pretty profound remark. I think the education thing is relevant in relation to my point about paying attention, because I really don't think that most of my academic colleagues have been paying much attention to this, really.
Paying attention in the sense of thinking: this might actually change the way I think about what I do, and this might actually change the way in which our institutions ought to be structured and work. And the reason for that is partly wilful blindness and so on, but it's also partly because it raises very disturbing questions. Just look for a moment. If we believe, as many of us now do, that people learn better in socially mediated ways; if we believe that a very mature, very successful form of academic tuition consists of being a mentor to productive discussions in groups and the rest of it; if we believe all of that stuff, then we have a really big problem, because look at this room. Look at the way it's structured architecturally. It's basically structured along a pedagogical model that says: there's a guy down here who's saying stuff, and there's all you lot just looking at him. It's not architected, to use a horrible word, to implement a different kind of metaphor about how people learn and how people teach. So you have this terrible problem. If educators took seriously these ideas that come out of experiments with technology and other things, then we'd have to say to ourselves: actually, we're in the wrong place, or it's just too expensive and too unthinkable to change. And that's the analogy I draw with the record industry and Encyclopaedia Britannica and all the others when they were faced with this stuff: first of all, they couldn't see the need, and second, when it did arise, there was too much to be lost, too many cans of worms to be opened in the process of acknowledging the truth of what was happening. And that, for me, is the metaphor, really, for the education system addressing this stuff. And I don't think there is a technological fix for it.
I've got a question online here, John, that says: can you suggest how we can be curious, questioning and critical of emerging technologies, and, importantly, how they relate to each other? Yeah, it's the kind of thing you could have picked up from Socrates. Basically, whenever anything new arrives, you ask, first of all, what do we lose as well as what do we gain? And secondly, who benefits and who loses from the adoption of this stuff? That's the question that people haven't really asked about Facebook, for example. Who benefits really in the long run, and who loses? And they're very old questions. I think Plato and Socrates and co. asked them, and I think they're still relevant and very useful. And when you ask them about technology, the answers you get are usually pretty sobering. And there's one at the top as well. Bill Olivier, University of Bolton. You've itemised a number of industries that have been laid low, shall we say, by the internet: music, newspapers, encyclopaedias. And that's because they didn't see what was coming with the technology. It was already there, but just unevenly distributed. Now, it seems to be edging closer and closer to higher education. So do you see things that are already here that might well be things we should be looking out for before they cast us down too? We're in the information business, in one sense. The answer is: yeah. I mean, I think some aspects of this have been obvious to many people in the education business for a long time. And the best answer to it, actually, is a set of YouTube videos produced by Michael Wesch, which you probably know. And if you don't, you ought to see them. Michael Wesch is an anthropologist. He teaches anthropology. And he and his students have been making an astonishing series of really insightful videos, all of them on YouTube, about what we need to be paying attention to and aren't.
And the answers that you get from those videos are much better than anything I could provide here. And I recommend them heartily. It's Wesch: W-E-S-C-H. Just Google him and you'll find them. And there was a lady there. Hello. Moira Maley, University of Western Australia. Is there any way of bringing the vendors, the corporate people that are taking away the big profits, to book for the consequences of what people do with their offerings? No. And the reason for that is that our societies are not geared up to addressing the problems that they create. A simple example, actually, is the way in which major content companies have consistently suborned legislatures into enacting copyright rules which are now totally inappropriate for a digital environment. Why is that an interesting example? Well, in general, in democracies, when legislatures make laws, they do so on some rational basis. So, for example, in the case of, say, the regulation of pharmaceutical drugs, if a pharmaceutical company wishes to have some extension of its permissions, then a legislature will actually say: well, let's have some objective research here. And then we'll make a decision about what's in the public interest and what balances the public interest against your commercial needs, and so on and so forth. Well, in relation to, say, the extension of copyright terms, what happens is that a number of pathetic figures like, say, Cliff Richard are dragged out from under whatever stone they hide under in order to plead tearfully with legislatures that their descendants unto the fifth generation will be penniless and not have shoes if legislatures don't extend the copyright term even further, and so on. Nobody asks: actually, what's in the public interest? Until recently, no legislature, especially in the United States, has said: OK, what really is in the public interest? Do you really need more copyright extension, more protection?
Will that lead to greater public benefit or not? And where's the evidence? So that's just one example of where we are nowhere near addressing these issues. The same thing applies in relation to privacy, say, and Facebook and Google and all that sort of thing. And it goes on. So they're getting away with murder, in my opinion, and they're getting away with murder partly because our democratic systems and our legislative systems are years behind, decades behind, probably. And I don't know how this is going to pan out, but it doesn't look good. You have a question? Nick Jeans from Sero Consulting. I just wanted to ask: if CDs are an inefficient and expensive way of distributing music, can the same be said of teachers being an inefficient and expensive way of distributing learning? You know what? I don't think there's a Vice-Chancellor in the country who hasn't asked himself that question. I mean, as a teacher, of course, I say absolutely not. But that's a problem that journalists have as well. Print journalists in particular have been fantastically hostile to the online world, and that's partly because they can't envisage a world without newspapers. And what they don't seem to realise is that actually they're making a confusion between form and function. The thing that's important for our society, for example, is that we have the continuation of independent journalism. That's the function that society needs: journalism. And we have to find business models that will enable that to continue. But what journalists have been entirely preoccupied with recently is not how we can preserve the function in this new digital age, but how we can preserve a particular, obsolete technological format, which was the printed newspaper. The future of journalism is not about the future of newspapers. The future of journalism is about finding business models that will support the things that are really valuable about journalism and what it does.
And I think the same is true for teaching institutions and learning institutions as well. Supposing our existing business model is vaporised by the net: what's important is not that we preserve institutions like this in their physical form, but that we preserve the thing that's really valuable about education. And that may or may not involve these kinds of buildings or these kinds of institutions and so on. But that's what we ought to focus on. And in general, I don't think we are focusing much on that question. That's the really important thing. It's the what, not the how. Well, we are actually just about out of time. I found this quite fascinating, actually. And I was all along thinking about this issue of teachers and whether they are a bit redundant, shall we say. But I suppose that will take us to next year. So thank you, John, that was an absolutely marvellous session. And talking about noticing, noticing change: I have noticed that there's a bottle of whisky behind that sofa. Thank you very much.