I get to do a very short, slightly embarrassing introduction of David Weinberger. There are a lot of David groupies around, so I don't think I have to do much in this respect. David Weinberger is going to talk about what information was — an extension, I suspect, of lots of things he's been working on over a long and thoughtful career, spanning things as diverse as being a philosophy professor and being a co-teacher with me at Harvard Law School, despite his only masquerading as a lawyer when he does it. He's really fabulous at it. But to me, where David's work is most important at the moment is the reconceptualization of information and how we relate to it. I can speak for my library colleagues: I know that when he came and spoke to us last year, he gave us a provocation based on Everything Is Miscellaneous, and that project has hung with us since that time. And so I am always eager to know the next chapter in what David is going to write about and present, and what I love about it is that it's always something that makes me think and think and think for a long time after I read it or hear it. So I'm much looking forward to the latest installment of provocation from David Weinberger. Thank you, John. That wasn't all that embarrassing. I could have done much worse. Thank you. So, I appreciate very much the introduction, and your saying that you look forward to hearing the next thing I'm going to write. Unfortunately, I sort of look forward to that too, and I'm not sure what it is. So this is a marker along that way, maybe, and it's way out of my comfort zone. Am I talking loudly enough, by the way? Because I hope not to stand. Not quite? I'll talk a little louder then. OK, so I am way out of my comfort zone. I've been interested for the past couple of years in a question that I'm going to sort of address, but not in a packaged way, unfortunately. The question is: how did we become the Information Age?
And the hypothesis — which I think is a fairly safe one in this group — is that we're coming out of the Information Age. We're into a new age and don't know what to call it at the moment; don't much care. But we're coming out of one age, and that's a very good time to look back at the previous paradigm, if I may use the term almost technically correctly. So when you do that: what was information? And in particular, how did it affect how we looked at the world? There's a huge amount written about the cultural history of information, and some of it is just extraordinarily good and deep, grounded in history and grounded in thought — tons and tons of wonderful stuff. I'm going to pick five different reasons why information became the dominant metaphor, because that's really the central question: why did it become that? So I'm going to look at just five ways that occurred, and I'm going to talk too much for the Berkman ethos, but I'm also going to try to go really, really quickly. Part one of this is simply to establish what probably doesn't need establishing: that information has been a cradle-to-grave metaphor for us — absolutely and literally cradle to grave. I'm going to give you a quick example of the cradle end and of the grave. The cradle: if I were to stand up in front of you, which I will for a moment, and say, "DNA is not information," you would probably think I am anti-science and an idiot, and it would hit some of your buttons. And of course that's because, when we think about DNA, we think of it as coming labeled. It's got pairs of stuff that look like information. Now, let me be very clear: information theory is a fantastic and crucial way of understanding this. In nothing I'm going to say today do I mean to imply that information and information theory aren't useful. They remain the foundation — just as, after the Stone Age ended, we still continued to use stone.
We're going to count on there being information, and gigantic machines that process it extremely efficiently and redundantly and securely and all the rest of that. We look at DNA, we see information, and we draw very nice pictures of it. But in fact, this is DNA. DNA is a squiggly little molecule — a little twisted shoelace of a molecule. It's a physical thing. And so it's very helpful to consider it as information and to analyze it that way. But it's not information; it's a molecule. So that's the cradle. Here, quickly, is the grave. Ray Kurzweil, who has done just wonderful things in his life — a very important thinker; reading machines for the blind and dyslexic. Nevertheless, his book The Age of Spiritual Machines, from about 10 years ago, asks a question: when will we have computers that are large enough to enable us to model the 100 billion neurons of the brain and run the programs that run the brain? At that point, Ray Kurzweil can pour his brain into the computer and survive his own death — which is really, ultimately, what this is about. Ray will live on absolutely forever; so long as there are backups, there's going to be a Ray Kurzweil. The very fact that we are willing to consider that this proposition even makes sense — that a computer running a model of a brain is a person, is Ray Kurzweil, is perpetual life — indicates the depth of our commitment to seeing ourselves as information. And this is not just in the sciences. It's in philosophy, which traditionally has a number of terms that we've used when talking about how we know — epistemology. There are a bunch of different ones, but these are the standard ones in the Western tradition. In the 20th century we added sense data, which is a paring down of, a step up in abstraction from, sensation. And since the 1950s we have routinely added information. We think about the brain and the mind.
Let me be more specific. The mind has something that deals with information. And there's lots and lots written that just casually uses the word information as if it were a basic constituent of the human mind. This is a very new thing. So even our own conception of what it means to think is now based upon this metaphor, this paradigm, of information. Lots of people — this is Stephen Wolfram — think that, in fact, the universe itself is made of information. This is his cellular automaton Rule 110, which is a universal computer. And there are quantum information science people, people who do quantum information theory far over my head, who will tell you that information itself is, in fact, the basic constituent of the universe. I can't evaluate that; I'm not a quantum physicist. So the surprising thing is that even though just about everything in our culture has been reconsidered as consisting of information — sometimes very usefully — nevertheless, if we were to go around the room as we just did in introducing ourselves and try to answer the question, what is information? — with the exception of the occasional computer scientist, whom I will not call on — we would not be able to answer it. I'm quite confident that we would come up with many different ideas. They would be inconsistent, and we'd be nodding at the inconsistent ones, saying, yes, that's right too. We do not know what information is. We talk about it all the time, and we don't know what it is. So this is really puzzling. Ronald Day, in a very interesting book, mentions that he has something like 200 different definitions of information. It's really interesting that the dominant paradigm up until now talks about this thing that is a constituent of the mind, of the universe, of everything — and we don't know what it is.
We cannot define it, except for those of us who know the technical definition — which is precisely the one we don't mean when we talk about information in all the different ways I've just mentioned. So it's discontinuous: the history of information is discontinuous. It's a new thing. And so a part of the talk I'm not going to give would be an argument that the normal, canonical history of information — the one that goes from the Jacquard loom, which used punch cards in 1801 to do patterns in weaving, up through Babbage (that's his machine), up through Hollerith with the punch cards and IBM, to Turing, to Shannon — is essentially entirely false. Well, "entirely" is too strong; it's an overstatement. But it reads back into the 19th century a concept and a thing that we didn't have. So rather than making that argument, I'm going to use Babbage — who many claim was an early information designer, the first information processor, the first computer, starting around 1820 — and just try to get from him the two ordinary uses of the term information that were pre-information-theory, information theory being dated from 1948 and Claude Shannon. Before that, and even after that, we mean two things by information, and you can find both in Babbage's memoirs. It's a very entertaining memoir; he's a very trusty guy. He uses the term information 28 times — Google Books; you can search the book. Twenty-eight times. And the first time — Is that one of the pages? No, it's all the pages, because it's in the public domain — the first time, he's talking about his attempt as a child to raise the devil, to disprove something, actually. Anyway, it's an interesting story I won't tell you. And so he had to ask his schoolmates: how do you raise the devil? And so he gathered some information. That's a very ordinary use of the term, one that continues, in which information is simply something that you didn't know and now you do — like news.
The second use of information in Babbage is also the second standard use that we have in ordinary English. He was asked by one of the railways to do some tests to determine the optimal width of tracks, so the cars wouldn't tip over on curves. And so he set up a device — lots and lots of paper, ink — and he generated tables of data. And that stuff that's in tables, that's information too. That's the old sense; it's also a continuing sense. And then this guy came along in 1948. This is Claude Shannon, who took over the term. He invented a new definition for it: what he was calling information was not generally called information before him. There are technical quibbles, but basically that's right. He took that term and invested it with a new meaning, along with a bunch of other terms, such as noise. And he gave it a highly technical meaning. He was at Bell Labs, trying to figure out — very important for Bell Labs — what's the capacity of a particular wire for telegraph or telephone, or any other medium, because he abstracted above any particular medium. What's the capacity for carrying what we now call information? And he worked out a formula — which I would tell you about, except I can't; people here could, but I can't — a mathematical expression of this. This is from the first lines of chapter one: teletype and telegraphy transmit information. So information seems to be something that moves through; it gets transmitted. And a discrete channel, generally, is a system whereby a sequence of choices from a finite set of elementary symbols can be transmitted from one point to another. So you're transmitting a set of chosen symbols out of some range of known symbols. There's a huge mathematics behind it — very difficult mathematics. It's not what anybody here generally means by information. It's not what the culture took up from this. So the history that — excuse me, I over-gestured.
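For what it's worth, the best-known piece of the mathematics the speaker is gesturing at can be sketched very simply: Shannon's entropy, the average number of bits per symbol a source requires. This is a minimal illustration, not anything from the talk; the example messages are made up:

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy H = -sum(p * log2 p): the average number of bits
    per symbol needed to encode this message's symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A fair 50/50 mix of symbols needs a full bit per symbol...
print(entropy_bits("ababababab"))  # 1.0
# ...while a skewed, more predictable source needs fewer.
print(entropy_bits("aaaaaaaaab"))  # ~0.469
```

The capacity results in Shannon's paper build on this quantity: the more predictable the source, the fewer bits the channel has to carry.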
The history that starts with two simple definitions of information and then moves to information becoming the stuff of the universe and the stuff of the mind is actually quite discontinuous. There's a breaking point: the insertion of a new meaning of information. Before Shannon, the stuff that Shannon called information was called "intelligence" by his colleagues, by the way. So why did this happen? What enabled information to take over the world? And I want to be very careful here. A lot of the most interesting and important stuff that you'll read about this is about the deep utility of information. If you're using a hard drive, the science of hard drives — of transferring information reliably into your computer — is all based on information theory. So it's hugely useful. It lets us conquer a whole bunch of the world. Which brings us to the second important point I'm not going to talk about, which is the politics of information. There's wonderful stuff written about this — about information as mastery, information as a product of the war machine, information and gender. I'm not going to talk about any of that. It's really, really important. So I want to talk about five elements, in each case in light of how it looks after the end of the information age — how it looks to us now. The first point is that information scales. It allows corporations to grow larger than businesses used to be able to grow. We have a lot of machines that can manage that now. But the secret of the information age — which people at the time absolutely sensed, and you see this in much of the cultural anxiety about computers — was that information worked by reducing the amount of information. You stripped out everything about this person except for the 8, 10, 15, 50 categories that were decided, top down, to be important. And people made good decisions about this.
But it only worked as a strategy because we were able to strip out so much stuff and keep only enough that our computers were able to manage it. So the information age was actually all about stripping out information. And if you look at an employee now, she's going to look much more like this: in the post-information age, the age of the link, the record is just crammed with information — overflowing, bottom-up, connections outside of every box, as much as we possibly can, at an unheard-of rate. So the new age is all about abundance, not about stripping out in order to manage scale. And it's in real language. In fact, the thing that links do in the new age is the opposite of what information did. Information stripped out; links are all about attaching to some new page — there's a page — and pulling it in, and expanding the universe. That's obviously what links do; they're completely expansive. Never have any doubt or worry: just pull it in and make your world bigger. And so we're now in an age of abundance, of course. There are lots of scarcities, but in the realm of creativity and works we're in an age of abundance — which is an abundance of good stuff, but also an abundance of crap. I don't know if it's an irony or what, but we're actually far better able to manage the abundance of crap than we are the abundance of the good. The abundance of crap we manage through spam filters and all the rest of it, and we do a pretty good job of it. Most of us are still using email, even though there's so much spam; we manage the bad stuff. The good stuff is the actual challenge to our culture, because we have institution after institution and conceptual framework after framework that assumes there's a scarcity of good ideas, a scarcity of good information, a scarcity of good creativity.
And so if you can corner a little bit of that good stuff, you can monetize it and build an institution. Whether it's newspapers or educational systems or health care or government, they're all premised on the idea that there's just not very much of it, so people will pay. And we have since learned — newspapers have learned, for example — that that's just not the case. There's just an abundance. So you take the good op-ed writers and you put them behind a paywall at the New York Times, as they did, and some people will pay. Nevertheless, the rest of us who don't — we don't sit around saying, "I have nothing to do; the time I used to spend reading the columnists I like is now empty." No: there's so much good stuff, I'm never gonna get to all of it anyway. So the institutions that depend upon scarcity start to fail. In fact, as others have pointed out, the information age was about separating the signal from the noise; that's what got it started, with Shannon. Now there's so much signal that the signal itself looks noisy — except it's not: there's so much good stuff that the overwhelming amount of signal, which can look like noise, is in fact an abundance of riches. The signal-to-noise model doesn't work real well when applied at the high end of the web. Of course, this requires new ways of organizing; I'm not gonna talk about that. Second: information is a resource. Now I'm gonna try to go faster. The way we think about information is that it's an external resource that we can delve into: we can query it and we can fetch results. And that's great; that works, obviously. There's a science to that, and it's certainly not easy. But the web, when we talk about it, we talk about as a place that we can enter. It's got places and sites, and we go in the web. So it feels a bit more, at least conceptually, like a place that we enter.
As for the idea of entering the old sort of information — information-age information — if you were to say, well, we're gonna enter that: we did think about that during the information age, in two ways. One is we made movies like Tron, where we entered the information space and became our own — what do you call it — avatars. And it was not a pleasant experience; we wanted to bust out of it. The other way we thought about entering the information space during the information age was that it was going to engulf us. It was a threat: we were gonna find ourselves like Katharine Hepburn among the desks, confronted by the men with clipboards and their spewing reams of paper. And that was also hugely threatening. So the idea of entering an information space — we used to just hate the idea of it. Now we routinely think about ourselves as entering the information space, the web space. And not only that — I think that's just a transitional mode. The two are now so integrated, the web space and the real world, that our children never come out of this mixed environment; they're always in it, and the two are completely fused. It's become the world. The second point about this is that back in the old days, the information days, we used to have two terms that we would use to measure the effectiveness of a retrieval system: precision and recall. These two terms mean: when you do a query, do you get back everything that meets the criteria (recall), and do you get back only things that meet the criteria, or do you get false hits as well (precision)? That was how you measured. Over the past 15 years, in the age of the web, as things scaled up beyond belief — a trillion pages — the search engines have had to add some new criteria for doing searches, which are relevance and interestingness.
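The two classic retrieval measures can be sketched in a few lines; the document IDs below are invented purely for illustration:

```python
def precision_recall(retrieved, relevant):
    """Precision: of what came back, how much was actually relevant?
    Recall: of everything relevant, how much came back?"""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# A query returns 4 documents; 5 documents in the collection are truly relevant.
p, r = precision_recall(retrieved={"d1", "d2", "d3", "d9"},
                        relevant={"d1", "d2", "d3", "d4", "d5"})
print(p, r)  # 0.75 0.6
```

Both numbers are computed entirely from what's inside the system, which is the speaker's point: relevance and interestingness, by contrast, can't be scored without bringing the user in.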
When you get back so much — when there's such an abundance and you get back so many hits — the idea of worrying about "did I get everything?"... who cares? That's page 400 million at Google; you're never gonna get to it anyway. So precision and recall — sometimes they're very important; in a law library, for example, they're hugely important — but out in the gigantic world of the web they lose some significance, and instead we have to put back into the system some criteria that are based upon the intersection with us, not based solely on what's in the information system. Relevance has to do with whether it meets your needs, which means that your needs are now an integral part of this interchange. And interestingness — which is a term from the sites delicious and Flickr — also, obviously, is a criterion that brings the idiosyncratic and the personal back into the system. The third part is the one I'm least happy with. We — oh darn, I went back too far. No, thank you. The third: why did information take over everything? Because bits, the basis of information in our modern conception of it, can apply to everything. There's nothing that escapes being bitified. Bateson, in the early days of information theory, said a bit is a difference that makes a difference. It's just a difference. And so we have this idea — we're perfectly at home talking about atoms versus bits. It sort of makes sense to us. We don't stand up and object; we sort of know where you're going with that. And a bit is simply a difference; that's all that it is. It measures some difference. We don't, on the other hand, feel comfortable saying, oh, atoms versus links. That doesn't make sense to us, right? We would say, what the hell are you talking about? What do you mean, atoms versus links? But atoms versus bits — we sort of think the two have some type of equivalence. And that's just odd.
So the reason we have this sense of equivalence is that bits apply to anything. A bit can be: the cat is on the mat, or the cat is off the mat; Kurzweil is in the machine, or Kurzweil is not in the machine. That's a single bit. Or it could be plain versus peanut M&Ms. It really doesn't matter: each of those is a single difference, and thus a single bit. You put all these bits together, and since they can apply to anything, it means we can model just about anything. We can make a coordinated set of bits, understand their relationships, and have a model that's co-extensive with the world. This is a remarkable property of bits: they apply to everything, or almost everything. There are some counterexamples — we can entertain them later — but basically they apply to everything. But we can only do this because bits are really, fundamentally, not like atoms. An atom is an atom. I sort of believe in the real world, and that atoms exist: whether I'm here or not, there are atoms and quanta and all the rest of that. Not so for bits. Those holes in that card — those are bits, but they're only bits because they're holes in a card that's being used in a system. As holes, they're not bits. The holes in this are not bits. Now, you may be able to get a lot of information about the culture by considering the shoelace hole — certainly you could — but it's not a bit. The colon is a hole — it's a long, long hole — and it is not a bit. Not all holes are bits. Bits have to be in a system, a system that's highly regularized and standardized. If you punched holes randomly in the punch card, they wouldn't be bits; they would be noise at best. The second reason bits are not like atoms: here's a dust cloud, way off somewhere, I don't know where. And in it there are — I get to make up my example — 100 billion motes of dust, and they're all spinning counterclockwise. I'm sorry, not all.
Some are spinning counterclockwise, some are spinning clockwise. Well, there are 100 billion neurons in Kurzweil's brain, and some of them are on and some of them are off; some are active, some are not. We could model that in a computer. And we could model the computer — the ons and offs — in dust, in dust spins. So somewhere there's a cloud of 100 billion spinning motes of dust, and if we say counterclockwise is on and clockwise is off, they exactly replicate Kurzweil's brain state during the 10 minutes when he saw his wife — his future wife — for the first time and fell in love. And so that dust cloud must be Kurzweil falling in love, according to the theory. But of course that's absurd. It's not only absurd, it's self-contradictory, because — it's my example — I could just as easily say, you know what, counterclockwise is a zero and clockwise is a one. And suddenly it's no longer Kurzweil falling in love. It both is and is not Kurzweil falling in love. Bits are not real. Bits are a construction. They have meaning because of the meaning that we give them. They are dependent upon human intervention and thus cannot be like atoms. One last point about bits, in case this is not clear — I think I may be overselling, but we'll see. When you bitify the world, when you turn the analog, continuous real world into bits — whether you're making a map of a shore or measuring the voltage in a transistor to decide if it's an on or an off — it's an analog world, so you have to decide what your resolution is going to be. Where are you going to say one bit flips and the other doesn't? For a satellite map it's different than if you're two feet away from the shore, mapping each grain. Likewise for computers: there's an arbitrary decision made about what the level of voltage has to be for the transistor to be counted as on or off.
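The point about resolution can be made concrete: the bits you get out of the same "analog" measurements depend entirely on a threshold that we choose. A minimal sketch, with made-up voltages:

```python
def bitify(samples, threshold):
    """Turn continuous measurements into bits: anything at or above the
    chosen threshold counts as 1, anything below as 0. The threshold is
    our decision, not a fact about the signal."""
    return [1 if v >= threshold else 0 for v in samples]

# Invented 'analog' voltages on a wire.
voltages = [0.1, 0.4, 0.6, 0.9, 0.5]

# The same physical signal yields different bits under different cutoffs.
print(bitify(voltages, threshold=0.5))  # [0, 0, 1, 1, 1]
print(bitify(voltages, threshold=0.7))  # [0, 0, 0, 1, 0]
```

Same voltages, different bits: nothing in the signal itself settles which reading is "the" information.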
And so bits are dependent upon our resolution — what resolution we're looking at — and that's dependent upon our purposes, what we're trying to do. Atoms are not that way. Bits are not atoms. Compared to the web, bits are about reducing differences to the fewest possible states, generally two: on or off. So you take something very complex and you build it out of an enormous number and complexity of very simple objects — which is one of the reasons they seem like atoms — but they simplify. The web is this web of links, and almost every link comes with some language around it that says what the relationship is. Not mere difference: you don't just click and say, there's a different page here, go there. You say, you know, you'll love this, you'll hate that, this amplifies, this detracts — whatever, but you explain what the link is: an indefinite number of possible sorts of relationships. The web, over the past 15 years, has been about building an enormously complex, intricate world, as opposed to the desire of the information age, which was to simplify so we could do some very useful things. The web, it seems to me, is mainly about this endless, endless complication — an abundance, but not just an abundance of symbols: an abundance of rich, linguistic, human intentions. Fourth — we're making our way through this; we'll be done very soon. So, why did information become a dominant metaphor? There's a huge amount written about communication theory. I am not a communication theory person, and I'm gonna be very superficial. But I wanna get to a slightly different point. So Shannon, when he writes his paper in 1948, calls it a theory of communication. Because he was working for AT&T, that sort of made sense. But he was not thinking about communication in the way that we use the word in ordinary life, and in the way in which information theory swept through — I mean, information theory became a much broader theory of information.
Shannon is very specific, right at the very beginning: he says, I am not talking about semantics. I don't care about the semantics. He's doing something else. But he put in this picture, and this picture looks to us like a picture of communication. There's somebody — I'll say somebody, that's fine — there's somebody with a message that gets encoded into a signal, let's say language maybe, or all the other various forms of signaling that Judith Donath will be happy to tell you about. It goes through some type of channel — words beating their wings in the air — gets decoded, and is understood. This looks like communication. And if that wasn't enough: Shannon wrote this paper, and then it got published as a book with a first chapter by Warren Weaver, who was at the Rockefeller Foundation. This is the thing that people remember Weaver for; when you read his memoirs, it gets something like three lines. So he wrote this chapter a little bit casually, perhaps, apparently. And right at the very beginning — these are the first words of his damn book, right? — he says: the word communication will be used here in a very broad sense to include all of the procedures by which one mind may affect another. This, of course, involves not only written and oral speech, but also music, the pictorial arts, the theatre, the ballet, and in fact all human behavior. And in case that's not enough, he says that in some connections it may be desirable to use a still broader definition. This is exactly the opposite of the opening of the piece that it introduces, which insists on a very narrow sense of communication. So we have this endorsement in the canonical book. But still, you have to ask: this is a book very few people read — it's information theory. Why did we accept it? Why did this book and that vision have such sway? Why does this look like communication to us, sufficiently that some guy we don't know named Warren Weaver says it and we nod? Why does this look like communication?
And it's because, as many have pointed out — including Paul Edwards in a wonderful book, The Closed World — we have a conduit metaphor of communication, and not from information theory; from before that. We have this notion that — excuse me — communication is basically like tin cans: a message at one end, a message at the other, a conduit in between. So that picture looks like our previous idea of communication. But then you have to ask, well, why do we have that idea of communication? The notion of communication as me making sounds — vibrations in the air that reach Ethan, who then blogs them, apparently — is such a weird idea of communication. This is communication as two people talking. And the very thing that they don't notice or really care about, but that enables it, is of course the vibration of the air and the transfer of the message. But that's not what communication is about. It's all the other stuff: the world in which these people exist, in which they share some concern. So you have to ask: why did communication look to us like tin cans, so that when information theory came along, looking like it explained the tin-can theory of communication, we said, ah yes, now we've got it — now we know how communication works? And I hate to be trite, but I think it goes back to Descartes, and the history that leads up to him, of course — Descartes, who was trying to explain mind and body. How could the mind ever perceive something as different from it as the world, the physical world? It doesn't seem possible. And still, you know, people worry about that. So he solved the mind-body problem. To be clear, this is not simply Descartes intervening; this is a long tradition in Western philosophy that culminates with him. He says: well, you know, obviously we get mental pictures. Something happens; we get mental images, and we live in these mental images.
We can't ever get to the real world; our mind reconstructs it in terms of images and sounds and the rest of it. And if that's the case — first of all, it's a very lonely view of the world, because it's you and your mental images and that's about it. But if that's what it is, then communication, at best — if there are other people at all, which is, you know, maybe doubtful in this world too — has to be the transferring of an image from one person to another. That metaphysics makes sense of our theory of communication. That's why communication looks to us like the movement of a message and the reconstitution of an image in the head. It is a pathological metaphysics, though — strictly speaking, it's a schizophrenic metaphysics. There's a much more natural way of talking about communication, which — it's totally obvious, you'll accept this, I'm sure — is to say: oh, two people talking. What's happening is, you know, they have some topic; they share a world. They're in the world together — yeah, that's pretty much right. And there's something about the world that matters to them, and that maybe they don't see the same way. So they talk, and they — oh, I can animate this in case it's not clear enough. So they're in a world, they share it, there's something that matters. The world is interesting and relevant to them — two terms that have come up earlier in this talk. There's something that's interesting and relevant, and so one talks to the other, and this one maybe now sees the world differently than he did initially. And sure, there are messages going back and forth. That's a part of it. But those messages don't matter for anything — they're just vibrations in the air — unless the rest of the stuff is there.
And to have a picture of communication that strips out everything that's interesting and important about communication, like why we do it and the fundamentals of how it works, which is that we come to see the world differently by talking with one another, and we see it differently because it matters to us: to have that tin-can image in your head as a theory of communication is just too bad. But that's where we've been, and not because of information theory. Shannon was talking about two tin cans and a string. That was absolutely his issue, and he tried to confine his theory to it. We leapt onto it because we already had this metaphysics in mind. So: content goes through a medium and ends up as more content, with a disruption of noise. In the interest of time, I'm going to skip a really interesting little bit. I'll recommend the Paul Edwards book, which talks about the wartime studies of how to improve communication on a battlefield where things are booming, literally booming, guns going off. The system they came up with is exactly parallel to Shannon's information theory, which arose from the same milieu. And the only reason I want to make that point is this: if you're trying to understand communication and you start from the battlefield, which is the most extreme case of the inability to communicate, the hardest place in the world to communicate, as close as we get to hell, then communication looks like a challenge that always has to be overcome. How the hell do we ever manage to communicate? If that's your theory, you're trying to figure out how we overcome the channel. That also gets represented in the information theory diagram that has noise intervening. That view of communication is based upon an example of the failure of communication.
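The tin-can picture being criticized here, a source, a channel, and intervening noise, can be sketched in a few lines of Python. This is a minimal illustration of my own, not anything from the talk: each bit of a message is flipped with some probability as it crosses the channel, and the receiver's whole job is reduced to surviving that corruption.

```python
import random

def transmit(bits, noise_prob, rng):
    """A binary symmetric channel: each bit is flipped
    independently with probability noise_prob."""
    return [bit ^ 1 if rng.random() < noise_prob else bit for bit in bits]

rng = random.Random(0)
message = [1, 0, 1, 1, 0, 0, 1, 0] * 4
received = transmit(message, 0.1, rng)
errors = sum(m != r for m, r in zip(message, received))
print(f"{errors} of {len(message)} bits corrupted in transit")
```

On this picture, everything interesting about communication is collapsed into keeping `errors` low; the shared world that the talk insists on appears nowhere in the model.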
There's something that we do in our culture repeatedly: we explain things by looking at their failure, as if that's especially revelatory, and usually it's especially misleading. So: content, medium, content. Now we have hyperlinks, which are just weird in this regard. First of all, you can't figure out whether they're content or medium. They're in fact both, and the terms don't apply very well, even though the distinction seemed so obvious in the information world. Links are certainly part of the content of a page, but they're also the medium by which you go to the next page. So the distinction just isn't helpful here. And second of all, hyperlinks are in some sense a path through an existing world. They assume communication. They assume it's just that easy to make the link. You're not being interrupted; you're just hyperlinking. And yet they're not simply paths through an existing world, they're also generative paths. They make the world. This web world exists because people have made hyperlinks; every time you make a new hyperlink to a new page, you are increasing the abundance of that world. In the age of links, we assume that communication is possible, not a challenge. We assume that there are conduits, even as our conduits are creating the world that they move through. It's a model that just breaks the information age's understanding of communication entirely. Finally, we can build models of just about anything using bits. And we are all familiar with the various critiques of models: models are very useful, but we also know how disastrous they can be when they're misused, including in a financial meltdown. Oh God. Save, save. It's footnoted. It is footnoted. Come back. I'm almost done. I'm really desperate. Just, you know. Go, go, go. Go, go. Okay. I'm ready for this. So, we're all familiar. Thank you. We're all familiar with the critiques of models.
For example, one of the famous examples is Yucca Mountain, where atomic waste is to be disposed of. The EPA required that before accepting it as a place for atomic waste, they had to do a model looking out 10,000 years to make sure that it would be safe storage. And sure, there's some sense in that, because some things you can predict, maybe water tables and flows and things like that, but there's just so much that you can't predict, right? The dinosaurs didn't see it coming, and neither does Yucca Mountain. And so in the modeling, we assume a regularity, a predictability, a denial of the contingent. I'm not saying no models. I'm saying there are limitations to them, and we sometimes forget those limitations. And we forget another limitation of models, which is that they're part of the information age. They are about excluding that which doesn't fit so that we can proceed and get the benefits of what does. And so they inherently deny the abundance of the world as well, the uncapturable overflowing of the world, where words fail us. Finally, they're purely formal, right? It doesn't matter what you model in. If you model in swirling dust or in silicon, silicon's easier to use, but a model is a model. It's a purely formal abstraction. And that seems to leave something out too, like the body. It goes back to these really simple things, and I just want to say one thing more about them. A bit is a measurement, right? It's a measurement of a difference. Every other measurement that we have measures something: weight, or length, or, I don't know, blood sugar. Everything measures something, except for bits, which merely measure pure difference. That's where they get their power. They apply to everything, and thus the point is, this is great, you can model the world in bits.
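The point that a bit measures nothing but pure difference can be made concrete with a small Python sketch, mine rather than the speaker's: the very same bytes can be read as text, as a number, or as bare differences, because the bits themselves commit to no particular way the world shows up.

```python
# The same four bytes, read three different ways.
raw = bytes([0x41, 0x42, 0x43, 0x44])

as_text = raw.decode("ascii")                   # a word: 'ABCD'
as_int = int.from_bytes(raw, byteorder="big")   # a number
as_bits = "".join(f"{b:08b}" for b in raw)      # pure difference: 0s and 1s

print(as_text, as_int, as_bits)
```

Nothing in the bytes decides among these readings; the interpretation is supplied from outside, which is exactly the formality the talk is pointing at.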
Because they're so abstract. But one of the key facts about the world is that the world never shows itself to us merely as difference. It always shows itself to us in particular ways. And maybe that's because we have bodies, and maybe it's a limitation, but it is how the world shows itself to us. It does not consist of mere differences; it consists of differences in things that we perceive as light and warmth and texture and all the rest. The world always shows itself to us in particular ways. So ultimately, the power of bits, their pure formality, and with it our ability to model the world, comes from the fact that they are exactly how the world is not. So we have this model here, and we've got it animated. Underneath it is noise. Noise is the interruption in the information system. It's the difficulty in the system. It's why we need information theory in the first place: we're trying to overcome noise and increase capacity. That noise is, in fact, all the qualities that bits are missing: the contingency, the abundance, the fact that the world is not merely formal. And in this diagram, noise is, in fact, how the world shows up. The world does show up in Shannon's diagram. It shows up as an interruption, as noise, as the problem with the system. This abundant system that's beyond systematization is all about the sorts of differences that the information world tried to exclude. What looks like noise to the information age is, in fact, the world. And it's a world that's expressed through the differences that the web expresses with every link. So you can see, if you want to, the web as the revenge of the particular and the idiosyncratic after an age of bitifying, and the enormous power of bitifying. Everything on this web is interesting to at least one person, because they put it there.
Yeah, except for some stuff, but it's there because it was interesting, it was relevant, and they thought it would be interesting and relevant to someone else, maybe to just a few. It is a web of noise. Literally speaking, from the information age point of view, it is a web of noise, and that's where it gets its strength. And oddly, we worry both that it's too fragmented, so that we won't have a joint culture, and at the same time that we don't appreciate the differences enough: the fragments express differences, but we stay within our own enclaves. There are people who talk about this a great deal, and extremely cogently, from whom some of us have learned an enormous amount. I'm kind of warm in here. So the irony of this change, from the age of information to the age of links and the web, is that if this is the return of noise, the world now expressed as noise, it may be, if Ethan and others are right, and I think that he is, that in fact the problem is that this web is not noisy enough: we're not appreciative enough of its differences. So, thank you. What was interesting, but also somewhat challenging, about this talk is that you keep moving among all the different levels of how you think about what information is. So, if you look at a word in a sequence, there's a piece of information there about which piece comes next. And if you look at language, some sequences are highly predictable and some aren't, and when you look at a page, part of the issue is where it carries that information about what comes next.
So a link is something like that: if you do the typical thing and click on it, the information about what comes next, about whether you should follow it, is an instructional bit of information about sequence, just like where a paragraph sits, or any other kind of form you follow. If you have this symptom, click here, and go to the hospital now. And I think that's separate from the fact that there's a computer instruction embedded in the HTML that makes it happen. So, first of all, you're absolutely correct: I bounce around levels all the time, and that's part of my slipperiness. It introduces error, so it's good to point it out, and in specific places. In this case, I'm not thinking about the link in terms of the HTML underneath it, and underneath the HTML there's IP, and then there are packets and bits, and often databases underneath all that. I'm thinking about the link as a bit of language, and much of what I said about links is true of language itself. It's because links are usually linguistic, and almost always semantic even when not linguistic, that they have the properties I'm pointing to. And in some ways, the fundamental debate between the ages, I think, is actually one between information and language. So, yes, but I think I'm actually thinking about it differently than you are. There are many semantic gestures in the world, and many things carry information; shoelaces carry information, not in the information theory sense, but they guide the... Did you answer my question? No, I wasn't trying to; it's a very hard question. There's lots of signaling going on, in many, many different forms. You're sending the good-colleague signal to Professor Donath here? Yes, I am, and of all the people to be saying this to, it's you, seated in the room, who have studied this for a long, long time.
So, absolutely, there's a lot of signaling going on; there's a lot of information that is non-linguistic. Links obviously carry, as all of language does, a lot of non-linguistic information. Word order, just to take your example. Blue underlines for links. Nevertheless, links in the web world usually are actual language; not always, but usually. There's plenty of other information around them. So, remember, I'm not trying to generate an explanation of how the world works; I'm interested in the contrast of the two paradigms. And when we look at the two paradigms and think about links, very few of us think about the HTML or the packets underneath. We think about blue underlined text that we can click on, in which case the old model, where there's content and there's a medium, doesn't fit. We wouldn't have come up with that model if we had started with the link world, because the link is both content and tool and medium. David, can I try a totally different question before we go to Salil, which is... I'm not so sure about that last one, but yeah. I'm giving you a warning that the computer scientist is next. Oh, God. So, I took, for instance, the Cluetrain Manifesto and Everything Is Miscellaneous and other things you've done to be normative projects, projects where you're trying to describe a version of the world that you wish we would see, right? In Cluetrain, it's the marketing people; in Everything Is Miscellaneous, it's librarians and others, right? I heard this as mostly a descriptive project, that you're trying to create a theory that spins things out of what you've seen and what other people have said. Is there also a normative component here that you're hoping us to see? Because I'm missing the, if it were... Oh, good.
So, I don't know if you can hear it in the back, but it's a really good observation, which is that the other stuff that I've written, and in general my interest in life, is normative; I'm a polemicist in some ways, which is not, you know... I was planning on asking JP's question as: so what? Which is a much, much more aggressive and less generous version than the one that Paul put forward. You can answer either one. So, I choose to answer JP's, what a surprise. I'll try to address yours also. The answer to JP's question is: I'm actually pretty sold on the web, I really like the web, but in this case I'm not interested in proselytizing for the web. In fact, I'm trying to stay aware of, and away from, stuff that is too... If there's proselytizing going on, and there is a little bit, it's actually for a particular brand of philosophical outlook, not for the web versus information theory. In terms of the so what, there isn't any. I wish there were; I've tried. So, this is stuff that I find interesting and relevant, and I don't pretend that it's interesting and relevant to anybody else. I've tried to ask a question that I hope is leading, which is: information is weird. How did we ever go from a definition that none of us knows, except for the computer science people, to something that swept through the culture? What did it speak to in us? Because we embraced it. At the same time that we feared it, we nevertheless embraced it and redefined ourselves. How did that happen? And I don't have a comprehensive answer. I don't have a theory, and I don't think I'm going to have a theory of how this happened. I just find it really, really interesting. So, no so what. Sorry. It's a narrative; I wish it were more of a narrative. There are so many really wonderful narratives written about this. Is it possible that the answer is, I don't know what the so what is yet? I don't think so.
I don't even know what a so what would look like in this case. So, we should embrace the web? You know, that's sort of a done deal. I mean, David, I think you absolutely have a so what, but we should talk about that later, and Salil has his hand up. So, you go and then... I'm really curious about the future. I'd love to know what the so what is, sorry. This was extremely stimulating for someone who is used to thinking of information mostly in the Shannon sense of the word, particularly the transition between the two ages. One thing that I am still trying to get my head around is the, I wouldn't call them criticisms, but the descriptions of the mathematical model as being totally different from what people conceived of as information before, unless I misunderstood you. Before Shannon. Before Shannon. Because there is a sense in which Shannon's notion of information captures the idea of learning something you did not know before, something you couldn't have predicted until receiving this piece of information. And a quantitative measure of how much I have learned that I did not know before is: what was the chance that I could have predicted this before receiving it? Which is a rough measure of how informative we find a talk or a statement. If it's stuff we already know, then it's barely information. So Shannon is thinking about the degree of surprise in what we learn, and it does map to the prior and the current understanding of information as something we come to learn. Oh, okay, okay. No, this is very helpful. In my rush, I mischaracterized this. And there's also a connection between Shannon's use of the word information and the recent prior use, because information is a medieval term: information is what's in tables, right? Because obviously that's how we have mapped and described databases.
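The measure of surprise described in this exchange can be written down directly. This is standard Shannon self-information, sketched in Python for concreteness: an event you could have predicted with certainty carries zero bits, and the less predictable the event, the more bits it carries.

```python
import math

def self_information(p):
    """Bits of surprise in an event of probability p: log2(1/p)."""
    return math.log2(1 / p)

print(self_information(1.0))      # a certainty: 0 bits learned
print(self_information(0.5))      # a fair coin flip: 1 bit
print(self_information(1 / 256))  # a rare event: 8 bits
```

Averaging this quantity over a source's possible messages gives Shannon's entropy, the expected surprise per message.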
Explicitly, tables of information. So there's a reason why he took the term information. But his quantification of it is so far beyond the understanding of ordinary mortals that I think it's still a question why a term that somebody defined in forty pages of equations leapt through the culture within ten years. So you're absolutely right that there's a reason why he took the term information, and a reason why he took the terms noise and channel and signal. People have done wonderful work tracing the etymologies of these terms into Shannon. They were in a sense good marketing terms, and entropy is a famous example, because he does talk about entropy, and his formula maps to the formula for physical entropy. The great story, which turns out not to be true, is that von Neumann said, why don't you call it entropy, because then nobody will understand you. That probably is not a true story. Nevertheless, each of the terms made sense; it wasn't truly... oh, oh, okay. A question: if you consider that a link is everything, you will probably have problems making things work together, even making links work together. Here I'm thinking of what's called the principle of separation of concerns, or modularity, which divides things into layers and helps us separate what works at which layer. And I'll bring in a little of what separation of concerns means, from Dijkstra's paper called On the Role of Scientific Thought: it is the only available technique for effective ordering of one's thoughts that I know of. This is what I mean by focusing one's attention upon some aspect: it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect's point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously. So I'm working with these concepts in work on innovation theory and other things.
But how do you apply this? Because if links are everything, you probably cannot make them work together, if you think also of interoperability issues. So how do you make it work? So just give me an example of what you mean by an interoperability issue, because I'm not sure what you mean by working together. So, for example, if you think about TCP/IP, the web, content: these are layers that make the internet work. But if you say that a link means everything, that it's not just a piece of information, not just a tool, but also knowledge, how are you going to program things to work at each layer? The layers of the stack? Yeah, for example. It's done; that works. I mean, it doesn't matter what I say about hyperlinks; the stack is going to keep on working. I don't mean to say that links are everything and everything must be reconsidered as links. I'm not, in fact. That would be a big new theory, David. That would be a big new theory. So that question, what is everything, is equivalent to the question we were asked before. I was going to get there. If you were to ask that in the information age, the rough answer, because what it means to say that an age has a paradigm is to say, very roughly, that everything is something, would be: everything is information. And that's the extreme view of the information age, but that's why it's the paradigm. The atomic age would be a parallel: everything is atoms. If you were to ask what age we are in now, there would be a long argument. I would not advocate for saying it's the age of links, I don't think. It seems more likely to me that it's an age of the network, that we are reconceiving just about everything as a network, from government to marketing to our own bodies. And I'm not advocating for this; I'm just saying this is how paradigms work.
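The layering the questioner points at can be sketched in a few lines of Python. This is an invented toy with made-up layer names, not real TCP/IP: each layer wraps the payload it is handed without looking inside it, which is the separation of concerns that lets the layers of the stack keep working independently of what anyone says about links.

```python
# Each layer only adds and removes its own framing; the payload
# is opaque to it. That indifference is the separation of concerns.

def transport_wrap(payload: str) -> str:
    return "TRANSPORT|" + payload

def transport_unwrap(segment: str) -> str:
    assert segment.startswith("TRANSPORT|")
    return segment[len("TRANSPORT|"):]

def network_wrap(segment: str) -> str:
    return "NETWORK|" + segment

def network_unwrap(packet: str) -> str:
    assert packet.startswith("NETWORK|")
    return packet[len("NETWORK|"):]

message = "follow this link"
packet = network_wrap(transport_wrap(message))
print(packet)  # NETWORK|TRANSPORT|follow this link
assert transport_unwrap(network_unwrap(packet)) == message
```

Because the network layer never inspects what the transport layer wrapped, either layer can change independently, which is why the stack keeps working regardless of how we theorize about what rides on top of it.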
And it seems to me that maybe it's the network paradigm now that's beginning to change how we think about things, helped by the ecological paradigm that came before, because those two things start to be very close. So how do you just keep the stack working? Yeah, it's a technical issue. Geniuses have figured it out; it kind of keeps on going. How do you make sense of a network? That's the discussion of the information age view of knowledge versus our new view of knowledge and understanding, which is an important discussion, and it actually is sort of the theme of Everything Is Miscellaneous. So I'm going to not go down my miscellaneous path. In the back, and then to him. All right, but. Help me. You asked an interesting question at the start about what enabled information to take over the world, and you offered two possible answers: its utility and its politics. And then you stated that the talk was going to avoid the second of the two. But it struck me that throughout the rest of your talk, you were very much talking about the second in somewhat other ways. For example, when you talk about how things only become information through human involvement, or how things change depending on what resolution we're talking at and on human purposes. All of these, as you probably can guess, are components of the second question, not the first. So my question to you is: what is the distinction that you're making between the two? Or rather, can you maintain it as a distinction? So, you're absolutely right, and one way of looking at a lot of this, the counter to the information age, is the reinvestment of the human, including, most importantly, the fact that the world matters to us. The hidden philosophy that I'm referring to, and hesitate to bring up, is Heidegger's, especially since he's now been redeclared a Nazi, which of course he was; he was a Nazi prick.
I will hesitate to bring him up. The answer, by the way, to Carolina's question about what information is, is for me to take a Wittgensteinian dodge and say it's a language game, a family resemblance, and dodge the question entirely. What I was trying to bracket was an important, in fact perhaps central, discussion that people in this area have, which is about the reinvestment back into all this of the realm of human concerns, of what matters to us and what we are trying to do, all of which involves power and politics. I was trying to bracket out the discussion of, for example, gender, and the very important strand that traces this back to the origins of information theory in the military: cybernetics and information theory. There's a very important Cold War history here that's rooted in the military. So I was just trying to bracket that out so that somebody wouldn't say, oh, you know, you forgot the important role that... But absolutely, yes, I think that you can view this as a power discussion all the way through. Maybe let's bundle up a few questions and then let you have a last word. Does that make sense? Yes. So I'll introduce her to you. Did anybody else want to add a third question to the pile? I've been waiting. All right, Fernando, great. So, whatever we call these ages, the ages being the overarching paradigms, what do we call the next thing that happens? I think you've done a great job of finding how we got to where we are. It seems, though, that links are only one aspect of the changes. I mean, there's branching: it's no more linear thought, now we've got branching thought. Now networks make sense. Pi-calculus makes sense. There may be other things 10 or 15 years down the road, so that 50 years down the road, whatever we call this age, the post-information age maybe, to call it something in the meantime, will we have a better sense of that? Right? Okay, I'm going to bundle. Yeah. Bundle.
There's a lot of information here, the point of which is that, normally, perhaps your focus ends up in one place, but I think you're looking a little bit for where the connections are. I think that information really is an abstraction, but it has a lot of properties that get enacted in technologies like the web or the telephone or the telegraph. Or even, I mean, Marc Hauser's book on animal communication might be really interesting here, because he looked simply at the information-carrying capacity of different evolved forms. So there's some level of communication viruses have; there's a type of information passing between them that technically has certain limitations that humans in the age of the web don't have. You could focus on the transformational properties of the particular communication technologies, both to get at some of the normative pieces you may be interested in, and to let you not necessarily be in an argument about what's in here. All right. Last one. Yeah, you mentioned before Bateson's saying, where he defines information as a difference that makes a difference, and there's a similar take on that definition, which says that information is actually a distinction that makes a distinction. Because when you talk about distinctions, you're talking about somebody making a distinction. When you talk about differences, it seems that they're given, they're in the world, and so they're kind of unaffected by human agency. So I was wondering whether part of what you were trying to do, maybe without you knowing it, is inserting that part of human agency into the idea of information. Because what Weaver does, in the example that you gave about the history of information theory, is basically to apply something that was not designed for human beings to the general realm of human communication.
While it seems that what you're actually trying to do is exactly the opposite: to apply human agency to this very cold realm of information theory and bits. So I wonder whether that's part of the... Three fabulous questions, and I'll be done answering them at some point in 2011, so. Wow, really hard. So I don't know what it will be: in 50 years the net could be dead and it could all be holographic neurological implants, whatever, and that's the metaphor. Who knows? Don't know. It does seem to me that we are out of one age, it's not clear what we're in, and what we do is grow our way forward. Second, Judith, you and I will have a longer conversation, because I don't think that viruses... They may well communicate; I accept that they communicate. I don't think that they're communicating information. I think that's a back-reading of our modern sense of information, just as I would argue that Babbage was not creating an information machine, and that Jacquard looms are not information-based, that the holes in the cards are not information. An argument that I probably would lose, by the way. My argument is that... With Judith? Yes. I would argue that that's a back-reading into history, that information doesn't apply there. In the same way, I don't think that it applies to a virus. So I could add some qualifiers to that, but I think that's where our difference is. And I desperately want to stay away from communication theory, because I'll never know enough about it and it's too hard. Do you think the question can have discussions separate from that model? Yes. Yes, I would love to have that discussion. And Fernando, one of the reasons I like the Bateson quote is that I read it the way that you're reading the distinction. A difference that makes a difference: well, to whom? It already has the human bit, so to speak, the human element inserted in it.
And it's a very nice observation that Weaver is sort of moving out into the world with information theory, as many others have done, right? I'm interested in the human element that's there but that we culturally have often denied. We act as if information is independent of us, as if it's things that viruses did before we were even born, and I am interested in pointing out the human element baked into the distinction that makes a distinction, the difference that makes a difference. So. Oh, thank you, thank you very much. Thank you, David. Thank you. Thank you.