First question. Thank you. I turned it on. I'm okay. Am I audible? Good. Okay. Whether I say anything useful, that's a different question. It makes me nervous when everybody claps when you get up because it makes you feel like you should just sit down because it won't get any better than that. And considering this old fart in a three-piece suit showing up, you must be wondering, do I have anything useful to say? I don't know the answer to that, but I don't think you're armed with rotten tomatoes, at least I hope not. What I'd like to try to do in about 45 minutes is try to persuade you that the internet that you're using today deserves some serious, let's say, evolution. And it's not too late. The original design was done, if you go back far enough, around 1973. Bob and I wrote the first papers on this subject. It's evolved substantially in terms of implementation; architecturally, it's still pretty much the way it was before, and I feel like we missed a bunch of opportunities, to say nothing of the fact that as the net has evolved and propagated, we've discovered a lot of serious problems related to security, for example. So I'm going to take you back for a little bit into history and then go from there. This is the predecessor to the internet. It was the ARPANET. It only had four nodes to begin with. I was a programmer at UCLA and wrote the software to connect the Sigma 7 machine to the first ARPANET IMP that was installed at UCLA. The Sigma 7 is in a museum now, and some people think I should be there too. But if you fast forward past the installation of TCP/IP in January of 1983 and so on, you see an internet that looks sort of like this. This was generated automatically by looking at the BGP routing tables and then using a different color for each autonomous system to try to show what connectivity was.
I show this partly to show that it got a lot bigger, but the most important thing is that it's a grand collaboration because there's no central authority for the internet. It's built out of pieces of networks that people decided they wanted to interconnect because it was useful, and I think that notion of collaborative interconnection is just as important today as it was in the early days of the design. The number of hosts on the net has gone up over time. It's well past 750 million now. The actual numbers are who knows, but the estimates are based on machines that have domain names and fixed IP addresses. That doesn't count in things that are episodically connected like laptops or desktops or mobiles or netbooks or other kinds of things, and it also doesn't count the machines that are hiding behind firewalls that we can't see because they're enterprise systems. So probably there are more like a billion, a billion and a half, maybe more devices that are connected at one time or another on the net, and certainly on the order of 2 billion users, which is still kind of small considering that there's almost 7 billion people in the world. So as the Google Chief Internet Evangelist, I feel like I have about 80% of the world to convert, so I have a long ways to go before we get there. The other thing that's interesting, of course, is the existence of mobiles. They've penetrated dramatically into the telecom environment, and some fraction of them, maybe 15 or 20%, are internet-enabled, and with time I think more and more of them will be. So they play also an important role in this landscape of the internet. The users are distributed approximately this way, and for those of us in North America, it's a little stunning to realize that there are more Chinese on the net than there are Americans, even though in the very early days of internet we had a very large population, relatively speaking.
So the numbers, of course, in Asia will just get bigger as the penetration rates go up, so they will be in the billions, I assume, before the end of this decade. One of the things I wanted to draw your attention to is what Bob Kahn had in mind when the internet was first thought of. We were thinking in terms of military requirements for command and control, the use of computers to manage military resources and to take advantage of computing power in order to overcome a larger opponent with better control of your resources. So these notions that Bob had begun developing even before we started the project, he was thinking of this when he went to the Advanced Research Projects Agency in late 1972, and you see, I'm letting you read these, I don't need to repeat them, but you can see in this list things that are quite familiar today: the notion of distinct networks that interconnect independently, best-efforts communication, black boxes, which we used to call gateways until Cisco explained to us they should be called routers, and there was no global control. It was very distributed in order to avoid a central weakness. We also needed global addressing because the networks that we were interconnecting didn't know that they were not the only network in the world. And we had a rule that said don't change any of the networks; just let them run and carry our packets enclosed in their packet formats, whatever they were. We needed ways to recover from lost packets because some of the networks were inherently lossy, they were radio-based, when Ethernet came along it had its own characteristics, satellite communication had longer delays than terrestrial, so there were a lot of variations in the parameter space and we needed to accommodate all of those things.
We had different operating systems, we didn't have Linux, we didn't have Unix at the time when this got started, so there were a large number of different operating systems that had to be adapted to use the Internet protocols. One thing that I think I've learned in the last several decades is that it was important that we didn't have a particular application in mind for the Internet. We hoped that it would be useful for a wide variety of purposes and the reason that turned out to be important is that we didn't build any assumptions into the net or into its protocols or its architecture that assumed a particular set of applications that had to be supported. The utility of that is that people, many of you, have figured out new applications to use this best efforts communication system for and so it's been able to adapt to new technology and new applications without too much difficulty. The layered structure we inherited from the original ARPANET design and it has proved to be quite useful because it segregates functionality and when you make changes in implementation within one layer as long as the interfaces stay very much constant you can make all kinds of different implementation choices inside without affecting the layers above and below. One thing I really love is that the IP packets not only don't know how they're being carried but they don't know what they're carrying. All it is is a bag of bits and they're only asked to deliver something from point A to point B with some probability greater than zero. That's all that we asked of an Internet packet and everything else sits on top of that. I'm also rather proud of the fact that when we designed the Internet addressing structure we did not use a country-based system. 
Part of the rationale for that was that the military could not know ahead of time where it might be in operation and it didn't make any sense in a military situation to have to go get permission from some country that you were attacking in order to get address space to run. This is going to have to be purely topologically based and have nothing to do with national boundaries. Okay, openness. I'm sure you are strong proponents and understanders of that. Openness has really been important in this Internet story. The open source material and the Linux and other systems like Chrome and Android are I think important. Open access is important, being able to get to the network, being able to get to anywhere on the network. The open standards where literally anyone with a good idea has an opportunity to inject that idea into the architecture. I do remember though around the late 1980s when it was all government sponsored I thought that at some point we needed to make a commercial engine out of the Internet because I couldn't figure out why the government would pay for every individual's access to the Internet. So I thought commercialization was important. Some of my colleagues thought that was a dumb idea because after all that was their toy. This was their sandbox. Why would you want these commercial greedy people to become part of the equation? But that's why the Internet grew, because there was a commercial engine underneath it. And of course broadband is a big deal in here, especially in Australia. Tip of the hat to the broadband plan, which I understand is underway. So IPv6, you all know that we're almost out of V4 address space. I'm a little embarrassed about that because I was the guy that decided 32 bits was enough for the Internet experiment. My only defense is that that choice was made in 1977 and I thought it was an experiment. The problem is the experiment didn't end and so here we are. So if you're not doing V6 you should be.
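The arithmetic behind that 32-bit decision is easy to sketch (an illustrative calculation; the device count is the rough figure quoted earlier in the talk):

```python
# 32-bit IPv4 address space versus 128-bit IPv6 address space,
# set against the rough device counts quoted in the talk.
ipv4_total = 2 ** 32            # total IPv4 addresses
ipv6_total = 2 ** 128           # total IPv6 addresses

devices = 1_500_000_000         # "a billion and a half, maybe more"

print(ipv4_total)               # 4294967296
print(ipv4_total - devices)     # remaining headroom, shrinking fast
print(ipv6_total // ipv4_total) # v6 is 2**96 times larger
```

Even before subtracting reserved ranges and allocation inefficiency, two billion users plus well over a billion devices leave a 32-bit space with very little room; the 128-bit space removes that constraint entirely.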
Domain names are coming and have arrived actually with non-Latin characters and that's an important addition. Domain name system security is another addition, also very important. RPKI, in order to do a better job of protecting from people who are squatting on or hijacking address space in the backbone routing structure, is another addition which is underway. And of course we're seeing sensor nets and smart appliances, the smart grid in the US and I'm sure similar things in Japan and Europe, and mobile devices are all part of this growing architecture. I think that you're likely to hear an announcement that IANA has exhausted its address space very soon. And as soon as we're down to five /8s, then one each of those last five will go to each of the regional Internet registries. I'm pretty sure we'll hear about that very soon. We also need to work very hard to get IPv6 up and running. And the time for just talking about it is over. We just have to get busy and implement it and demonstrate it. So that's actually June 8, 2011 now instead of June 6 for a kind of a World IPv6 Day. Google is going to be very active in that and I hope some of you will participate as well. These are some examples of the internationalized domain names that have been approved by ICANN. Here's some more. Oh, yes, that one is one that I didn't have the character set for. This is a good example. Let's see. That's Sinhala. I didn't happen to have Sinhala on my machine. So you can blame Apple for that. I don't know about clean diamond. We have to work on that. All right, so we have security problems. You live with them every day. There's a list of things that we should be worried about and we are worried about. I think the thing which I am most disturbed by is that some of these problems are not just technical. They're our behaviors. We pick bad passwords. Some people still pick password as their passwords.
Others pick words that are easily broken with dictionary attacks and things like that. I am a very big proponent these days of two-factor authentication, of cryptographically generated passwords that only last for a short period of time. Google has adopted that internally and I think we're also hoping to make that available publicly to people who want more security in their access to our services. Social engineering is still a very common way of penetrating systems. Phishing and pharming: we hope to be able to reduce some of that using DNSSEC. Address poaching, all of these things. The bottom line here is that the worst things that happen in the net have very little to do with deliberate security penetration. It has a lot to do with dumb mistakes that we all make. Some of the worst are things like configuration errors. It's hard to figure out that something has been misconfigured. If there's a parameter out of scope, that's easy. But if there's some constellation of values that would cause half the net to disappear into a black hole, sometimes it isn't obvious that that's what you just did. I remember we had a little event at Google where as we crawl the net and index all the websites, our software looks for malware. If it thinks there's malware on the site, it makes a little mark in a table. When anyone else happens to go to Google Search and find a site that has one of these malware marks on it, and they try to go there by clicking on the link, we pop up an interstitial page saying maybe you shouldn't go there. We think there's malware that would harm your computer. So the guys that were doing that were manually doing some editing of this thing and somebody stuck a slash in at the wrong place and it caused every site on the internet to be marked as having malware. So we discovered that fairly quickly because people were saying everything is infected. So the most spectacular mistakes, I think, are the ones that we do to ourselves.
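The short-lived, cryptographically generated password idea can be sketched along the lines of a time-based one-time password, in the spirit of RFC 6238 (a minimal illustration, not Google's actual implementation):

```python
# Sketch of a time-based one-time password (TOTP): the code is derived
# from a shared secret plus the current 30-second time window, so a
# stolen code expires almost immediately.
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    # Count how many 30-second windows have elapsed since the epoch.
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Server and client share the secret; a code generated anywhere inside one 30-second window matches, and a code presented later does not, which is exactly the "only lasts a short period of time" property.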
You all appreciate that the security problems in the system come about in part from operating systems that are easily penetrated. One hopes that the openness of the Linux environment or Android or Chrome or some of these other operating systems will contribute to eliminating a lot of the potential weaknesses in those systems. But the biggest hole, I think, right now for security is in the browser space, because the browsers in the past weren't really threatened much. When you think about what they did, they would go and download the homepage and interpret it, and most of what they were interpreting was formatting information from HTML and also imagery and maybe at some point some streaming elements. But now, of course, we download JavaScript or Java or Python or some other high-level language and then run the program in an interpreter inside the browser. And some browsers, unable to figure out that they're doing anything bad, will run those programs, which lodge Trojan horses or other kinds of things into the operating system, partly because the browser is operating at too high a level of privilege within that operating system setting. So we have work to do, I think, to improve the framework in which we allow web-based applications to run. Of course, there are lots of other things that can cause serious problems. All the botnets that are used to generate spam or denial of service attacks are really a consequence of the penetration of a lot of operating systems by way of drive-by downloads. So here, I think we all have a kind of collective responsibility to think our way through better operating systems and browsers and the like that will reduce, if not eliminate, a lot of those problems. Privacy is a big issue, too. And some of it is just the result of users' choices. They just put information up on the net that they don't seem to recognize might be damaging later on.
But also there are people who don't bother configuring things properly, or who can't figure out how, and privacy suffers as a consequence. To be fair, some of the user interfaces to configurations are not so simple. But there are also policy issues here. It's not all technology that causes privacy to be a problem. Sometimes a business like a telephone company will naturally accumulate information like what numbers did you call, when did you call, how long were you on the line. And all that gets accumulated for billing purposes, but it also is potentially quite private. And so most businesses theoretically treat that information as private and they'll protect it. But if they choose not to, then your privacy is harmed, and it's not because of technology. It's because of a decision made by a company that's accumulated the information. So companies like Google and others who have information that could be considered private have a responsibility to protect that information and to not share it or not to abuse it. There are also some, what we might call, invasive devices. We walk around with mobiles today. They have cameras in them. We've all become reporters in a very funny sense. We take pictures. We upload them into the net. We do sound recordings. And in the millions, in the hundreds of millions, people upload these to YouTube and other storage sites that are made accessible. We do GPS tracking. All of these things are for our convenience, but at the same time they have potential privacy implications. And I think we're living in a world where it's going to be increasingly difficult to protect privacy if you're interested in that. Scott McNealy, the former head of Sun Microsystems, was quoted almost a decade ago, I think, as saying, there isn't any privacy, get over it. I hope he's not exactly right, but I have to say that we live in a world where that's difficult.
I want to shift gears for just a second and mention clouds, because I feel as if we are at the state in the cloud world now where we were in the Internet world around 1973. What do I mean by that? Well, we have many different cloud implementations from different sources, whether it's Amazon or Google or Microsoft or IBM and so on. They aren't built the same way. They don't have all the same functionality. In our case, we have multiple data centers. They all have to be interconnected to each other. They're very attractive because of the dynamics, the ability to share resources. One nice thing is that we do replicate data in the Google case so that even if a data center goes away, it's possible to get access to your information because we replicated it deliberately in order to protect it. Or you may be working with others on documents that you're interacting over, like a spreadsheet or a text document, and you could simultaneously be doing video or audio conferencing. So all these things are very attractive, but each of the clouds for the moment is independent of each other cloud, just like the networks of the past were independent of each other. And I've often thought, well, gee, what if you had data in cloud A and you decided that it would be beneficial to either replicate or move the data into cloud B? Probably not a good idea to have to download all that into your laptop and then push it back to the other cloud. For one thing, there might be too much data to do that in a convenient way. How do I get cloud A and cloud B to talk to each other? What if it turns out that the data that's in cloud A has some access control associated with it that's important to me? I need to replicate the metadata in cloud B that will give me the same access control that I had in cloud A. This is presuming that I have semantics that are comparable in the two clouds. We don't have any standards for describing any of that.
We don't have any way of telling cloud A and cloud B to cooperate with each other in the conduct of a common computation which might involve data sharing. None of the vocabulary which has grown up around the internet for this remote peer-to-peer interaction has been developed for clouds yet. And so if you're looking for a dissertation topic, this is one of them, this exploration of how to get clouds to interact with each other. But there are other research problems that haven't been solved, and in this part of the talk what I'd like to do is to persuade you that not only do we have unfinished work before us, but that it's possible to do that even though this internet has been around for quite a long time. Security we've already talked about, and plainly there's lots of work to be done there. Serious work in operating system design, including within the Linux context, I think is called for. We don't have very good formulas. Those of you who have studied traditional telecommunications will know about this guy Erlang, who figured out that a typical telephone call was three minutes, in a bell-shaped curve, not counting teenagers. And actually that's a cheap throwaway. It turns out that today's teenagers don't talk to each other, they text. They don't want to talk to each other because it's too tense. You don't know what to say next and the conversation falls apart and it's embarrassing, so they don't like to talk to each other on the phone. They just send text messages back and forth. But anyway, Erlang was able to measure the behavior of people on the telephone system because there's only one thing you could do, make a phone call. Well, in the internet we don't have that luxury. What happens is that tomorrow somebody will invent yet another way to use the internet, and it'll have different statistics at the edges of the net than we had before. So there are no Erlang formulas to help us plan the scaling and implementation of the internet at the edge.
In the core it's a different story. When you're aggregating large amounts of flow the law of large numbers actually helps you. But at the edges of the net the dynamic range of behavior is still extreme. And of course we have all these screaming and yelling matches about quality of service, whether we need it or not, or just add more capacity; that debate is going to go on for a long time. Distributed algorithms, which is something that we'll be talking about in one of the many conferences, is another place where a lot of effort is still needed to take advantage of clouds that can support concurrent computations in ways that we couldn't do with a simple ordinary single processor. I'm not going to go through every single one of these, but the place where I really get upset is in mobility generally, multi-homing, multi-path routing and broadcast. I made an absolutely awful mistake. I mean, I don't mean to take all the blame. I had colleagues who were participating in the design of the internet. But really this was in the split, in 1977, between TCP and IP. That split was made in order to provide for real-time delivery of data that didn't all have to get there. So speech, radar tracking, all the kinds of things where freshness was more important than getting everything there in sequence. Whereas TCP was working really hard to make sure that you could retransmit, get rid of duplicates and do all those other things. So here's the problem. When we made the split, I thought it was very clever to create what we called a pseudo header and bind the TCP connections closely to the IP addresses of the underlying IP layer, because we'd save header space and we didn't have to invent yet another address space for the TCP layer. That turned out to be a mistake.
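For reference, the kind of formula he's alluding to is Erlang's blocking formula, which works when there is a single well-characterized behavior to model (a sketch using the standard recurrence):

```python
# Erlang B formula: the probability that a new call is blocked, given
# offered traffic of `a` erlangs and `n` circuits. Computed with the
# standard recurrence B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
def erlang_b(a: float, n: int) -> float:
    b = 1.0
    for k in range(1, n + 1):
        b = (a * b) / (k + a * b)
    return b

# e.g. 5 erlangs of offered traffic on 10 circuits:
print(round(erlang_b(5.0, 10), 4))
```

This is exactly the planning tool the telephone network had and the internet edge lacks: it assumes one traffic model (calls with a known mean holding time), whereas the next internet application can change the edge statistics overnight.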
And the reason it's a mistake, if you haven't already figured that out, is that it bound the higher level protocols and applications to the IP address of the machine that happened to be connected to the net at the time. Now we could be forgiven, I guess, considering that when that decision was made around 1977, most machines didn't get up and move around. They were the size of two or three rooms and they required air conditioning and everything else and cables all over everywhere. But as we have moved to the point where our computing goes with us, our access to the internet has changed, and our IP address does not move with us; it changes as we move around. Contrast that with the mobile telephone network: those telephone numbers are no longer what they used to be. They used to be things that said exactly where you were in this physically switched network. Today they're just a label, and there are some underlying routing identifiers that figure out how to re-bind your telephone number to the underlying routing system as you roam from one service provider to another. So we could do that in the internet architecture. We could segregate the address space for the TCP layer and up from the address space of IP. Then the problem will be how to cope with the guy that says, hi, I'm at a new IP address now but I'm the same guy you were talking to before. Of course that's obviously a kind of a penetration attack. So you'd have to invent some sort of handshaking, probably with a cryptographic element to it, in order to prove that you're the same guy that used to be on a different IP address on this TCP connection or FTP or what have you. But I think that it's worth exploring those sorts of things. The IETF has some groups looking at shims and other sorts of techniques that would allow this sort of re-binding of the higher level applications to different IP addresses.
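The pseudo-header binding he regrets can be made concrete: TCP's checksum is computed over a pseudo-header that includes the IP source and destination addresses (field layout per RFC 793; the segment bytes here are a toy placeholder):

```python
# The TCP checksum covers a "pseudo-header" containing the IPv4 source
# and destination addresses, the protocol number, and the segment length.
# Change either address and the checksum of the very same segment changes,
# which illustrates how a connection is welded to particular IP addresses.
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                      # pad to 16-bit boundary
    total = sum(struct.unpack(f">{len(data)//2}H", data))
    while total > 0xFFFF:                    # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # Pseudo-header: src addr, dst addr, zero byte, protocol 6 (TCP), length.
    pseudo = src_ip + dst_ip + struct.pack(">BBH", 0, 6, len(segment))
    return ones_complement_sum16(pseudo + segment)

seg = b"\x04\xd2\x00\x50" + b"\x00" * 16     # toy 20-byte TCP segment
a = tcp_checksum(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", seg)
b = tcp_checksum(b"\x0a\x00\x00\x09", b"\x0a\x00\x00\x02", seg)
print(a != b)  # a new source address invalidates the old checksum
```

Separating the transport-layer identifier from the IP address, as the shim approaches he mentions do, is precisely about removing the addresses from this binding.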
That would solve the multi-homing problem too if you have multiple ISPs that deliver different IP addresses to you. You could use any of them because you'd be binding streams together in a higher layer than just the IP address. In the multi-path routing case, the way routing typically works in the net, you pick a path and you use it until it doesn't work anymore. It would be nice if there were multiple paths that you could push packets on all of them in order to get a higher capacity from edge to edge, but we don't do that. And finally the thing that really drives me crazy, we take broadcast radio capability and we turn it into a point-to-point link and you think about Wi-Fi and other things. We actually could make use of the fact that a broadcast could be received by multiple parties. That's what satellite television is about. That's what cable television is about. It's not just about video. I don't mean to narrowly focus this. It's really about being able to deliver the same thing to a large number of receivers at the same time. It's a very inexpensive way of delivering large amounts of data if everybody wants the same thing. Not everybody wants the same thing, but some large number of people may want the same thing like the latest software update for Linux or possibly a video or some other piece of information or program of some kind. So I imagine having satellite services that are raining internet packets down on 100 million receivers so as to do very efficient delivery of things that are popular and if you miss a couple of packets, you holler and you get a unicast update in order to recover from that. So again, once again, I see no problem actually implementing something like that. There are satellites in the sky that have a big footprint. They could easily be generating or at least relaying internet packets as opposed to what they do today. And I'm surprised that this hasn't already emerged as a business. 
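The "holler and get a unicast update" repair loop he describes is essentially negative-acknowledgement repair, sketched here with illustrative names:

```python
# Sketch of broadcast-with-unicast-repair: a receiver collects
# sequence-numbered packets from a one-way broadcast, notices the gaps,
# and requests just the missing pieces over a point-to-point link.
def missing_sequences(received: set, expected_count: int) -> list:
    """Return the sequence numbers that never arrived."""
    return [n for n in range(expected_count) if n not in received]

# Suppose packets 0..9 were broadcast but 3 and 7 were lost in transit:
received = {0, 1, 2, 4, 5, 6, 8, 9}
repair_requests = missing_sequences(received, 10)
print(repair_requests)  # [3, 7]  -> fetched by unicast from the sender
```

The economics follow from the asymmetry: the broadcast serves every receiver at once, and the unicast channel carries only each receiver's small list of losses rather than the whole payload.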
We talked a little bit about authentication and I would say that we have some distance to go to do a better job of authenticating everybody. We need standards and we need things that are internationally recognized as strong authenticators for parties that are transacting on the network. Multicore processors are an interesting problem space because Moore's law broke a few years ago. We aren't increasing the clock speed anymore every 18 months. Instead, we're increasing the number of cores that are on each chip. And that's all fine. You still get the same large increase of compute cycles that are available. The problem is you have to use them in parallel better than you could before. I'm not going to take any more time on that because I'm going to talk about that in another one of the small conferences. And I'm going to skip over delay and disruption tolerance and pick it up when we talk about the interplanetary internet, which I'll try to finish up with. This other thing on the right-hand side, governance of the internet, is a gigantic quagmire. The folks who live here in Australia are living a piece of that right now. Well, that's interesting. Does that mean I should stop now? That's pretty impressive. Anyway, folks here in Australia are living in one piece of this debate. It's been proposed that somehow the internet gets censored in order to protect people from things that they shouldn't see. My reaction to this is that doesn't sound like it's a very effective thing to do, especially if you're trying to do it by hacking DNS servers and things of that sort. I think we all appreciate that there are things that we might agree on a societal basis, on an international basis, that we would want to remove from the net. The best we can do is to remove it when we find it. We can't stop people from putting it up ahead of time. But I really think that this debate is going to go on forever as we see the system increasingly penetrating into every aspect of our lives.
Then the societal issues are going to become more and more paramount in the debates. And I hope that we can preserve the openness and freedom of the internet, which has allowed so much permissionless innovation that allows people like you and me to try new ideas out. We've talked about mobiles in this. Skip over that. Performance is another huge problem space. And it gets harder and harder as the net gets bigger and bigger. I know if you're trying to do something on the net and it isn't happening in a reasonable amount of time, you sort of wonder, well, what broke? And even if you have knowledge, like you do, of all the different things in the chain that could possibly go wrong, I want a WTF button that I can push that sort of says, okay, let me see if I can figure out why you're not getting the service you expected. I think it's really hard to find ways of not only measuring, but also articulating and identifying or exposing performance problems in the net. So there's some good design waiting to happen. And with regard to addressing, setting aside the V4 to V6 transition, it's reasonable to ask questions about what other things should be identifiable or addressable in the net. And it's not obvious that we should stop at addressing the interface to a computer. What about just a digital object that has been created, maybe it was a spreadsheet or a Word document or something else? Why couldn't it have an identifier? Of course you could say, well, what's wrong with the URL? And one answer is it depends on the domain name system. And well, what's wrong with the domain name system? Well, it is not necessarily the case that, long term, it will continue to be resolvable.
So one might start asking, well, is there some other scheme I can use to identify objects in the internet that would have a longer lifetime, that doesn't have the same potential brittleness of a domain name that is currently used in the URLs. We could talk about URNs as an alternative in the Web structure as a way to do that. I'm going to skip over policy right now because I'm more interested in, first of all, not running out of time and second, getting to a couple more technical points. Something that we do every day is the creation of complex objects. We use application software to build spreadsheets, to build complex Word documents, to build presentations and a variety of other things. And the files of bits that those applications create are only as useful as our ability to apply the application to those files. So one of the things that I'm becoming increasingly worried about is that we invest a huge amount of effort in creating these digital objects. And then, if someday the application software doesn't work anymore or isn't available, all the investment in the digital objects will evaporate. We'll just have a pile of rotten bits. So I've been calling this the bit rot problem. And it's more complicated than it looks. The typical analogy or metaphor I have in my head is that it's the year 3000 and I'm running Windows 3000, let's say, and I do a Google search and I turn up a 1997 PowerPoint file in my Google search. The question is, does Windows 3000 know how to interpret a thousand-year-old PowerPoint file? And the answer is probably no. And that's not a gratuitous dig at Microsoft. I think even if we had open source, it's not 100% clear that the open source functionality would be preserved for a thousand years so we can read these old digital objects. So I worry about this for a couple of reasons.
First of all, if someone decides to no longer maintain a particular application that you were dependent on, and if maybe the operating system it worked on becomes obsolete, then you're sort of out of luck because you can't run the application anymore. Open source kind of helps, because we might be able to keep running those applications. But what if they're proprietary applications that we've become accustomed to using, and we've made investments in creating objects using those applications, and the company goes out of business? What happens to the intellectual property that went into that proprietary software? So an example of the sort of thing that would be interesting would be to find a way to let cloud-based operation absorb this kind of application and make it accessible to everybody. Obviously, there are all kinds of intellectual property issues associated with that. Maybe you even have to preserve not only the application but the operating system version it ran on, and once again there will be more intellectual property issues. So in a way, what you're doing with Linux is helpful, because you've created an environment where that itself may not be as much of a problem. But I am worried that we are not thinking our way through preserving our digital stuff. And 10, 20, 30 years from now, or even 100 years from now, people may wonder about the early 21st century, because all of our stuff won't be interpretable anymore, so we will all just be a big pile of rotten bits as far as they're concerned. I don't know how to solve that problem except to chip away at some of the specifics. Now, everyone has heard the term Internet of Things, and I'm expecting to see an increasingly large number of devices on the net. I love the guy that made this Internet-enabled surfboard. He's in the Netherlands.
I haven't met him, but I have this picture of him sitting on the water thinking, you know, if I had a laptop in my surfboard, I could be surfing the Internet while I'm waiting. Good man. So, I mentioned earlier that sensor nets are likely to be on the system. This is a little eye chart: a diagram of an IPv6 wireless sensor network running in the house. It's a commercial product from Arch Rock, which I guess was just acquired by Cisco. It samples temperature, humidity, and light levels every five minutes in the house and records that on a server down in the basement. The wine cellar is a very important room in the house; I have to keep it below 60 degrees Fahrenheit, and if it goes beyond that temperature, I get an SMS on my mobile telling me, you know, your wine is warming up. That actually happened after I was away for several days, and I kept getting messages every five minutes saying, you know, you're in trouble. So I asked the Arch Rock guys if they made remote actuators that I could go in and install. They said, yes, that's a project they need to do. But I can also tell whether anybody's gone into the wine cellar: if the lights go on, that'll be recorded. But I don't know what they did in there. So in particular, I thought, well, maybe I should put RFID chips on the bottles, and then, you know, I could tell if anything leaves the wine cellar without my permission. But one of my friends was debugging the design for me, and he said, well, you know, you can go into the wine cellar, drink the wine, and leave the bottle. So now we're going to have to put the sensors in the cork. And if you're going to go to that trouble, you might as well be sampling the esters to figure out whether the wine is ready to drink: before you open the bottle, you interrogate the cork. And, you know, if that's the bottle that got up to 90 degrees at some point, that's the wine you give to somebody who doesn't know the difference. So there's something practical in all that.
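The alerting behaviour described above is simple to sketch. The following is a hypothetical illustration, not the actual Arch Rock product code; `read_temperature_f` and `send_sms` stand in for whatever real sensor and SMS APIs such a system would use.

```python
# Hypothetical sketch of the wine-cellar alert: sample periodically,
# send an SMS whenever the temperature exceeds the threshold.
ALERT_THRESHOLD_F = 60.0      # keep the cellar below 60 degrees Fahrenheit
SAMPLE_INTERVAL_S = 5 * 60    # one sample every five minutes

def check_cellar(read_temperature_f, send_sms):
    """Take one sample; alert if the cellar is too warm. Returns the reading."""
    temp = read_temperature_f()
    if temp > ALERT_THRESHOLD_F:
        send_sms(f"Wine cellar at {temp:.1f} F -- your wine is warming up!")
    return temp
```

In a real deployment this would run on a timer every `SAMPLE_INTERVAL_S` seconds and also log each reading to the basement server.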
So the sensor nets are going to be everywhere. This smart grid stuff is taking off, and we're going to see more. We'll be gathering data; the buildings will know more about us and the environment and everything else. We'll be swimming in a sea of information, and of course we all have to make sense of all that. The smart grid in the U.S. is moving along in that domain, too. I'm running a little over, but I'm going to finish up with this interplanetary internet stuff. Now, the last time I mentioned this, some people thought, okay, he's off his chump. Is he expecting to communicate with aliens? Or should I be worried about alien porn? You know — the problem is we can't even figure out, did you see the ovipositor on that thing? So this is actually a serious piece of engineering. Any of you who get a kick out of taking what sounds like a crazy idea and actually making something work as an engineering project will, I think, appreciate this. My colleagues at the Jet Propulsion Lab and I got together in 1998, and we said, look, the networking of space right now is point-to-point radio links, and that's not a very rich network. Can't we do better? Can't we create a networking environment that will allow us to have multiple spacecraft communicating with things on the ground, things that are moving, maybe sensor networks sprayed across the landscape? And we said, can't we use TCP/IP to do that? Of course the answer was: it works okay on Mars; it doesn't work okay between the planets. There's a little problem: the speed of light is too slow. The distance between Earth and Mars varies from 35 million to 235 million miles. That's a variation of three and a half minutes to 20 minutes one way. Can you imagine writing a browser program? You click on your mouse and it's 40 minutes before the first bit comes back. And I know you've got networks with those problems here, but that's not because of speed-of-light delay.
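The one-way delay figures quoted above follow directly from the distances. A quick back-of-the-envelope check, assuming straight-line propagation at the speed of light:

```python
# One-way light delay between Earth and Mars at the distances quoted above.
SPEED_OF_LIGHT_MI_S = 186_282  # speed of light in miles per second

def one_way_delay_minutes(distance_miles):
    return distance_miles / SPEED_OF_LIGHT_MI_S / 60

print(round(one_way_delay_minutes(35_000_000), 1))   # closest approach: prints 3.1
print(round(one_way_delay_minutes(235_000_000), 1))  # farthest: prints 21.0
```

That is roughly the three and a half to twenty minutes quoted, and the "40 minutes before the first bit comes back" is the round trip at the far end of the range.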
So then there's this other problem: celestial motion. The planets are rotating; we haven't figured out how to stop that. So when you're talking to something on the surface that rotates, you can't talk to it until it comes back around again. So there's delay and there's disruption — this is a great shot from the rovers — and we concluded that we were going to have to build systems that have delay and disruption knowledge in them, in the architecture and in the protocols. So we did that. We developed a set of protocols we call DTN-type protocols — delay- and disruption-tolerant networking. They have not only been implemented, but we put them on the space station. We put them up on board the EPOXI spacecraft that just rendezvoused with comet Hartley 2. We're experimenting with some prototype implementations on Android, and we're hoping to persuade the Consultative Committee for Space Data Systems, which includes all the spacefaring nations, to adopt the use of these delay- and disruption-tolerant network protocols in order to make standard a rich communication networking environment for space exploration, both manned and robotic. So what we're hoping, frankly, is that over a period of decades we'll literally grow an interplanetary backbone, because once a particular spacecraft has completed its primary mission, it can be repurposed to become a node of an interplanetary network. So — what is it telling me to do here? Possibly dinner, right? — we're hoping that we will actually grow an interplanetary backbone over time. I won't see the end of it, but it's been a lot of fun to see the beginning. Okay, we're going to do Q&A, but I need to warn you ahead of time that I'm hearing impaired. So when you get a microphone to ask a question, you're going to need to hold this little gadget. It's an FM transmitter and a microphone, and I am — hang on, my hearing aids just turned off — I am the guy that came to talk and wouldn't listen. There we go. Hello, hello, testing?
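The store-and-forward idea at the heart of those DTN protocols can be sketched in a few lines. This is a toy illustration of the concept only, not the real Bundle Protocol; the class and method names are made up for the example.

```python
# Toy illustration of DTN store-and-forward: a node takes custody of
# bundles and holds them until a contact window with the next hop opens,
# rather than dropping them the way an IP router would when a link is down.
from collections import deque

class DTNNode:
    def __init__(self, name):
        self.name = name
        self.stored = deque()   # bundles waiting for a contact window

    def receive(self, bundle):
        self.stored.append(bundle)  # take custody: store, don't drop

    def contact(self, next_hop):
        # A contact window is open: forward everything we hold.
        while self.stored:
            next_hop.receive(self.stored.popleft())

orbiter, lander = DTNNode("orbiter"), DTNNode("lander")
orbiter.receive("telemetry-bundle")
# ...the link to the lander is down while the planet rotates; the bundle waits...
orbiter.contact(lander)      # contact window opens: bundle is forwarded
print(list(lander.stored))   # prints ['telemetry-bundle']
```

The real protocols add custody transfer, scheduling of predicted contacts, and routing across many hops, but the custody-and-wait behaviour above is the basic departure from TCP/IP.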
Oh, wait a minute, I had to turn the power on. This is where I run out of batteries. Testing — yes, okay. So if you will hang on to this thing while you ask the question, that will help a lot. Okay? If you have comments rather than questions, that's fine too. Hands up and we'll come to you. Okay, running, running. And make sure you return that little FM transmitter; it costs about $850. So, about the bit rot problem: at the moment there's a lot of data from, say, 2000 years ago that we no longer have because it rotted physically. Is it likely that the same situation will happen with the bit rot problem — that we will lose lots of data, but that's going to be okay because we'll keep some? Well, first of all I have to say that we are less at risk because of the media than we are because of the formats. The reason is that it should be possible to move bits from one medium to another, so I'm more sanguine about that part of the problem — although I completely accept that when somebody shows you a DVD and asks how long is that going to last, or how long will the reader of the DVD last, you don't quite know the answer. And when the librarian comes and shows you a vellum manuscript that's a thousand years old and still readable, you sort of cringe and think, boy, we have some work to do. I am more worried right now about preserving our ability to interpret the bits than anything else. Okay, next question. I just wanted to ask you about the TCP/IP binding problem. It's an issue that we've seen come up, especially with, as you said, mobile; there are quite a lot of us network engineers who know about it. What's actually happening on a research level for that? I haven't really heard anything that's been going on globally to try to address it — is there a concerted effort? Yes, there is.
In fact there is at least one, possibly more than one, IETF working group looking at this problem — in particular, breaking the IPv6 address space up into 64-bit pieces and introducing possibly a shim layer. I don't know — maybe some of you remember the acronym for the working group; it's just gone out of my head — but you should be able to find it in the IETF working groups, and I'd recommend that you have a look there because there's real progress being made. Wow, this is really hard, isn't it? This is part of the health plan for the folks who are here. So, one of the things that has been recently in the news is Jim Gettys' work on buffer bloat and how it's affecting TCP congestion control. What are your thoughts on that? I missed one word, I'm sorry — I missed one word at the very beginning. It was something that was affecting the congestion control. Buffer bloat is affecting TCP congestion control? Oh — buffer. Oh, this is the buffer bloat. Oh, God, yes, Jim Gettys. Jim is in the process of writing a couple of specific articles about this. It's a huge problem. I don't know how many people know about buffer bloat — yeah, so I don't need to tell you what it is. My reaction right now is that the only way we're going to fix this problem is to get the people who make the devices that have these large buffers in them to artificially reduce their size. That's the only way we'll get rid of it. The problem is the feedback loop: it takes too long to discover that there's a problem, because we allow everything to fill up the buffers. So my reaction is that because memory got cheap, people stuck buffers in because they thought that would help, and in fact at some point it doesn't. I hope that Gettys and others are able to persuade people that they really need to re-engineer systems to not have more memory than is absolutely necessary. One funny thing, apropos of buffer bloat — I'm sorry, where's the question coming from?
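The "feedback loop takes too long" point is simple arithmetic: a standing queue adds latency equal to the amount of buffered data divided by the link rate, so oversized buffers stretch the delay TCP's congestion control depends on. A back-of-the-envelope illustration:

```python
# Back-of-the-envelope buffer bloat: the extra latency a full buffer adds
# is (buffered bytes) / (link rate). Memory got cheap, so buffers grew,
# so the congestion feedback signal arrives seconds too late.
def queue_delay_ms(buffer_bytes, link_bits_per_s):
    return buffer_bytes * 8 / link_bits_per_s * 1000

# A 1 MB buffer draining over a 1 Mb/s uplink adds eight full seconds of
# queuing delay -- far too slow for TCP's feedback loop to be useful.
print(queue_delay_ms(1_000_000, 1_000_000))  # prints 8000.0
```

This is why the fix discussed above is to shrink (or actively manage) the buffers rather than to add more memory.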
There you are, thank you. One interesting observation apropos of buffer bloat: there was actually advice given to all the conference attendees from the Australian networks here that, because we're so far away over the undersea cable, we should increase the size of our buffers to make things work better. I was sort of amused by that. How many bad things have happened where somebody said, I was only trying to help, right? We're good. Thank you. Keep me running — we have time for a couple more here. I'm not going to the gym this week. So, when you are not presenting, what do you get to hack on? Okay, that's a good question. The interplanetary stuff is one of my hobby horses. The rest of the time I'm running around trying not so much to write any software — which I haven't done in a while — as to persuade people that they should want to be writing software to do various things, which is part of the story here. Part of my time I get to spend on university campuses, in particular trying to help not only graduate students but their professors recognize that there are some serious, hard problems that deserve attack, that everyone would benefit from if they were solved. So I spend more of my time being an evangelist. What else do you expect from that title? I didn't ask for that title, by the way. When they asked me what title I wanted, I said, how about Archduke? You notice I didn't end up with that title. First they said it didn't fit with the nomenclature, but the more important part was that the previous Archduke was Ferdinand, and he was assassinated in 1914 and it started World War I, so it wasn't such a good title to have. Next question. With ICANN and the domain name system, and the fact that they're now basically making it so you can have anything as the domain name, isn't that going to cause more complication?
The entire point of the domain name system is to make it simpler, so that you don't have to remember a TCP/IP address, and now all of a sudden we're going to have anything as a domain name. Isn't that going to add complication to the system? Well, it may. I don't think you could remember today every possible domain name, even forgetting the non-Latin ones. The more important observation, which is going to sound very self-serving, I think, is that search is a really great way to find things. But what's important is that, having searched and found something, having a domain name or something fixed that you can return to is absolutely essential; otherwise email and other things wouldn't work. So I think we have to rely on our computers to remember the specifics for us. I'm not sure how many people still try to guess domain names and type them in — maybe they do, and it doesn't work, and then they search. So my guess is that's really where we're going to end up: searching and remembering. I think we have time for one more. One more. In your opinion, do you think Google would look favourably on any Google Lunar X PRIZE team competing that might use the interplanetary internet protocol to communicate back here? My honest answer is that I've been trying to persuade the guys that are funding that to provide a free interplanetary protocol — a Bundle Protocol implementation — and not require anybody to use it, but make it available freely, and I would love that. So I don't think there would be any special favours, but I think it would help if we just got over that problem and made it available to everybody. Okay, that's all the time we've got. Thank you so much. Please thank Dr. Vinton G. Cerf. Thanks, guys, that's great. You know, you would not be clapping if you knew that my next stop was the Barossa Valley and McLaren Vale — that's the real reason for coming to Australia. To recognise Vint's contribution, we have this bowl made of Queensland macadamia wood.
Thank you very much. Thank you very much. Okay, you've got somebody else coming up right now. Thank you, Vint. Just one little reminder before we head off: morning tea will be available outside now. Before we head off to that — I was a little bit disappointed yesterday that we only made it to the top trend on Twitter by about 1pm. I would have expected us to be there at least by about 11 o'clock, so let's see if we can do a little bit better today, for all those social freaks out there. Are we already there, are we? Wonderful. Okay, guys, thank you. Thank you, Dr. Vint.
Is this one on? We'll start off in a minute or two, so welcome to the Sysadmin MiniConf. If you're expecting a different MiniConf, you need to leave now. First up, a couple of announcements. First off, if you have a mobile phone, can you please put it on mute or turn it off? If you're not on call, now is a really good excuse to turn it off. We are ordering you to turn off your mobile phone. Secondly, our latest schedule is this online one here. There are some slight changes from the printed schedule. The main change is the Samba talk, which was going to be this afternoon; it's now this morning. And the other thing is that at 14:45 we have a ten-minute gap, with about five minutes free in there. If you have a very short lightning talk you want to squeeze in, please come up and discuss it with me.
And that's probably about it, so we'll just lead on to the first talk: Steve Dass, talking about DevOps.