First of all, thank you so much, Brian, for a very warm and welcoming greeting. I really appreciate the opportunity to be here. It's my first opportunity to come to Canberra. I had several reasons to want to do this. One, of course, was coming here to ANU. I had had the opportunity to listen to Brian speaking both at the Heidelberg Forum and also at the Lindau Forums for the Nobel Prize winners, to say nothing of sampling some of his wine, which I can report is an excellent product. So I was looking forward to coming here, not only to visit with you, but also to spend some time at the ground station for the Deep Space Network, which I managed to spend a couple of hours at. That turns out to be quite relevant to some of the things I'm going to talk to you about tonight. I've been told that I have 40 minutes to do this, so please set your modems for 50 gigabits per second, because that's what we're about to do. I'm going to take you back in time to give you a little flavor for the story of the evolution of the Internet, give you some sense of some of the challenges that lie ahead, and then give you an update on the interplanetary extension of the Internet.

So, whoops, I just blew that one. There we go. This is a photograph that was taken on the 25th anniversary of the ARPANET, the predecessor to the Internet, and it was taken in the interior of a church, the Christian Science Church in Boston. This is their map room, and somehow it just seemed like a dramatic backdrop for the picture. A number of those people are still around, but some of them have passed away, including one just yesterday, Frank Heart, who's over on the far right in the front. So we are losing some of our giants, but many of them are still busy and active.

This is a picture of the original four-node network called ARPANET. It was built for the Defense Advanced Research Projects Agency. The purpose behind this network was to link computers that were in use at about a dozen universities where artificial intelligence and computer science were being studied on behalf of the Defense Department. The funding for this came from a man named J.C.R. Licklider, who believed that networks could be used for more than just computation. He was interested in their use as communication tools, as systems for allowing collaboration among parties. Douglas Engelbart, who was the inventor of the mouse and of what was called the oN-Line System at SRI International in Menlo Park, in the San Francisco Bay Area, was equally convinced that computers could be used to augment human capability. And those of you who are looking at headlines today, reading about machine learning and artificial intelligence, can appreciate the vision that these people had 30 or 40 years ago.

So I was a graduate student at UCLA, and I wrote the software to connect the Sigma-7 computer to the interface message processor, the packet switch of the ARPANET. The Sigma-7 is in a museum somewhere today, and some people think I should be there, too, but I'm here. This is what a packet switch looked like in 1969. It was the size of a refrigerator. It was delivered to UCLA by a company called Bolt Beranek and Newman in Cambridge, Massachusetts. And it was enclosed in a heavy metal container, because they knew it was a military project and it was being delivered to an extremely hostile environment: a university filled with graduate students and undergraduates. And we understand that.
So that was the size of a packet switch then. Today you carry something like that around in your pocket, called a smartphone. To demonstrate the primitive nature of the ARPANET, my colleagues and I got together in 1994. It took us an entire day to set up this shot for Newsweek magazine, because we had to find zucchinis and yellow squash and five-pound tins of coffee and string everything together. But we put this together as a geek joke, because if you look carefully at the network, you'll notice that it's mouth to mouth and ear to ear, not ear to mouth, so this network would never work. We figured that not too many geeks would notice this. I'm sorry to say that one of us, Jon Postel, passed away too early in 1998, but Steve Crocker and I both ended up serving as subsequent chairmen of the Internet Corporation for Assigned Names and Numbers, which Jon Postel helped to start way back in 1998.

We were doing this project for the Defense Department, and so I felt compelled to bring the military out to see what packet switching could do for them. This is a packet radio van. It was being driven around the Bayshore area, radiating packets in a mobile environment. Now you do this all the time with your mobile phones, but at that time it was a difficult thing. This is in the early 1970s. So I wanted the military to see that we were doing something that would be relevant to them. The inside of the packet radio van had packet radios here. These are cubic-foot radios that cost $50,000 each. They ran at 100 kilobits and 400 kilobits a second, which at the time was very fast. I mean, today that's sort of a dial-up modem.

We were doing experiments with packetized voice at that same time, because they were going to use this for military command and control, and we knew they would need voice, data and video. In order to carry voice traffic in this packet-switched environment, we had to compress the voice signal down from the normal 64 kilobits a second, that is, 8,000 samples a second at 8 bits each, to 1,800 bits a second, so we could carry more than one voice conversation on a 50-kilobit or 100-kilobit channel. It turns out that when you compress the voice down to 1,800 bits a second, it loses a certain amount of quality, and so anyone who spoke through the system sounded like a drunken Norwegian. Now, by this time I'm in the Defense Department in Washington, D.C., and I have to demonstrate this system to a bunch of generals, and I'm trying to figure out how we're going to do this. And then I remembered that one of the participants in the packet voice program was from the Norwegian Defence Research Establishment. So we had Ingvar speak through the normal voice telephone network. Then we had him talk through our packet voice system, and it sounded exactly the same. We just didn't tell the generals that everybody would sound that way.
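To make that compression arithmetic concrete, here is a rough back-of-the-envelope sketch in Python; the 50- and 100-kilobit figures are the link speeds mentioned above, and the call counts are simple integer division, ignoring packet overhead.

```python
# Back-of-the-envelope arithmetic for the packet-voice story above.
samples_per_second = 8_000   # standard telephony sampling rate
bits_per_sample = 8
pcm_rate = samples_per_second * bits_per_sample   # 64,000 bits/s uncompressed
compressed_rate = 1_800                           # bits/s after compression

for link_rate in (50_000, 100_000):               # ARPANET-era link speeds
    print(f"{link_rate // 1000} kb/s link: "
          f"{link_rate // pcm_rate} uncompressed call(s), "
          f"{link_rate // compressed_rate} compressed calls")

# A 50 kb/s link carries no 64 kb/s calls at all, but roughly 27 calls
# at 1,800 b/s, which is why the compression was worth the loss in quality.
```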
In 1974, Bob Kahn and I published a paper in the IEEE Transactions on Communications giving a detailed description of the Internet design. A copy of that particular volume of the Transactions sold for $34,000 at an auction recently. So of course I went to my files to see if I had any more copies. So much for that retirement plan; I didn't find any more.

This is not readable, I'm afraid, but it's a plaque at Stanford University, and the reason it exists is that I had a team of people working on the ARPANET and the Internet during the earliest days. The point I want to make is that this was not a purely American invention. It's true that Bob Kahn and I did the original design, but in my laboratory at Stanford University I had people from Norway, from Japan, from France, from Germany, from Italy participating in the design, implementation and testing of the system. And so this was a very international activity from the very beginning.

In 1977, before the Internet became operational, I was concerned that the program needed to show that these protocols would work to connect multiple networks together. The ARPANET had shown how you could connect disparate computers, different brands of computers, HP, Digital, IBM, together in a common homogeneous network. The Internet showed that we could do this over multiple and different types of packet-switched nets. So we had a mobile packet radio net, we had the original ARPANET, and we had a packet satellite network in operation over the Atlantic, linking Europe and the U.S. And so in 1977 we had the packet radio van that you saw the picture of, with all the military driving up and down the Bayshore Freeway, radiating packets through a gateway into the ARPANET, across the ARPANET to Europe, down to University College London, through another gateway to the packet satellite network, and then back to the United States through the ARPANET again to the USC Information Sciences Institute. Well, San Francisco and Los Angeles are about 400 miles apart, but the packets traveled about 100,000 miles, because they went through two synchronous satellite hops back and forth across the U.S. and the Atlantic. And it worked. I remember leaping around saying, it works, it works, as if it couldn't possibly work. It's software, and when software works, it is a miracle. So this was a very important demonstration of the technology of the time.
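That 100,000-mile figure is roughly what two geostationary satellite hops give you. Here is a minimal sketch of the arithmetic, assuming the standard geostationary altitude of about 22,236 miles and a very rough allowance for the terrestrial legs; slant range and the exact ground routing are ignored.

```python
# Rough path-length arithmetic for the 1977 three-network demonstration.
GEO_ALTITUDE_MILES = 22_236      # approximate geostationary orbit altitude

one_hop = 2 * GEO_ALTITUDE_MILES     # ground -> satellite -> ground
two_hops = 2 * one_hop               # one hop out over the Atlantic, one back

terrestrial = 10_000                 # generous allowance for the ARPANET legs
print(two_hops)                      # ~88,944 miles for the satellite hops alone
print(two_hops + terrestrial)        # on the order of 100,000 miles in total
```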
The National Science Foundation, the Department of Energy, and NASA all decided they wanted to be part of this growing network environment. NSF wanted to connect 3,000 universities around the United States, and so they funded and had built the National Science Foundation Network, NSFNET, to link those universities together, and they cleverly arranged not to have all 3,000 connected to the same net. Instead, they had about a dozen intermediate networks, each of which could connect a bunch of different universities, and then those dozen networks would connect to the NSFNET backbone. The reason this was smart is that it meant the backbone operator only had a dozen customers to worry about, not 3,000 of them. But it also illustrated how powerful the Internet technology was, because it allowed this arbitrarily large number of networks to interconnect with each other as if they were all one uniform system.

This is what the Internet looks like now. It's bigger, it's global, it's very colorful. The purpose of the colors in this picture is to give you the sense that there are many, many different network operators running concurrently, and that's exactly true: there are half a million different operators of networks in the system. They all decide what hardware they're going to use and what software they're going to use, they decide who they're going to connect with under what terms and conditions, they decide what business model they're going to use; some are for-profits, some are non-profits, some are government-run. The point is that this is an entirely distributed system. There isn't any central control. The only reason it works is that everybody adopted the same set of protocols, called TCP/IP.

And so this is a good example of the power of networking. It's also a good example of the power of under-specification. In other words, we did the minimum we could get away with to make this architecture work, because we didn't know at the time what all the applications might be. The consequence of this is that as new technologies have come along, we've been able to sweep them into the Internet to support the movement of packets around the network. And as new applications have come along, the system has continued to absorb and support those applications as speeds have gone up, as new ideas have come along, and as new platforms have become available, like your mobiles, laptops and tablets. So the architecture is very layered. The Internet Protocol layer is the stupid layer, because the packets don't know how they're being carried and don't care, and they don't know what they're carrying. It's a little bit like postcards: the postcards don't know how they're being carried, and they don't know what's written on them. And the Internet takes advantage of exactly that ignorance, so that when we invent new transmission technology like optical fiber, we can sweep it in without change to the architecture. When somebody invents a new application, the packets of the Internet and the Internet itself don't have to change, because the packets don't know what they're carrying. All the information about the application is at the edges of the net, whether it's in the cloud computing systems or at the edges where your laptops, desktops and mobiles are. So this has allowed the Internet to continue to evolve in a very dramatic way.

Now, there's a lot of work that still should be done and could be done, and I think will be done. I don't propose to go through every one of these bullets in great detail, partly because of the amount of time I have, but the thing I want to emphasize to you is that there is room for growth and evolution in this architecture. It is not stuck in the past, even though it was designed, what, 45 years ago. Standards are super important. They come from many different agencies, as you can see in that list. And the reason standards are important is that they substitute for a bunch of bilateral negotiations to try to get pieces of technology to interwork. If everyone adopts the same standards, then you implicitly get interoperability out of that, assuming the standards are right.

It's very important that the software that animates many of the devices on the network be reliable, especially considering that we're going to be relying on devices that we call the Internet of Things for virtually everything. We already rely very heavily on our mobiles, but you can imagine household appliances, security systems and the like, to say nothing of self-driving cars. We also know that we don't know how to write software without bugs, and I'm sure every one of you has experienced that phenomenon, so we need to be able to update the software. But in order to do that, we have to make sure that the receiving device knows where this software came from. Does it still have integrity? Has anybody modified it while it was on its way from the source to the updated device? So we need to use things like digital signatures to make sure the software is coming from a safe source. That's why strong end-to-end authentication is an important element in this dynamic environment.
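As a sketch of what that digital-signature check can look like in practice, here is a minimal example using Ed25519 signatures. It assumes the third-party Python cryptography package; the update payload and function names are made up for illustration and are not any particular vendor's update mechanism.

```python
# Minimal sketch: a vendor signs an update, a device verifies it before install.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Vendor side: a long-lived signing key; the public half ships with the device.
vendor_key = ed25519.Ed25519PrivateKey.generate()
device_trusted_pubkey = vendor_key.public_key()

update_blob = b"firmware v2.1 image bytes..."          # hypothetical payload
signature = vendor_key.sign(update_blob)

# Device side: refuse to install anything whose signature does not check out.
def install_if_authentic(blob: bytes, sig: bytes) -> bool:
    try:
        device_trusted_pubkey.verify(sig, blob)
        return True                                     # safe to install
    except InvalidSignature:
        return False                                    # reject tampered update

print(install_if_authentic(update_blob, signature))                  # True
print(install_if_authentic(update_blob + b"tampered", signature))    # False
```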
I think everybody here would agree that confidentiality and privacy are very important. Recent legislation passed in the European Union, called the General Data Protection Regulation, the GDPR, emphasizes how powerfully important it is to protect people's privacy in this increasingly online environment.

Bob Kahn and I did make one small computational error when we did the original design. We thought, well, let's see, we've just finished building the ARPANET. That was not exactly a trivial effort, and it was a national-scale network. And so as we were designing the Internet, which we knew had to be global because the Defense Department had to be able to operate everywhere in the world, we said, okay, let's see, how many networks will there be per country? And we thought, well, there should be at least two, so there'd be some competition. And then we asked ourselves how many countries there are, and there wasn't any Google to ask. So we guessed 128, because that's a power of two, and powers of two are what programmers think in. Two networks per country gives 256 networks, which is 8 bits. And then we said, how many computers will there be per network? And we said, well, let's go crazy, how about 16 million? That's 24 bits of address space. So 8 plus 24 is 32 bits of address space, which, densely allocated, would give you 4.3 billion terminations, which is more than there were people in the world at the time. And so we thought that surely is enough for an experiment. And it was, in fact. But then around 1989, the Internet became commercially available, and so the experiment escaped into the public. And we knew somewhere around 1992 or so that the 32-bit address space was not going to be enough, because of the proliferation of local area nets, for example, and eventually, of course, mobiles and IoT devices.

So the Internet Engineering Task Force developed IP version 6. You're using IP version 4. Some of you are using IP version 6. If you don't know that, that's a good thing; you shouldn't have to know. It has 128 bits of address space, which, if you do the math, is 3.4 times 10 to the 38th addresses. That's 340 trillion trillion trillion addresses, which should be enough to last until after I'm dead, and then it's somebody else's problem. I used to go around telling everybody this means that every electron in the universe can have its own web page, until I got a note from somebody at Caltech: Dear Dr. Cerf, you jerk, there are 10 to the 88th electrons in the universe and you're off by 50 orders of magnitude. So I don't say that anymore. But it's important to get this additional address space in. So please, if you have an ISP that's serving your needs, call them up and ask, when can I get IPv6? Because they're complaining that nobody is asking for it. I don't think anybody should even have to know about it, but please ask and see if you can get a good answer from them.
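The address-space arithmetic above is easy to check; a quick sketch:

```python
# Checking the address-space arithmetic from the talk.
network_bits = 8      # 2 networks/country x ~128 countries -> 256 networks
host_bits = 24        # ~16 million computers per network
ipv4_bits = network_bits + host_bits

print(2 ** ipv4_bits)        # 4,294,967,296 -> the ~4.3 billion IPv4 addresses
print(f"{2 ** 128:.1e}")     # 3.4e+38 IPv6 addresses
```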
Let's see, I'm not going to go through all the rest of this stuff. Oh, there's one thing: the stable identifier issue is very important. Those of you who use the World Wide Web, and I'm sure we'd all raise our hands if I asked, recognize what a URL is, the uniform resource locator. And you'll notice that inside a URL is a thing called a domain name, which I'm sure you're all familiar with. Domain names are not stable. The reason they're not stable is that you have to rent the domain name, and if you forget to pay the rent, it may go away. Somebody else may acquire it. And so the identifiers used in references to papers and websites and other things may go away. Sometimes you'll type a URL and you'll get back error 404, page not found. It may be because somebody has forgotten to renew the domain name. Think of all the references in printed papers to URLs that may no longer resolve in 10 or 15 or 20 or 30 years. So I'm very concerned about this. I'm concerned about preserving digital information, which is actually not as solid and stable as some other media are. I mean, there's vellum, which has lasted for 2,000 years. There are photographs that last 150 years. How long do you think a CD-ROM is going to last, you know, its polycarbonate disc? Does anybody here remember five-and-a-quarter-inch floppy disks? Three-and-a-half-inch floppy disks? CD-ROMs, DVDs, Blu-ray? Sometimes the bits are still there; you just can't find a reader to read them. So I'm very worried about digital preservation. I have a very long half-hour rant on that, which I won't do to you, but I do want you to know that people are working on this problem, to make sure that the bits you are creating and want to preserve can be preserved for long periods of time.

So that's the plan. There are more things. I've already talked a little bit about the Internet of Things. I'm very concerned about our focus of attention now on artificial intelligence and machine learning, and the reason is that it's dramatic technology. With multi-layer neural networks, we are seeing just extraordinary things happening. Those of you who've seen the headlines know about Google's AlphaZero and AlphaGo; AlphaGo successfully played against some of the world's best Go players in Korea and in China. We see machine learning showing up in machine language translation, for example, natural language translation by machine. Although it's not exactly perfect, it's a tremendous improvement over what it used to be. But I had one incident when I was in Heidelberg, actually. I was looking up the weather, and I didn't think about it, but the website I got was a German website. The translation was done so quickly by the Google Chrome browser that I thought I was looking at an English-language weather site. So I was looking at the weather report, and it said, chance of rain, 0%; chance of fog, 0%; chance of ice cream, 0%. So I took a screenshot of that and brought it over to my Heidelberg friends and said, you have ice cream storms here? Well, it turns out that Eis in German can be translated as either ice, as in hail, or ice cream, and it's very commonly used in the ice cream context. So I sent that to my natural language friends and said, I think our machine learning algorithm needs a little tuning.

I am worried, though, about the brittleness of some of these machine learning algorithms, and I bring this up only to express to you how important it's going to be for the computer science community to be attentive to the brittle character of some of these things. A typical example of this would be image recognition. You can train a multi-layer neural network to recognize images of animals: cats, dogs, kangaroos, things like that. The problem is that after you get done doing all the training, it's possible to take an image of a dog, for example, alter just a few pixels, and have the system assert that it's a kangaroo. Without going into the details of why that happens, it is a side effect of the way in which the training works that the result can be quite sensitive to small changes in the images, the same kinds of small changes that might cause language translation to fail.
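To illustrate the kind of brittleness being described, here is a toy sketch, not any real image classifier: the "pixels" are just random numbers and the "classifier" is a single linear score, but it shows the mechanism behind fast-gradient-sign-style adversarial examples, where a small, carefully chosen nudge to every pixel flips the decision.

```python
# Toy illustration of classifier brittleness (not a real image model).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)          # weights of a toy linear "classifier"
x = rng.normal(size=1000)          # a flattened "image"

def label(v):
    return "dog" if w @ v > 0 else "kangaroo"

# Fast-gradient-sign-style perturbation: nudge every pixel a small amount
# in the direction that pushes the score toward the opposite class.
eps = 0.2
x_adv = x - eps * np.sign(w) * np.sign(w @ x)

print(label(x), "->", label(x_adv))                  # the decision flips
print(round(float(np.max(np.abs(x_adv - x))), 2))    # yet no pixel moved more than 0.2
```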
So the thing we should be careful about is to be, let's say, a little skeptical about the machine learning and AI algorithms that people put forward with their spectacular results, and to keep in mind that they may not always be perfect and may, in fact, be brittle in places where we least expect it.

Anyone who reads the headlines knows that we have big problems with misinformation and disinformation. We look for ways of trying to detect it, to filter it out, or at least to warn people that they're seeing information which is invalid. The problem we have is that it's not easy for a machine to figure out that something is disinformation or misinformation, particularly if we're relying on classical social networking signals, where people are voting and clicking likes, as they say on Facebook. Then we have the problem that it looks like the masses, or the crowds, let us say, are saying, this is useful, good information; I'm liking it and I'm pointing to it.

Some of you will have heard of the Turing test. The original Turing test was a person interrogating a person and a computer, not seeing either one, but simply exchanging messages, and the task of this person was to figure out which is the machine and which is the person. If the interrogator fails to tell the difference, then the computer in that picture has passed the Turing test, because it can't be distinguished from a human. I'm inventing, on the spot here, Turing test two. This is a computer interrogating another computer and a human being, trying to figure out which is which. And if the computer that's doing the interrogating can't tell the difference, then it has failed Turing test two. The problem that raises is bots: these are programs, machines that have been taken over by botnet herders and used to pretend to be people in social networking environments, for example. So the bots raise their hands and say they like this, or they introduce content on the net. And if we can't tell the difference between a bot and a human being, then the crowdsourcing that the bots are involved in will mislead us into thinking something is legitimate when it isn't.

I don't think technology is going to solve this problem entirely, and so I am a big fan of something called critical thinking. When people are reading things on the net, I want them to ask: where did this come from? Who put this up there? Is there corroborating evidence for what's being presented? Is there some motivation that I should know about behind somebody putting this content up? I think young people should be taught critical thinking. I think old people like me should be taught critical thinking about everything we do. That's the scientific method, and that's what I think will help us deal with some of these problems. Again, yes, thank you, let's do that; K through 12, it can't start too early. So with regard to digital literacy, I think some of these issues are exactly the sort of things that you should be conscious of, and that we should be teaching our children to be conscious of as well.

It's pretty clear by now that there's a lot of bad stuff that happens on the net. It's the consequence of creating this big platform on top of which you can do almost anything you want to. It's a neutral platform, and it doesn't necessarily protect against the things that some people choose to do, whether it's generating and distributing malware or spam, or committing fraud, or bullying people, and all the other bad things that people do.
And I'm sorry, that's the human condition. That's what Shakespeare teaches us. So we have to cope with that, just as we cope with bad behavior in other contexts. One way to do that is what I'm now calling traceability by design. Without going too deep into this, I would like it to be the case that while people should be able to appear anonymous to most of the general public, it should be possible to find out who somebody is in the event that they're doing something harmful. This is not too different from the license plates on your cars. For the most part, we don't know who is attached to a license plate, but we empower certain parties, the police department, the department of motor vehicles, to make the connection between the license plate and the person who owns the car, under the condition that there is a traffic violation or an accident or some other kind of misbehavior. So this idea of differential traceability, I think, is worthy of thinking about in the Internet context.

With regard to malware and buggy software, I am embarrassed to tell you that I used to make my living writing software, and I found that it was almost impossible to write software that didn't have bugs. It's embarrassing to think that for 80 years we've been writing software for various machines, not me personally, but the profession, although that would be kind of a cool claim to make, wouldn't it? The problem is that we haven't figured out how to avoid the bugs. So that means that not only do we have to find a way to fix the bugs, but if we buy equipment, IoT devices, the Internet of Things, the company that sells them should feel a moral and ethical obligation to support the software and fix the bugs for the lifetime of the equipment. And if they don't do that, it feels like they're violating our trust and confidence. So I think we have to build an ethic around a lot of this stuff, especially if the mistakes that the software induces could be fatal in some cases, as in a self-driving car, or could be just damn annoying, like your kitchen burning up because the oven is misbehaving. So I think we need to imbue our companies with a sense of responsibility there. And that is, of course, the whole point about ethics.

So I'm going to... I still have some time, I think. I want to switch gears now to tell you about a project that was started in 1998 at the Jet Propulsion Laboratory and has now expanded to include not only other NASA laboratories, but also the other space-faring nations in Europe and Japan and elsewhere. We started out convening just after Pathfinder had landed on Mars. We were pretty excited about that. It was 1997. There was a little rover that was able to move around after it got off the landing platform, and it was the first successful mission to Mars in some 20 years, because the previous successful mission was the Viking mission, which landed two landers in 1976. And then there were a whole series of failures, U.S., Russian, and others; trying to get to Mars was really hard. So we got all excited about the fact that we had finally landed something on Mars after that 20-year hiatus. But we realized at that time that the mechanism for communicating with the Mars rover, both for commanding it and for getting data back, was a point-to-point radio link. And in fact, the equipment that's right here in Canberra had a very critical role to play in that, because the Deep Space Network was that link.
Each of the three nodes, here in Canberra, at Goldstone, California, and in Madrid, Spain, was our link to deep space. But we thought, what would happen if we could build an Internet in space to link the planets together, to link the spacecraft together, to support both manned and robotic space exploration? That was the beginning of our conversation. We were motivated originally by the Sojourner Pathfinder mission in '97. We began detailed work on the design, and we started out thinking, you know, TCP/IP works okay on Earth; maybe it would work okay on Mars. And so we began thinking maybe it would be a trivial thing to make the extension. We discovered very quickly that that was wrong, and there are two reasons for that.

The first one is that the speed of light is too slow. The distance between Earth and Mars is 35 million miles when we're closest together in our respective orbits, and 235 million miles when we're farthest apart. It takes between three and a half minutes and 20 minutes, one way, to transmit a signal between the two planets, and of course a comparable amount of time to come back. The TCP protocols use a very simple flow control scheme: when the receiving device is out of room, it sends a signal to the other guy saying, don't send any more, I'm out of room. Well, if it's a few hundred milliseconds before the other party hears that, which is the case in the Earth-bound Internet, it works fine. But if it's going to be 20 minutes before the other guy hears you say stop, they're going to send stuff at you full blast, and it'll fall on the floor and the packets will all blow away. So we said, that's a problem.
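A rough sketch of why that flow-control logic breaks at interplanetary distances: with a 20-minute one-way delay, an enormous amount of data is already in flight before a stop signal can possibly arrive. The link rate below is just an illustrative assumption, not an actual deep-space data rate.

```python
# Why "send me a stop signal when you're full" fails at Mars distances.
one_way_delay_s = 20 * 60            # ~20 minutes at maximum Earth-Mars range
link_rate_bps = 1_000_000            # illustrative 1 Mb/s link (assumed)

# Data already in flight by the time a "stop, I'm out of room" message arrives:
in_flight_bits = link_rate_bps * 2 * one_way_delay_s   # one full round trip
print(in_flight_bits / 8 / 1e6, "MB sent before the sender can react")   # 300 MB

# Compare the classic 64 KB TCP receive window: the sender would stall after
# about half a second of sending, then wait a 40-minute round trip to continue.
window_bits = 64 * 1024 * 8
print(window_bits / link_rate_bps, "seconds of sending per 2400-second round trip")
```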
And then we ran into the other problem: the planets are rotating, and we don't know how to stop that. So if you're talking to something on the surface of the planet and the planet rotates, eventually you can't talk to it until it comes back around again. We have the same problem with the orbiters. So we said, okay, look, we have a variably delayed and disrupted system, so we have to design a protocol suite that takes that into account. Indeed, we've done that, and by 2004 we had a prototype new set of protocols for interplanetary communication.

Then Spirit and Opportunity landed on Mars in January 2004. They were originally planned to operate for 90 days. It is 14 years later; one of them isn't working anymore, but the other one is still there. So they have had a very long life. But we ran into a problem. They were going to transmit data from the surface of Mars directly back to Earth, to the Deep Space Network, at a blazing speed of 28.5 kilobits a second. Scientists like Professor Schmidt were not very happy about those numbers, and then the radios overheated. The engineers said, well, we're going to have to back off on the duty cycle here to keep the radios from harming themselves or the other equipment. Now the scientists were even more upset. And then one of the JPL guys said, wait a minute, there's an X-band radio on the rovers, and there are X-band radios on the orbiters that we sent earlier to map the surface of Mars and figure out where the rovers should go. The orbiters had finished that job, so we reprogrammed the rovers and the orbiters to form a packet-switched, store-and-forward network using the prototype interplanetary protocols. So since 2004, all the data that's come back from Mars has come back through the prototype interplanetary system. When Curiosity landed on Mars, it was equipped with the store-and-forward interplanetary protocols as well.

So again, I don't have time to go through all the details, and I've already mentioned a few of these things, but there's one other interesting little factoid. Everybody here who uses the Internet implicitly uses something called the domain name system. The way that works is that the domain name in the URL that you use is looked up in the system, and what comes back is an Internet protocol address. That's the thing you actually open a TCP connection to. Now imagine for a moment that you're on Mars and you want to make reference to a domain name, because you're going to use a URL to look up something on the web. So you send the packet to Earth, and it takes 20 minutes to get there, and the lookup happens, and then 20 minutes later an answer comes back. The problem is, what if the answer refers to a mobile device which has moved in that 20- or 40-minute period? You got the wrong answer. So we realized that we were going to have to do something different. That's why the DTN, the delay- and disruption-tolerant networking protocols, have very different features from the original Internet design. Some of them involve delayed name resolution, which just means that if you're sending something to another planet, you figure out which planet you're going to first, get there, and then find out where the device is that you're trying to reach, rather than trying to package all of that together into one action, and so on.

There is one other really interesting thing, about network management. One of the favorite tools of network managers of the Internet on planet Earth is something called ping: you just send a packet to the other guy saying, send me something back so I know you're there. If you get something back in a few hundred milliseconds, you can kind of rely on the fact that whoever you were talking to is probably still there. But imagine a multi-hop, delay- and disruption-tolerant network where it's not exactly clear when anything is going to come back, or even whether anything is going to come back. So ping isn't your friend anymore, because it's not a real-time environment. We've had to redesign and rethink network management to take into account that the notion of "at the same time" doesn't actually work because of the distances that are involved. And the other thing we added to the interplanetary protocols is strong cryptographic authentication and encryption, because I told the team the last thing I want is a headline that says, 15-year-old takes over Mars net. No, we don't want that.

So I mentioned earlier that we put the prototypes up in 2004. We've been using the CCSDS file delivery protocol. CCSDS, by the way, is the Consultative Committee for Space Data Systems. There is such a thing, an international body in which all the space-faring nations participate, and that's where we go to standardize interplanetary communications. They have now standardized the delay- and disruption-tolerant Bundle Protocol. It's available on GitHub, which is an open-source platform that was just acquired by Microsoft, so anyone who wants to use these protocols is free to do so. And we're hoping that the space-faring nations will take advantage of the fact that we've now tested these systems and made them available free of charge.
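To give a flavor of what "store and forward" means here, a purely illustrative sketch follows; the class and method names are made up for illustration and are not the actual Bundle Protocol implementation on GitHub. The point is that a node takes custody of data and holds it until a contact window to the next hop opens, rather than assuming an end-to-end path exists right now.

```python
# Illustrative store-and-forward queue in the spirit of DTN (not the real API).
from collections import deque

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.stored = deque()        # bundles held in local storage

    def accept(self, bundle):
        # Take custody of the bundle and hold it; no end-to-end path is assumed.
        self.stored.append(bundle)

    def contact_opened(self, next_hop):
        # A contact window opened (e.g., an orbiter rising over the horizon):
        # forward everything we have been holding, then keep waiting.
        while self.stored:
            next_hop.accept(self.stored.popleft())

rover = DtnNode("rover")
orbiter = DtnNode("orbiter")
rover.accept(b"image-001")
rover.accept(b"image-002")
# ...hours later, the orbiter passes overhead and the contact opens:
rover.contact_opened(orbiter)
print(len(orbiter.stored))    # 2 bundles now en route toward Earth
```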
You've already seen some of these pictures. One of the other landers is the Phoenix lander. Is this thing actually working? Yeah. The Phoenix lander landed near the north pole of Mars, and there wasn't any geometry that let it reach the Deep Space Network directly from there, so we used the relay system again to communicate with it.

There has been a whole series of refinements and tests of the DTN protocols since the time we started. One of the most interesting was to take the EPOXI spacecraft, which has visited two comets; they let us upload new software to it so we could test the DTN protocols from 81 light-seconds away, just to confirm that they would work properly in a space environment. We've been using the International Space Station to test these protocols as well, and one of the more interesting cases was a real-time case. Remember, we designed this to handle really long delays, but if you're close enough, the protocols work very well in an interactive mode. So we had a small rover in Heidelberg being controlled in real time by one of the astronauts on the International Space Station; he was steering the thing around. That doesn't work for the rovers on Mars when you're on Earth: because of the round-trip time delay, the rover goes over the cliff, and 20 minutes later you find out your $6 million spacecraft has gone away. Not a good idea. So we have done those tests, and by the way, we did them with the European Space Agency, so this again is a very international activity.

We've done one other thing which really excited us. We tested the protocols with optical laser communication, so we're up in the 600 megabit per second range, as opposed to, remember, the 28.5 kilobits a second in one of the earlier efforts with Mars. This was a test just to the moon and back, but it showed that the protocols are capable of operating successfully over this really big span of data rates and distances. There has been continued work with the Japanese space agency, JAXA, and with NASA on testing different implementations of the same protocol suite. Again, very important. We're also standardizing these protocols in the Internet Engineering Task Force for commercial applications, if anybody's interested in using them.

So at this stage of the game... oh, we have another project that's really fun. From 2008 to 2011, and now again in 2018, Luleå University of Technology in Sweden has been using the protocols to monitor reindeer that are managed by the Sami people in the northern part of Sweden. We have reindeer outfitted with radio transceivers, and because they kind of wander around at random, the delay and disruption tolerance of the protocols makes it possible to move data around very comfortably even when the connectivity is missing: a node just holds on to the data until a connection comes up, and then the data moves on from there. We even have a version that runs on Android mobile phones, if anybody's interested in that. Most recently, NASA commissioned a 90-day study to plan the full deployment of these protocols on all the new spacecraft coming out in the 2020s, and we've made that study available to everyone who is interested. One thing I will say, and by the way, this is the next-to-last slide for anybody who's keeping track, is that we're not hoping to just build this giant interplanetary backbone and then hope somebody will show up. That's not the plan.
The idea is to make the protocols available for all the scientific missions that are being launched, and when those spacecraft have completed their primary missions, they can be repurposed to be nodes of an interplanetary backbone. So you can imagine, over a period of decades, growing an interplanetary network to support both manned and robotic space exploration. That's our current plan for the remainder of the 21st century.

Not the end of the story, however. There's one more part. Many of us would like very much to get a spacecraft on its way to Alpha Centauri. That's about 4.3 light-years away. And in order to do this in 100 years' time, we're going to have to build propulsion systems that can get us up to something like 20% of the speed of light. And, oh, by the way, we also have to slow down before we get there; otherwise we'll get just one picture as we fly past, which is kind of an expensive photo. Ion engines may turn out to be suitable propulsion; ion engines with significant amounts of thrust are being built. There was, Brian, you probably know about this, a report of a very peculiar microwave engine, a closed device, conical in shape, and the claim was that by firing microwaves inside this enclosed box, it somehow would move, which makes you kind of wonder about basic physics. Everybody was all excited about this, but recently further testing has suggested that it doesn't actually work. So it may be that the ion engines are our best choice.

There are two other problems associated with going to Alpha Centauri. One of them has to do with navigation, because at some point in such a mission you will almost certainly need to tell the spacecraft to change its course, to make a course correction. It's very typical that we do this in interplanetary missions, where partway through we'll send a signal telling the spacecraft to fire the rocket for a certain amount of time in a certain direction in order to correct the course. But imagine that you have a spacecraft that's a light-year away. It takes a year to send it a signal, and then it takes a year to find out what happened. So it's not very interactive. We're going to have to make this system navigate autonomously. There is a recent report that pulsars in our galaxy may turn out to be regular enough that we could use them like a galactic GPS. So we're redefining GPS as galactic instead of geo. The pulsars may actually allow the spacecraft to navigate successfully, because over the course of 100 years those pulsars are not going to move very far from their positions, so they may be adequate for the navigation.

The other problem we have to deal with is communication. The question is, how do we generate a signal that can be detected from four light-years away? That's my problem. One thing I'm attracted to is the possibility of using a pulsed laser, where you take maybe 100 watts of power and compress the signal down into 10 to the minus 15 seconds, a typical femtosecond laser pulse, which should give you a fairly big spike. However, even with a collimated laser, the beam is going to spread over four light-years, so now we have a very attenuated signal coming back to the solar system.
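Some of these numbers are easy to sketch: the cruise time at 20% of the speed of light, the peak power if the 100 watts is read as roughly 100 joules delivered in a single femtosecond pulse (an assumption), and how far a diffraction-limited beam spreads over 4.3 light-years under purely illustrative assumptions of a 1-micron wavelength and a 1-meter transmit aperture.

```python
# Rough numbers for the Alpha Centauri discussion (illustrative assumptions).
LY = 9.4607e15                        # one light-year in meters

distance_ly = 4.3
print(distance_ly / 0.2, "years of cruise time at 0.2 c")     # ~21.5 years

# Peak power if ~100 joules is compressed into one femtosecond pulse.
print(100 / 1e-15, "watts peak power")                         # 1e17 W

# Diffraction-limited beam spread: divergence ~ wavelength / aperture
# (assumed: 1 micron light, 1 meter transmit aperture).
divergence_rad = 1e-6 / 1.0
spot_diameter_m = divergence_rad * distance_ly * LY
print(f"{spot_diameter_m / 1e9:.0f} million km beam diameter at Alpha Centauri")
```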
One possible solution to this problem is to put up synthetic aperture receivers in the interplanetary backbone, and now you know why I need the interplanetary backbone, to reintegrate the signal coming from Alpha Centauri. That's one possibility, but there's another one. And that's the possibility of using the sun's gravitational field to take the signal and focus it to a focal point, which is about 550 astronomical units away; that's roughly the focal distance of the sun's gravitational lens. So all we have to do is send a spacecraft about 55 billion miles away, to the focal region of the sun, and have it oriented in the right place so that we can pick up the signal from the Alpha Centauri probe. Now, we've never sent anything that far away from Earth, but today the people at the Deep Space Network here at the Canberra location told me that they're still communicating with Voyagers 1 and 2, which are about 13 billion miles away, and it's taken them some 40-odd years to get there. So we may actually be able to send a spacecraft to 550 AU in a lot less than 100 years in order to test some of these ideas.

So that's the up-to-date status on the interstellar communication system, which I won't be around to see, but it's so damn much fun just being at the front end of this, trying to figure out your way through it. So I'll finish there. I thank you very much for your attention, and I'm happy to engage in some discussion. Thank you.