My next speaker doesn't, I don't think, need an introduction, but I first heard of him in high school. I was reading a book called Where Wizards Stay Up Late: The Origins of the Internet. If you've not read it, it's a fantastic overview of the early days of the internet, how things came together at various institutions to build what most of us run our careers on today. Everything from how this conference was organized, to what we did through the pandemic, to how I get cat litter through the mail is organized and coordinated over TCP/IP. I don't know if Dr. Cerf had that in mind when he did it, but that was interesting. So yeah, we're super excited. The other interesting piece here is that Dr. Cerf did this while he was at UCLA. And for those of you that don't know, when SCALE started 20 years ago, that was done by students at UCLA and USC and UCSB and CSUN and a few other local schools, when we were all but wee children in our freshman and sophomore years of college, thinking that running conferences would be easy. One thing I can tell you after 20 years: the only thing I have learned is that it is not easy. But yes, we're super excited to have Dr. Cerf, and one of the things that has been a pleasure for me over the last 20 years is getting to meet my internet and CS heroes as we bring them on stage here for keynotes and other activities. And so without further ado, I give you Dr. Cerf, who is joining us all the way from Virginia. We appreciate him making the time to join us. So thank you, Dr. Cerf. Well, thank you all very much. You know, when people clap before you've said anything, my first reaction is to sit down, because it won't get any better than that. Now, you all clearly understand the tactic that's been employed here. You wait until the last session of four grueling, fantastic days. I looked at the list of sessions.
Holy moly, you guys have got an enormous amount of useful information. And I'm sure in the corridors in between, lots of other very useful ideas have been flying back and forth. So what do you do on the last day? You get the talking dinosaur to come out. And it doesn't matter what the dinosaur says; the fact that he can still talk is amazing. So I have been given a few suggestions, though, about things to talk about. And I understand that when you get to be my age, you tell anecdotes. And so we're gonna go looking to the past and look at a few anecdotes. And I see more than a few gray hairs in the audience, which means, yeah, I don't have any left, but my beard is doing okay. So for some of you, this will be, I hope, a kind of fun reminiscence of things past that have taken us to where we are today. And where we are today is a big challenge. And I'll try to close my talk with some thoughts about things that we might collectively do to meet some of those challenges. And what I find most appealing, I think, about you in particular is that you've been doing this conference now for 20 years, and there is this fellow feeling, this collaborative sense of responsibility, that you bring to the table that I wish everyone who writes software would bring to the table. So we're gonna come to that in the last few thoughts of this talk. But let me start out by jumping into the time machine for a second and taking you back to 1969 at UCLA, not very far away. There were four nodes in operation by the end of December 1969 at these four locations, which were selected by the Defense Advanced Research Projects Agency for particular reasons. UCLA was selected as the first node because Len Kleinrock, who is still a professor at UCLA, ran the Network Measurement Center.
And my job as a graduate student there was to write software for the Sigma-7 machine to measure the performance of the ARPANET and compare it with the queuing theoretic models that Len Kleinrock's students were developing, so we could compare the predictions from the queuing theory models with what actually came out of the measurements. And I can tell you that the queuing theoretic models were always beautiful and mathematical and pristine and everything else, but they didn't always predict what actually happened in a real network. And I suspect that every one of you knows exactly why that's the case. So UCLA was the Network Measurement Center. Then SRI International was the second node, and that was because Doug Engelbart was at the Augmentation Research Center at SRI International. His belief, and the belief of J.C.R. Licklider, who was running the Information Processing Techniques Office of DARPA, was that computers could be a way of augmenting our capabilities. And I would say that they were right. Our capabilities today, what we do every single day, are often augmented by the kinds of software that you and other people write to help us do things that we couldn't do on our own, not least of which, for example, is searching the entire World Wide Web. So that was the second node. And then the third one was UCSB, because they were doing some really interesting work on the presentation of complex functions on a screen, so you could see what the computations were producing. And finally, the University of Utah, because they were very much involved in computer graphics at the time. Now, computer graphics in 1969 were not like what they are today. But one of the guys at Utah invented one of the first hidden line removal algorithms. So if you were doing 3D rendering and you wanted to show what the surface looked like, you had to remove the parts that couldn't be seen.
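The kind of comparison Vint describes can be sketched in a few lines. What follows is a hedged toy, not anything from the actual Network Measurement Center software: the classic M/M/1 queuing model predicts a mean time in system of 1/(μ − λ), and a tiny discrete-event simulation with Poisson arrivals matches it, while real traffic (bursty, correlated) generally would not.

```python
import random

def mm1_mean_delay(lam, mu):
    """Predicted mean time in system for an M/M/1 queue: 1 / (mu - lambda)."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    return 1.0 / (mu - lam)

def simulate_fifo(lam, mu, n=200_000, seed=1):
    """Toy single-server FIFO simulation: Poisson arrivals, exponential service."""
    rng = random.Random(seed)
    t = 0.0            # arrival clock
    free_at = 0.0      # time the server next becomes free
    total = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)              # next arrival
        start = max(t, free_at)                # wait if the server is busy
        free_at = start + rng.expovariate(mu)  # service completes
        total += free_at - t                   # this customer's time in system
    return total / n

lam, mu = 8.0, 10.0                 # arrivals/sec, services/sec (made-up rates)
predicted = mm1_mean_delay(lam, mu)
measured = simulate_fifo(lam, mu)
print(predicted)                    # 0.5 seconds predicted
print(measured)                     # close to 0.5 only because arrivals ARE Poisson
```

The model and the simulation agree here precisely because the simulation obeys the model's assumptions; the ARPANET measurements diverged because real traffic does not.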
That was John Warnock, whom some of you will know; he started a famous company called Adobe some years later. So that's 1969, many years ago. And this is a picture that was taken in 1994. It was the 25th anniversary of the ARPANET. On the left you see Jon Postel, who sadly passed away in 1998. In the middle is Steve Crocker, who led the Network Working Group that did all of the host-to-host protocols, the Telnet protocol, FTP, and eventually SMTP for email. So he was the leader of that group, and he was at UCLA, as was Jon Postel. All three of us went to the same high school, Van Nuys High, up in the San Fernando Valley. I don't know, must have been something in the air. But we became good friends at UCLA as graduate students. It took us all day to organize this shoot for Newsweek magazine, because we had to draw those pictures with the backdrop hanging there. Then we had to go find the zucchinis and the yellow squash and the five-pound tins of coffee and string it all together. Now, this is Newsweek magazine, 1994, and we thought we would put in a geek joke for those who understood it. So if you look carefully at this network, you notice that it's mouth-to-mouth and ear-to-ear, but there's no mouth-to-ear. This network would never work. So that was our little geek joke in 1994. But we were celebrating that anniversary of the ARPANET because, without it, we would not have even bothered to do the internet. So now I'm gonna skip very fast forward here, past the original early designs of TCP and then the later TCP/IP. This was a demonstration that I called for after I moved to Virginia in 1976 to run the internet program for the Defense Department. And I wanted very, very much to show that the TCP/IP protocols could really work across different kinds of networks over considerable distances.
So we had a mobile packet radio network in the San Francisco Bay Area, driving up and down 101, radiating packets like crazy through a gateway which had been configured to send those packets all the way across the ARPANET, which by that time extended into Europe over a satellite hop from Etam, West Virginia, to Tanum in Sweden, and then a landline to Norway to the Norwegian Defense Research Establishment, and another landline down to University College London. The packets then popped out of the ARPANET at that point into another gateway that led to the Atlantic Packet Satellite Network, which was based on Intelsat IV-A hanging over the Atlantic, with multiple ground stations all vying for access to one single satellite channel. So it was like having an Ethernet channel in the sky. And then through the packet satellite network to Etam, West Virginia, down to another ground station, back into the ARPANET, all the way across the US again to USC's Information Sciences Institute. So if you do the math, going from the roving vehicle in San Francisco down to Marina del Rey is about 400 miles, but the packets had gone through two synchronous satellite hops back and forth, so it's about 100,000 miles. And I remember when we did it and it worked, I was jumping around in my office saying, it works, it works, like it couldn't possibly have worked. Listen, you know that if software works, it's a miracle. So that was a really important demonstration from my point of view as the program manager for this. Now we'll skip forward. Now we're in the early and mid-1980s. After that three-network demonstration was done, and further standardization was done, we implemented the TCP/IP protocols on every operating system we could get our hands on. In 1982, Jon Postel announced that we were going to switch over from the NCP host-to-host protocols of the ARPANET to the TCP/IP protocols of the internet on January 1, 1983.
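The "do the math" step works out roughly as follows. This is a back-of-the-envelope sketch using an approximate geosynchronous altitude of 22,300 miles; the exact figures depend on ground-station geometry, so the talk's "about 100,000 miles" is an order-of-magnitude claim.

```python
# Rough mileage for that demo path (all figures approximate).
GEO_ALTITUDE_MI = 22_300          # geosynchronous orbit altitude, roughly

terrestrial = 400                  # SF Bay Area to Marina del Rey, about 400 miles
# Two synchronous-satellite hops, each an up-and-down pair:
satellite = 2 * (2 * GEO_ALTITUDE_MI)
total = terrestrial + satellite
print(total)                       # ~89,600 miles, plus landlines -- "about 100,000"
```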
So this is January of 82. We make this announcement and everybody sort of grumbles a bit: everything seems to be working okay with the host-to-host protocols and Telnet and so on, so why do we have to do that? And I say, well, you know, if you don't do that, I won't fund your program next year. Oh, okay, I get that. So I had Dan Lynch, whom some of you know as the founder of Interop, measuring how many implementations of TCP/IP he could detect on the ARPANET. And he would report to me once every couple of weeks, and I could see that the curve was going up, and then somewhere around the summertime it flattens out. So, you know, I'm a bureaucrat at this point, so what do you do? Well, you clearly create incentives. So I called the Defense Communications Agency, it's now called DISA, and I said, will you shut off the capability of the ARPANET to carry anything except TCP/IP? They could do that; it turns out there was a way to do that. And I said, turn it off for a day. So they did. Of course, the phones ring off the hook. What's the matter with you? You blankety, blankety, blank. Can't get any email, file transfers don't work. And I said, I just want you to know I can do that. So, you know, oh, okay. So the curve starts going up again. Then somewhere around late summer, October, it flattens out again. So I called DCA: turn it off for two days. Phones ring off the hook. Everybody made it on January 1st, 1983, except for two guys, and they pleaded for some kind of mercy. And we said, okay, we'll give you another month or two. So everybody made it. Now, how many everybodys were there? There were only 400 computers on the internet at that point. 400, as opposed to whatever the numbers are today; if you include the IoT devices and the mobiles, we're into the multiple billions. So it's now 1983.
NSF is starting to invest in internet technology for hooking computer science departments up to the ARPANET and then to the NSFNET backbone. And they quickly ran out of gas; they essentially sent out an RFP asking for a higher-speed network. Now, remember, the backbone speeds of the ARPANET were 50 kilobits a second. Okay, that was broadband in 1969. So NSFNET launches at those speeds and basically runs out of gas. It switches to 1.5 megabits, and that lasts for a little while, and then it switches to higher speeds, and we just kept going. Eventually we end up in the multi-gigabit range. So NSF makes this big investment. They build a backbone network. They build about a dozen intermediate-level networks to serve about 3,000 universities, the purpose of which is to help those universities get access to the five supercomputer centers that NSF and the Department of Energy were investing in. So the NSFNET comes up, and the supercomputer centers come up, and not long thereafter, NASA and the Department of Energy say, this looks like a good thing. So DOE builds the ESnet backbone, and NASA builds the NSI, or NASA Science Internet, backbone. So during the mid-to-late 1980s, we're starting to see substantial implementation. Of course, if we go to look at the internet today... now, some of you will know I'm cheating, right? Because this picture was actually generated from BGP backbone data back in 1999. But it was so colorful that I thought I would keep it, because it lets me illustrate something that you know: the internet is very dispersed, and there are literally tens of thousands, if not more, of networks running. Each operator picks the hardware and software to run, decides who to connect to and on what terms and conditions. There were no central decisions about your business model or who you connected with. The only thing we asked of everybody is: please run the same protocols, so that you can interoperate.
And Bob Kahn and I wanted to encourage people to build pieces of internet, find someone to connect to, and let the system grow. So that's what it looks like today, except bigger and more colorful. It is true that there is only one centralized element, and that is ICANN, trying to coordinate IP address allocation and domain name assignment so that they're unique. And that's pretty important. But ICANN doesn't dictate what is done with those things. It just tries to make sure that if you want a domain name, it's assigned uniquely to one organization, or an IP address block to an autonomous system. So this is just my little memory of milestones. I'm sure that every one of you who's been engaged in this game has your own favorite milestones. So by no means should you take this as the definitive list of important milestones. They're just the ones that I happen to remember. One very important one comes after the internet goes operational in 83, and Cisco Systems figures out that they can make money by selling routers to universities that want to get hooked up to the net and put local area networks in place. Oh, by the way, I did leave out something important. 1973: Bob Kahn comes to visit me in my office at Stanford and says, we have a problem. And I said, what do you mean, "we"? And he says, well, the ARPANET worked, and now the Defense Department wants to see if it can use computers in command and control. But he right away realizes that if you want to do that, the computers are gonna be in mobile vehicles and ships at sea and airplanes. And the ARPANET was built out of dedicated telephone lines connecting everything together. Well, you can't connect the ships together that way because they get all tangled up, the tanks run over the wires and they break, and the airplanes never make it off the ground. So he was already starting to work on the packet radio net and the packet satellite net at the time that we met. So we started working on the TCP design.
A mile or two away, Bob Metcalfe and David Boggs are busy inventing Ethernet at Xerox PARC. So that was the fourth packet switching technology that was born during this early 1970s period. So Cisco is the first to start commercializing this stuff. The way we used to build routers was to find a computer and a graduate student, wrap the graduate student around the computer, and turn it into a router. The problem with that is we were running out of graduate students. So Cisco figures that out. Also, in 84, it becomes very clear that this thing is scaling up and that we can't keep sending the HOSTS.TXT file around to map the domain names into IP addresses. We needed something that was more scalable. So Paul Mockapetris and Jon Postel invented DNS, and it evolves over time. But it certainly has scaled dramatically. It's a pretty brilliant piece of work. So around 1988, Dan Lynch is doing Interop, up in the San Francisco Bay Area. And I walk in in 1988. He started it in 86 as this small thing; it was mostly lectures. But then it became an exhibition. Oh, and the deal was: you couldn't exhibit unless you showed that you could interwork with everybody else. So they bring out the show net, this big fat yellow coaxial cable. You all had to plug into it and then show that you could talk to everybody else at the show. So Eric Benhamou, who was then the CEO of 3Com, the company that Bob Metcalfe started to sell Ethernet, and I walked into the show, and the first thing we encounter is a two-story Cisco display. And I turned to Eric and I said, Eric, how much do those displays cost? And he says, about $250,000. This is 1988. And I'm sort of standing there saying, ah, that's a lot of money. And he said, that doesn't count the people who had to stay there and man the booth for a week. And so I'm just standing there, my jaw dropping, thinking: somebody thinks they can make money out of the internet. That's amazing.
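The reason DNS scaled where a single HOSTS.TXT file could not is delegation: no one holds the whole map, each zone only knows its own children. Here is a toy illustration of that idea, not real DNS; the zone contents and the address are made up, and real resolution involves record types, caching, and UDP queries.

```python
# A toy picture of DNS-style delegation: resolving "cs.ucla.edu" by walking
# the name right-to-left, one referral per label. All data here is invented.
ZONES = {
    ".":           {"edu": "ns.edu"},         # root only knows the TLD servers
    "ns.edu":      {"ucla": "ns.ucla.edu"},   # .edu only knows its delegations
    "ns.ucla.edu": {"cs": "10.1.2.3"},        # the leaf zone holds the address
}

def resolve(name, server="."):
    """Follow one referral per label until the final label yields an address."""
    labels = name.split(".")                  # e.g. ["cs", "ucla", "edu"]
    for label in reversed(labels):
        answer = ZONES[server][label]
        if label == labels[0]:                # leftmost label: this is the answer
            return answer
        server = answer                       # otherwise it's a referral onward
    return None

print(resolve("cs.ucla.edu"))                 # -> 10.1.2.3
```

Because each zone can grow independently, adding a host only touches one small table, instead of requiring every site on the network to fetch a new global file.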
So at this point, I'm starting to wonder: how are we gonna let the rest of the general population get access to this thing? Because up until that time, all of the networks were funded by the government agencies: NASA, DISA, NSF, DOE, and DARPA. So there was an acceptable use policy that said no commercial traffic will flow on a government-sponsored backbone. And you could imagine a rationale for that: they didn't want government resources to support commercial activity. But it became increasingly clear that even the people who were doing research under government grants actually needed access to commercial services that could be reached on the internet, if it were permitted. So at this point, I'm working with Bob Kahn at his company, the Corporation for National Research Initiatives. And I had taken a little break from the internet, from roughly 82 to 86, to build something called MCI Mail, which is a commercial email service. So in 88, I'm sitting here thinking: okay, what could I say to the federal government that would let me break the acceptable use policy without really appearing to want to do that? So I called the Federal Networking Council, which at the time was mostly NASA, DARPA, NSF, and DOE; their program managers were the people who managed internet policy. So I called them and I said, would it be okay if I tried to connect the MCI Mail system to the internet, to see if we could get the email stuff to interwork? And to my surprise, they said, well, okay, for a year, as just an experiment. So by summer or so of 1989, we announced we have a gateway between the MCI Mail system and the internet. And as soon as we make that announcement, all of the other commercial email service providers, which are islands unto themselves, where you can't talk to anybody unless you both have an account, so things like Telemail and OnTyme and CompuServe...
Yes, all of those say, wait a minute, these MCI guys can't have this special treatment. We want in too. And so the government says, okay. So they all get hooked up, and two things happen. The first thing that happens is they all discover that all of their customers who used to be trapped in a walled garden can talk to all the other customers of their competitors, plus everybody on the internet, because they were all compatible through the internet mail protocols. And that was a little surprise. And then later, email becomes almost free; so much for that business model. But the second thing that happens, about the same time, maybe a little later in the year: three commercial internet services pop up, because we've just broken the AUP limitation. So UUNET in Virginia, and PSINet in Virginia, and CERFnet in San Diego all get started in 1989. Okay, this is anecdote number whatever, 17. CERFnet used to be spelled S-U-R-F-net. And of course you'd do that: you're in San Diego, what else would you do? So they had a whole campaign all laid out, T-shirts, "SURF the internet," all this stuff. And then, a couple of weeks before they actually launch, somebody discovers that there's an organization in the Netherlands called SURFnet, which is a Dutch acronym, and they are building a network to connect the universities in the Netherlands. So they can't call themselves S-U-R-F-net. So Susan Estrada was the executive director at the time, and somebody says, why don't we change our name to the California Education and Research Federation Network? Because, you know, it sounds the same. And then somebody says, maybe we should call Vint. So they call me up and they say, can we call it CERFnet? And my first reaction was, you know, if they screw this up, am I gonna be embarrassed? And then I thought about it some more: wait a minute now, people name their kids after other people, and if the kids don't come out right, they don't blame the people they named them after.
So I said, sure. So I flew out to California in 1989, and Susan and I had one of those plastic bottles full of glitter, and we smashed it on a Cisco router, and we launched CERFnet. So by that time, we're starting to see real commercial services pop up. Rick Adams was the founder of UUNET, and so we're talking 1989. In 1997 he sold the company for $2 billion to Metropolitan Fiber Systems, which on the same day was acquired by WorldCom for $14 billion. So he made out okay; that worked, that was good. So, just picking a few more of these things: the big deal after commercialization, of course, is when Tim Berners-Lee announces the World Wide Web, which is late December 1990 as I recall, and I don't think too many people actually noticed. He was doing it on a NeXT machine, which was very cool, at CERN. But not too many people noticed, except for these two guys, Marc Andreessen and Eric Bina at the National Center for Supercomputing Applications in Urbana-Champaign. They look at the text-based interface and they say, boy, it would be really cool if we could make a more graphical interface, wouldn't it? So they do Mosaic, which comes out around 1993. Everybody notices, because suddenly the internet looks like a magazine, with formatted text and imagery and eventually streaming audio and video. So that was a big deal, and Jim Clark, the founder of Silicon Graphics, takes one look at the Mosaic browser and says, this is a big deal. Remember, he started Silicon Graphics, which had turned out to be based on another chip that ARPA funded, called the Geometry Engine. So he flies out, and he takes Marc Andreessen and Eric Bina and maybe a few other people back to the West Coast to start Netscape Communications in 1994. And by that time, I had left Bob Kahn's organization to rejoin MCI, to put them in the internet business. And the first thing they wanted to do was build the MCI Mall, okay? So I fly out to Netscape and buy $7 million worth of licenses for Netscape's browser and server.
And the first thing I asked them to do is to figure out how to avoid having my servers filled with partial transactions that won't ever get cleared away, because I won't know when to get rid of them. So please store the partial transactions on the user's computer. And they went away and came back with cookies. So if you're wondering where cookies came from, you can blame Marc. No, don't blame me. I don't know. So then, of course, they go public in 1995. The stock goes through the roof, and the dot-boom is on. The venture capitalists in San Francisco were throwing money at anything that looked like it might have something to do with the internet. And this goes on for a while. In 1998, in the midst of the dot-boom, Google gets started by Sergey Brin and Larry Page. Yahoo got started a little bit before that, to help people surf the internet. The interesting thing about the arrival of the World Wide Web is that it triggered an avalanche of content that flowed into the net. It was so interesting: people were not looking for money for the content. They just wanted to know it was useful for somebody else, kind of like what you do. So this avalanche pours in, and pretty soon nobody can find anything because there's so much of it. So they need a search engine. Some of you will remember AltaVista, which came out of Digital Equipment Corporation's research labs on the West Coast. And then Yahoo comes along, which was kind of more manual, I think, than some of the others were. And then Google, of course, with its clever strategy called PageRank, which was very successful. It didn't have a business model to start with, by the way. There was no business model. But not very long after it got started, and after they brought Eric Schmidt in as the CEO, sort of adult supervision, this three-way business model evolved, which was quite successful. ICANN gets started in 1998.
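The core of PageRank fits in a short power-iteration loop. This is a minimal sketch of the published idea on an invented three-page toy graph; Google's production system is, of course, vastly more elaborate, and this version ignores details like dangling pages.

```python
# A minimal PageRank sketch: each page repeatedly splits its rank among the
# pages it links to, with a damping factor modeling a "random surfer."
def pagerank(links, damping=0.85, iters=100):
    """links: {page: [pages it links to]}. Returns a rank score per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}        # start with uniform rank
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)        # split p's rank among its links
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

toy = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}  # invented link structure
ranks = pagerank(toy)
print(max(ranks, key=ranks.get))                 # "b": linked by both a and c
```

The ranks converge to the stationary distribution of the random-surfer walk, which is why heavily-linked pages end up on top regardless of where the iteration starts.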
And the original idea was that Jon Postel would be the chief technology officer for ICANN, and it would manage the domain name system and IP address allocations through the regional internet registries. As I say, Jon passed away in October of 1998. But ICANN was really needed, and so they proceeded. Now, one thing I have not done is tell you about the 1978-to-1993 protocol wars between the Open Systems Interconnection model and TCP/IP, to say nothing of X.25 and X.75 and X.29 and so on. That would take too long, so I won't do that. But a lot of people imagine that this was just smooth sailing, that it was just step-by-step and everything was all planned out. And again, nope: it was Sturm und Drang for many, many years, and it's still Sturm und Drang today. So, just pushing a little further in time here, a few other things. The dot-bust happens in 2000, around April. Big lesson there: a lot of the startup CEOs apparently didn't understand the difference between revenue and capital. Economics 101 says you have a finite amount of capital to get your revenue engine going. And if you don't get the revenue engine going, you will run out of capital, and then what? So, lots of dead bodies of startups in 2000. But the internet kept going; the demand for that capability was still very strong. So YouTube gets started in 2005. Amazon Web Services comes up in 2006. The iPhone shows up. I want to emphasize the iPhone for a moment, because all of you, I'm sure, know what a transforming event that was. But now, this is anecdote number 17 or 18, I guess. Some of you may know that the mobile phone was invented by a guy named Marty Cooper. He was working for Motorola at the time. What you might not know is that he started working on it in 1973, which is when Bob and I started working on the internet and Metcalfe started working on Ethernet. Something was going on in 73; I don't know what it was, whatever we were drinking. So anyway, this thing gets started in the same year, 73.
And it gets turned on in 1983, the same year that the internet gets turned on. So, Danny Cohen, another name you might know, was very influential in the internet's splitting of TCP and IP, because of real-time operation; he was involved in packet speech, among other things. So Danny calls me up in early 1983 and says, come have lunch, I have something to show you. So I show up, and he's got this thing sitting on the table. It's about this tall, it's got a whip antenna on it, and it weighs about two and a half pounds. And I said, what's that? He says, it's a phone. And I said, well, where are the wires? There aren't any. How does it work? So we talked about it for a while, and he says, I don't know the answer to that. Why don't you call the guy that invented it? So I called Marty Cooper on a Motorola brick, which is what we called it then. And the first question I asked Marty was, how long does the battery last? And he says, about 20 minutes. But you can't hold the phone up longer than that anyway. So Marty, bless his heart, presses on. He's still around. He's in his 90s now. He's just written a book about the whole story of the invention of the mobile phone. But the iPhone really triggered something, as every one of you knows. Jobs figured out something that none of us realized we wanted: a device that had a camera, access to the internet, access to the telephone system, a touch-sensitive display. I mean, it's got all of these amazing features, all of which existed as technology, but they'd never been put together in such an interesting way. That transforms everything, because suddenly the internet is more accessible: anywhere you can get a mobile signal, you can get to the internet. And of course, the mobile phone gets more useful because it gets access to all the applications that are running on the internet. So the two are mutually reinforcing. It's a really powerful event. So 2007 is a big deal.
In the world that we now inhabit: in 2008 and 2009, several developments came out of the academic world, Ethane, OpenFlow, and NOX. This is basically software-defined networking, which really has transformed the face of building networks today. And in fact, in 2010, Nick McKeown and his colleagues started a company called Nicira to build software-defined networks, and they were acquired by another company. It's very successful. There's lots more; I'm not going to try to repeat the last 10 years of development. You've lived those last 10 years anyway, so you know them as well as I do. But they are pretty astonishing, and you guys are a part of that. So somebody asked me to look back and say, what would you do differently? And so I decided I'd put a little list together. The first one's obvious, right? I would have done IPv6 first instead of IPv4. But it's been damn hard to cause an incompatible introduction of a new protocol at that low level in the architecture. In our own defense, though, we actually did a calculation when we were doing the original design of TCP, to see how much address space we ought to have. And remember, it was an experiment, and we didn't know if it was going to work. So we said, okay, it has to work everywhere in the world, because it's going to be supporting the Defense Department command and control system. So we said, okay, how many networks per country? And we thought, well, how about two, so there'd be some competition. Then we said, how many countries are there? And there wasn't any Google at the time to ask. So we guessed 128, because that's a power of two. And we did the math: two networks times 128 countries is 256 networks, and that's eight bits. Okay, so we know how many networks we've got to deal with: 256. And then, how many computers per network? How about 16 million? At the time, computers cost millions of dollars and they didn't move anywhere. They were in air-conditioned rooms, and they were hooked together with wires.
But we thought, what the heck, and besides, it rounds out to 32 bits, which is cool. And we thought, that's 4.3 billion terminations, if you could allocate them densely. Of course you never would, but if you could. And that was more than there were people in the world at the time. So we thought that ought to be enough for an experiment. Now, I want you to imagine that you're a young Vint Cerf in 1973, and your future self goes back and whispers in your ear and says, 128 bits of address space. And your younger self says, WTF, that's 3.4 times 10 to the 38th addresses. And you say, yeah, I don't think I can sell that. It doesn't pass the red-face test. Your network has never even been demonstrated, and you're telling me you need 10 to the 38th addresses? So I don't know if I would have gotten away with that. Now, there is a huge mistake that I made with mobility support. It's amazing how you can fool yourself into thinking you've solved a problem when you actually haven't. I remember splitting TCP and IP, and then we had to figure out how the TCP identifiers would work on an end-to-end basis. So we created a pseudo header, sucking the IP addresses up out of the IP layer into the TCP layer, and used that for socket identification. And since I had an operating mobile radio network at the time, I thought we had dealt with mobility, except for one thing: I didn't think about the possibility that your mobile would move from one network to a different network that had a different IP address space. Looking back, I could have put another address space at the TCP layer, so that it would be okay to switch IP out from under the TCP. And those of you who know about QUIC know that the QUIC protocols establish a cryptographic shared variable at the QUIC layer. So if an IP address changes, not both, but if one of them changes, you can reconstruct the connection. If they both change, of course, the two ends can't figure out who to talk to, so that doesn't work.
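The difference between the two designs can be shown with two dictionaries. This is a toy sketch, not a real stack, and all the addresses and the connection ID are made up: TCP keys its state on the (src IP, src port, dst IP, dst port) tuple pulled up into the pseudo header, so a new IP address orphans the session, while a QUIC-style connection ID chosen at the handshake survives the move.

```python
# TCP identifies a connection by its 4-tuple; QUIC keys on a connection ID.
tcp_sessions = {}
quic_sessions = {}

# Connection set up while the laptop is on one network (addresses invented):
old_tuple = ("198.51.100.7", 50000, "203.0.113.9", 443)
conn_id = "c1a2b3"                    # QUIC-style ID negotiated at the handshake
tcp_sessions[old_tuple] = "session state"
quic_sessions[conn_id] = "session state"

# The laptop roams to a different network and gets a new source address:
new_tuple = ("192.0.2.44", 50000, "203.0.113.9", 443)

print(tcp_sessions.get(new_tuple))    # None -- the 4-tuple no longer matches
print(quic_sessions.get(conn_id))     # "session state" -- the ID still matches
```

This is exactly why QUIC's connection migration works when one endpoint's address changes: the identifier lives above the IP layer, just as Vint describes wishing TCP's had.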
Anyway, I thought that the mobile problem had been solved. Clearly it has not been solved. And so that was a big mistake, and I regret it. But at the time I was patting myself on the back for saving bits in the header. So be careful what you congratulate yourself for. The other thing that has occurred to me is that radio has wonderful features, one of which is broadcast. You can transmit in all directions if you want to. And we don't use that in any of our protocols in a serious way. We kind of could do it, especially with synchronous satellites. You can imagine a protocol where a bunch of stuff gets sent to lots and lots of receivers, and the guys that didn't get it could raise their hands and say, please send me another copy. But we never really built any protocols to take advantage of that. Maybe that's something we should think about. Another thing we could have done is put crypto into the system sooner than we did. And a lot of people come and say, you blankety-blank idiot, why didn't you put more crypto in at the beginning? We wouldn't be in such a mess as we are today. I don't actually think that's true. But at the time, in 1976, when Whit Diffie and Marty Hellman published their paper on new directions in cryptography, it was stunning. And I'm sure the guys in the UK were especially stunned, because they'd invented this idea in 1974. But they didn't tell anybody, because they didn't want anybody to know how clever this was. So anyway, the paper gets published, and in the next year or so, in '78 or so, the RSA algorithm to implement this idea gets invented. Now you could say, why the hell didn't you just immediately implement the public key crypto? And my reaction, remember, it's 1977: I'm trying to get the damn thing demonstrated and implemented on as many operating systems as possible. And I looked at the RSA idea and I said, this is retrofittable. I can put this in later. And so I wanted to get the system up and running and demonstrated first.
So we did that. We were actually working, using DES, which is a conventional symmetric key system, to build a cryptographically secure system. And we were demonstrating that with a program called Black-Crypto-Red. I'm sure some of you know, the red side of the net is the sensitive side. The black side is post-crypto. And we stuck DES in the middle. But key distribution is not nearly as nice with a symmetric keying system as it is with a public key system. But anyway, we were clearly working on the crypto support. And of course, the NSA was busily doing some of its development work as well. But I remember thinking at the time, okay, if I were serious about insisting on cryptographic implementations, who are the users of this thing? It's graduate students. And I don't mean any offense, because I was a graduate student once too, but I can't imagine graduate students being really good about key management and all the other things that you have to do. And so it was kind of comfortable not doing that, and not insisting on it too early in the game. And of course, multi-factor authentication would also have been a good thing to have thought about, if we'd had any technology to support it, because even then everybody knew that passwords were a terrible idea. So here we are. We really want to add more security into the system. And so from my point of view, we really need work on BGP. I'm sure all of you know how easy it is to either be hijacked or just make a dumb mistake configuring something wrong. Almost all the really bad stuff that happens, most of the time, is somebody just making a mistake. Sometimes it's not; somebody did it on purpose. RPKI is another thing I would like to see more widely implemented; ditto DNSSEC signing. So those are all things that we should be working on. I also think that more strong authentication in the system, all the way down to identifying a hunk of hardware and being able to authenticate that it is what you think it is, would be very helpful.
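His point about key distribution being "not nearly as nice" with symmetric keys is just counting: every pair of parties needs its own shared secret, while a public-key system needs only one published key per party. A quick illustration (the function names are mine):

```python
def symmetric_keys_needed(n):
    # Symmetric keying (e.g. DES): one shared secret per pair
    # of parties, i.e. n choose 2 = n*(n-1)/2 secrets to manage.
    return n * (n - 1) // 2

def public_keys_needed(n):
    # Public-key system: each party publishes one key, so n suffice.
    return n

# For a thousand graduate students on the early network:
print(symmetric_keys_needed(1000))  # 499500 pairwise secrets
print(public_keys_needed(1000))     # 1000 published keys
```

The pairwise count grows quadratically, which is why pre-distributing symmetric keys to every pair of hosts was never going to scale the way public-key distribution does.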
And the Internet of Things really, really needs something to keep your simple IoT devices from being hijacked. What was it, half a million webcams that were hijacked to do the DDoS attack against Dyn? Which had cascade effects, because Dyn was doing domain name resolution for a lot of very important companies, and they all disappeared off the net when Dyn fell over. Which, by the way, hang on to that thought for just a second, because if you're like me, you may be needing a change of underwear: if you think about how dependent we are on certain key parts of the architecture, you can see that when they don't work, there are cascade failures. If your mobile doesn't work, your battery's dead, you can't get a signal, some other problem, then maybe you can't do two-factor authentication, or you just can't get into your email. So the business deal you were about to close doesn't happen. More than once, even just over this weekend, I had to go through a whole series of login and authentication steps, some of which involve the mobile. Now I've got two devices that I have to make sure work. Everybody in this room knows that the probability of success is multiplicative. So 90% successful times 90% successful is 81% successful, if you rely on both things working. And if there's a third thing, it just gets worse. So we are steering right now into a very fragile future, in my opinion, and I hope those of you who are thinking about software development, architectures, and things like that will really give some thought to making this more robust. This isn't just a question of more security. This is more resilience, more alternatives, like every screen ought to be usable for this authentication if you need it, things like that. Now, some people have criticized the basic assumption that we made that every device on the internet should be able to talk to every other device.
And the reason that we chose that as our principle was that we didn't know which devices were going to need to talk to which devices at the time. And so we had no rationale for inhibiting communications. Now, today we look back and realize that since everything can talk to everything, the bad guys can talk to everything, and they do, and they cause trouble. So it could very well be that we should reconsider how to hide parties from exposure. And there are some suggestions in the sort of clean-sheet designs that are coming to that. I've already mentioned the dependencies. And I'm also very conscious of the fact that I can't even read my own watch. Now, those of you who have good eyes will know that this is a Ronald McDonald watch. And I only have 10 minutes left. I got this for teaching a class in networking at Hamburger University, just outside of Chicago. I am not kidding, there is such a university. They teach people how to run McDonald's, and they were about to network them all together so they could keep track of the sales and the use of resources and everything else. Instead of paying me, they gave me a Ronald McDonald watch to commemorate the thing. So anyway, let me just riff for a second on open source and open standards, because that's what you do. And I tell you, what you do is really important. I still consider it to be one of the central engines of the evolution of the internet. Your willingness to share your code and your thoughts and your ideas and everything else, it's wonderful. Think about what happened with the World Wide Web. There was no class in being a webmaster. But what you could do with the browser was say, show source. So you could see how they made this really cool web page, because you could see the HTML. And so lots and lots of people learned how to be webmasters from each other. And that's what you do. You learn from each other, which is great. It accelerates the pace of development. Open source also gives you an opportunity to find bugs.
Now, there is a little problem with this. Sometimes open source leads people to think: it's open source, so everybody's already found the bugs; it's open source, therefore you don't need to look. Yes, you can stop laughing now. So I do worry about that, and I have pushed very hard, wherever I can, to argue that we need to support people like you more, so that you can help us make more resilient and more stable and safe software. It's really tough, because the curation of this software is really important. Some of these bugs, as some of you know, lie around for 20 years, and they just don't get noticed until they surface at exactly the wrong time. There are lots of supply chain risk factors involved, because the open source software could be anywhere in the stack, and a bug that gets in deliberately, or malware that gets in deliberately, can be really troublesome. So there's a real challenge for us as a community to support open source software in a sustainable way, and I just want you to know I'm a big fan of trying to find ways to do that. We need this stuff, the standards and the open source, for interoperability. Some people say standards inhibit competition, but I don't agree with that. I think that having commonality and interoperability allows for competition on top of the standardized platform. And of course, we all need to remember that we want to adapt our software to whatever new platforms come along, Kubernetes and containers being a very good example, virtual machines and so on. This is a slide which I don't have time to talk about, but I want to pause and ask you to look at it for a second, because these are all problem spots, and they are not the full list; they're just some of the challenges that lie ahead. Some of them are really tricky, because they involve international agreements of some kind, whether it's treaties or norms or something else.
Think for a minute about the world we wish we lived in, one where accountability is enforceable, and agency is given to the people and companies who are using the internet. Accountability and agency. And in order to make that work, you may have to give up some anonymity, because you can't hold anonymous people accountable, and so if they're deliberately doing harmful things, you have to have a way of tracking them down. It's a little bit like license plates. It's not a perfect analogy, but license plates are gobbledygook, except mine, which says Cerf's up. Most of them are just random stuff, but the police department is permitted to penetrate the veil and find out who owns the car. That may not be the person who was driving the car, but they can penetrate that veil, because it's their job to hold people accountable. So I think you should think a little bit about that. The other thing, the last two bullets especially, I want to say straight to you: getting rid of bugs is really important. Making mistakes is easy in the software world. We do dumb things, you know, like buffer overflows, or off-by-one bugs, or, hey, we just read a variable and did a compare and a branch on it, except nobody ever set the variable, so it's a random number that gives you really unpredictable behavior. So there's an ethical component here. Every one of us who writes software has an ethical responsibility to do the best we can to make it safe and secure and reliable. Now, in all fairness, the programming environments that we have are not exactly helpful in that endeavor. I was gonna say they suck, but you know. But this is a challenge, to the academics especially, to figure out how to design and build programming environments that actually alert us to the dumb mistakes that we might make. So I think this is the last slide, final thoughts. And I am happy to do some Q&A if you have any.
So first of all, we have huge challenges to keep the internet open, safe, secure, sustainable, reliable, and connected. And the reason that this is a big challenge is that governments around the world are recognizing that there are problems in the online world, that harms are being committed against corporations and people, and they want to do something about it. Now, some governments are trying to protect the government; other governments are trying to protect the citizens. But it's a big challenge when the governments are trying to enact laws that aren't necessarily implementable from the technical point of view. So we're faced with trying to preserve the value of this connected internet while we're protecting people from harm. I also think that your work in open source is important for digital preservation. I want you to imagine you have a lot of digital stuff and you want it to last, so that your great-grandchildren can have access to it. Well, you know, when you have digitized anything, you need software to help interpret it most of the time, whether it's a spreadsheet or a photograph. If you can't run the code that created the digital object a hundred years from now, then it may not be accessible. So we have a huge challenge in maintaining the accessibility of digital objects over long periods of time. There's something ironic about this. There are clay tablets in cuneiform that were written 4,000 or more years ago. And if you can read cuneiform, which maybe three or four people in the world can, you can still read that clay tablet, because it was a warehouse receipt, and the warehouse burned down, and the tablet was baked, and that gave it longevity. Then there's vellum, sheepskin, calfskin. And that stuff lasts easily a thousand years. And if you keep going forward through recording media, you get to five-and-a-quarter-inch floppies, three-and-a-half-inch floppies, CD-ROMs, and they last for one, two, three, four, five, six, maybe 10, maybe 15 years. How about seven-track tape?
Nine-track tape. So we have a big problem, and that's preserving our digital future. And that is going to mean that old software needs to keep running somehow. Interpreters, emulators, all those sorts of things. So we really need to do that. And the second-to-last bullet talks about accessibility, and by this I mean making software accessible to people with disabilities. I wear hearing aids, and I've worn them for about 65, 70 years, but there are people who have all kinds of vision problems and motor problems and everything else, and they often get the short end of the stick when it comes to accessibility. It's hard to do. It's not as easy as falling off a log. You really have to think about: how am I gonna make this application work for somebody who can't see, or can't hear, or doesn't have fine-grained motor control, or has some other problem? But figuring out how to do that is worthwhile, because otherwise we're losing out on the talents of those people. Just because you can't see doesn't mean your brain doesn't work. And finally, making open source sustainable and trustworthy is the task entrusted to you. So I apologize that I didn't leave any time, unless you wanna stick around for a bit for Q&A, but I really thank you for listening, and I thank you for what you're doing. Keep it up. Thank you. Thank you. I am confident that a few people will take you up on your offer of a Q&A. Okay. As long as it was a genuine offer and you're ready to spend a few more minutes with us. And it seems like it was. So I think, Hannah, you are somewhere in the room with a microphone. If you would like to ask a question of Dr. Cerf, please. Yeah, let me warn you that if I have trouble hearing the question, don't shout. That doesn't help. It's just a matter of clarity. But the nice lady with the microphone is wandering the aisles. Yep. So we have the first question. Usually I'm the one running around with the microphone.
But if everybody's wearing a mask, it doesn't do any good, because you can't lip-read through the mask. So, not so much a technical question, but having done all of this in your life story, what's next? Oh, well, I didn't tell you about the project that started in 1998 at the Jet Propulsion Laboratory, just after the Pathfinder landed in 1997, after 20 years of failure to get back to Mars. Remember, the two Vikings in '76, and then nothing worked. In '97, this little rover lands successfully. Everybody cheers. So I show up at JPL the next spring, and I get together with the team that did the comms for the Pathfinder. And we spend a couple of days together trying to figure out: what should we be doing now, that is to say in 1998, that's gonna be needed 25 years from now? And we said, okay, we want an interplanetary backbone network. So we started working on the design of the solar system internet. We are now at the point where we have new protocols that have been standardized by the IETF and the Consultative Committee for Space Data Systems. They are running on the International Space Station, and have been there for a decade. We've done deep-space tests with some of the NASA spacecraft that have visited comets, and things like that. We are prepared to go to the moon with the Artemis missions. There's a LunaNet design that Goddard Space Flight Center is working on. We are working together with ESA and JAXA and KARI as well, and of course NASA. So this is all coming together. There is a group, if you want to look closely: IPN, Interplanetary Network; IPNSIG, the Special Interest Group. IPNSIG.org is a chapter of the Internet Society. So we're located nowhere on Earth; we are in the rest of the solar system. I've been trying to get JPL to issue me a badge that says Resident Alien, which would be really cool. Fun fact: several of the folks that are running our AV here today are involved in some of the fun and exciting missions at NASA JPL.
So thank you all for running those cameras that are letting us share this talk on the internet. Thank you. Yeah, perfect. So we've got one more question in the back. Yeah. So there was a point on the previous slide talking about accountability and traceability, or something of that sort. Right. And it was talking about how we wanted to make sure that people were behaving well, and to protect corporations, governments, and individuals from these bad actors. A better question from my perspective is, how do we protect ourselves from government bad actors? Also a very good question. Again, interestingly enough, accountability makes sense in that context too, because we would expect governments to be accountable for what they've done. Not all governments are willing to be accountable. You and I would probably agree on that. But if we establish a practice of accountability, and we insist that that's part of the architecture and design, then at least we have a shot at it. I would not stand here and say I promise or guarantee that that will work, but I believe that we should hold ourselves to the philosophy that people should be accountable and governments should be accountable for what they do. And of course, one way to do that is to vote. Hi, so it's starting to look like, with the rise of VR and AR technology, that we're going to be in a lot more virtual worlds, but we're having a lot of problems just deciding on what standards to use, and even the underlying communications technologies. Do you have any advice, tips, tricks that we could possibly use while creating these? So remember, I probably don't have any better idea than you do, so be careful: whatever I say, you should take it with a grain of salt. I had the impression that there were some really interesting ideas a while back. Remember Second Life and VRML? Now, I don't know VRML intimately, so I could be saying, look at that thing, and you'd say, I looked at it and it's a pile of crap.
But there have been attempts to find ways of sharing the descriptions of multi-dimensional spaces. I think we're gonna have to experiment with this a lot. Here's the one thing I'm hoping for. Lots of people are gonna try 3D stuff, because it's all very exciting. We've got chipsets that can do it, and we've got headsets. Oh, that reminds me. This is gonna be weird. We've been spending the last two years doing video conferencing, Zoom and Google Meet and all these other things. And people are saying, yeah, I can hardly wait to do this with the 3D headsets. And I'm sitting here thinking, how's that gonna look? Because you've got the camera here, but you're wearing this thing, and so you look like Darth Vader. In fact, we all look like Darth Vader. And so in order to do this 3D conferencing, you're gonna have to have an avatar, which means it's a new business model, right? My avatar can have hair, for example. And you can rent suits from Ralph Lauren. I think we're a long way from getting this right. The one thing I will note is that the headsets have gotten to the point where there is less of a problem with people getting nauseated from proprioceptive signaling. Isn't that a great 50-cent word? The proprioceptive signaling doesn't match: your eyes are telling you you're going like this, and your body says you're standing here like that, and your brain says, this doesn't make any sense. And so then it makes you throw up, which is, you know, thanks a lot. So I think we still have a ways to go, but it's getting better. But I think there's a whole ecosystem waiting, and a lot of design and standardization. I would love it if it turns out that those three-dimensional environments could interwork somehow. But that would require a lot of agreement and cooperation and commonality of headset capability and coding and everything else. So guess what? You get to try that out. It's gonna be cool. We've got one more here, up in front, and then one on the side.
Well, you can yell, but we want other people to hear you too. I understand that. So, you mentioned supply chain issues, and specifically provenance, software provenance. Do you have any thoughts on systems like Nix, Guix, or Gitian? Have you heard of any of those? Those are not familiar. So if you wanna say a little more about what they are supposed to do? Well, Nix is about reproducible build systems, all the way to the executable. So essentially the hash is part of the binary's identity. Yes, got it. So I like that a lot. I mean, I like the fact that you can't make alterations without them being visible. You can make the alterations, but they'll show up if you check first. A lot of hardware boot systems do that, right? So I'm a big fan of that sort of thing. It's another example of trying to make sure that devices are equipped with the ability to figure out: where did this stuff come from? Has it maintained integrity on its journey before I actually load it and boot it up? So I like that idea a lot. I think we have one other question over there, and then we need to wrap up. No problem, so thank you. Thank you for answering my question. This is actually about confirming an anecdote that Professor Douglas Comer told our class, about the construction of the TCP/IP protocol in discussions with the DoD, and that one of the themes was running a network in a city that had been destroyed by nuclear war. And I was wondering if you could confirm or deny any of that. Right, okay. So there's great confusion in the history here. Paul Baran, when he was at the RAND Corporation from 1962 to '64, published an 11-volume series called On Distributed Communications. He was talking about mesh networks. This is '62. Mesh networks and packetized digital speech; he didn't call it that, he called them message blocks. He imagined routers, though he didn't call them routers either: relays on the tops of telephone poles across the country.
And so with this big mesh, you could blow holes in it, but as long as there was some connectivity, stuff would get through. He used hot-potato routing, which basically was: pass it along as fast as you can and try to get it to the destination. So that was his model. It never got built, but it was published. Then the ARPANET comes along. Now, the ARPANET was driven by an economic requirement. ARPA was spending money on a dozen universities to do artificial intelligence and computer science research in the 1960s. And every year, everybody said, you have to buy us a brand new world-class computer so we can keep doing world-class research. And ARPA said, we can't afford that. So they said, we're gonna build a network, and you can share. And everybody hated that. But they said, don't worry, we are going to fund all of you, so you don't have to hide your results in order to have an edge on next year's proposals. We want you to share your results, share your software, and share your computing capability, so we can accelerate the pace of artificial intelligence and computer science research. So they did in fact build the network, but it was not based on nuclear holocaust or anything; it was just based on trying out a packet switching technology, which we believed at the time would work a lot better than: dial up a computer, send some data, hang up, dial up another computer, send some data, and hang up. We didn't think circuit switching was gonna work for our bursty kind of applications. Of course, the telecom canon of the day was: of course you do circuit switching. That's how we've been doing it for, whatever it was, 70 years or 50 years. And so we asked AT&T, would you like to participate in this thing? They said, no, it won't work. But we'll sell you dedicated circuits if you like, so you can build your stupid network. And so they did, and we did, and it worked. So now, when the internet comes along, I was worried about exactly the problem of recovery from major failures.
We even did an experiment to figure out what happens if you had a partitioned network. Radia Perlman figured out how to do the routing system to recover from a perforated or bifurcated network. But in all honesty, none of those protocols were ever tested in the kind of nuclear holocaust that really blew a lot of pieces apart. I actually did, however, just to demonstrate it, fly packet radios in Strategic Air Command bombers, and basically cut up pieces of the ARPANET and then glued them back together using ground- and air-based packet radios, just to demonstrate that TCP/IP would link the pieces back together again. But it was not mature enough to deal with a serious post-nuclear scenario. So the real answer is, none of this was really built to do that. It was built to figure out how to get computers to talk to each other. Okay, gotta wrap it up. But thanks again. We really appreciate the opportunity. Thank you for joining us and for sharing with us, and for helping to be part of creating many of our careers. So with that, SCALE is officially over, at least for those of you that are attendees. For those of us that are wearing orange shirts and jerseys and other embroidered materials, we have a couple more hours or days of cleanup and wrap-up. If you wanna make that easier for us next year, we are always looking for additional volunteers and team members. But with that, we will see you again, March 9th through the 12th in Pasadena. It's only about six months away, which I think just scared my team quite a bit. Nine months away. They didn't want me to steal three months. You can be part of that, and make this journey easier for the rest of us. Thank you very much.