Hello. Welcome. Wow. I had a bit of a moment when I realized I had to do this, that I had to try not to fangirl, like, on stage. I'm going to do a really bad job of that. What Vint hasn't told you, which to me is in some ways the most remarkable bit about not only being the father of the internet: he landed in our country at eight o'clock this morning, got to Canberra at 10.30 this morning, had a lunch forum with a whole bunch of people from 12.30 until two, and he's now here and just did that. Yeah, exactly. There's a secret to this. Oh, it just involves coffee? One secret is to stay on Vint Cerf time. And the second one is the 900-mile-an-hour theory, which is once you get going up to 900 miles an hour, don't slow down and don't stop, because you'll just fall over. Just keep going. I'm not sure that scales quite like the internet does. For me, there were four things I really wanted to ask you, and we're going to see what we can get through in the time. First thing was, we talked about this a little bit earlier today, but one of the things I'm always amazed by is people's job titles and what they mean you do every day. So Vint, among many other things, is the chief evangelizer of the internet. Chief internet evangelizer. Yeah, I don't evangelize about everything, just about the internet. Yeah, well, I do know that means every time you go to Russia they ask you if you believe in God. I'm not going to ask you that. But I am going to ask you, what does a chief internet evangelizer do on a day-to-day basis? What's that job, and how do the rest of us get it? So I mentioned earlier today that I didn't actually ask for the title. The title I asked for was Archduke. And the people at Google said, you know, the previous Archduke was Ferdinand, and he was assassinated in 1914 and it started World War I. And maybe that's a bad title to have. So they said, why don't you be our chief internet evangelist? And I said, okay, I can do that.
So the primary role, frankly, is to work to get more internet built around the world, and that means going to places where there isn't any. So you've come to Canberra. Well, I heard about the NBN. We'll have this act all night. So in fact, believe it or not, only about half of the world is online, and there are still another 3.8 billion people to convert. There are places where the governments need to be persuaded that either investment by the government is needed, or at least policies that will encourage investment are needed. But even more important, I think, now is not just the building of infrastructure that can support internet, but also attention to making the internet useful for people, quantitatively as well as qualitatively useful, whether that means improved healthcare, improved income, improved living conditions, a variety of things that we would want the internet to demonstrate in a concrete way: information of local value, information in local languages, things like that. So that's part of my pitch when I go around the world talking about policy. I've been in the research group at Google since my arrival there in 2005, although I recently moved into the policy organization directly. And so I've been party to some of the really interesting research ideas that have come along, many of which have emerged out of internet applications that we and others have been exploring. So this has to be one of the best jobs in the world. Mostly I have the freedom to poke my nose into almost everything and learn. And I don't know about the rest of you, but the more I learn, the more I realize I don't know. And then I regret that I have less time than I would like to learn the stuff that I know I don't know, but that's just the way it is. Sort of the human condition. I'm afraid so. So will you talk a little bit about where Google's been going with ethics and AI?
I know if you picked up a newspaper in Australia, or frankly in many of the places one might spend time, artificial intelligence has been part and parcel of the conversations we've been having. I think there's been a lot of debate about what it means to think about ethical AI, what it would mean to think about a framework for that. And I noted with some interest that Google had published their first framework for this earlier this month. And I wonder if you could talk a little bit about how that came to be, what you think it stands for, and why that matters. So first of all, I have to admit to you that for a long time I always thought AI stood for artificial idiocy. And you know, there are some of the strange things I mentioned earlier, the translations that don't quite make it properly. On the other hand, we have demonstrated some really extraordinary capacity, for example, to understand and distinguish between cells that are diseased and cells that are healthy, or to detect diabetic retinopathy, for example, by looking at retinal scans and distinguishing a person who has the disease from one who doesn't. Those are all machine learning mechanisms. So I have very optimistic and positive feelings about some of those applications. But at the same time, we and others are seeing the need for transparency. Because if the algorithms don't work the way we expect them to, we want to be able to understand why not. We want other people to be able to understand that. With regard to artificial intelligence in particular, we recently published a set of principles that we propose to use to assess research efforts and business efforts that use artificial intelligence as an important ingredient. For example, the self-driving car company Waymo, which is a part of the... Google has now been restructured under a holding company called Alphabet, and Google is one of the companies in that holding company. But Waymo is our self-driving car part of Alphabet.
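The diseased-versus-healthy classifiers Vint describes are, at heart, decision boundaries learned from labelled examples. As a toy illustration only, nothing like the deep networks used for real retinal screening, here is a minimal perceptron in Python; every feature value and label in it is invented for the sketch:

```python
# Toy perceptron: learns a linear boundary between two classes of
# made-up feature vectors, standing in for "diseased" vs "healthy".
# Real diagnostic models learn from images, not two hand-picked numbers.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: 1 (diseased) or 0 (healthy)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical "scan features": low values healthy, high values diseased.
healthy = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]
diseased = [(0.8, 0.9), (0.9, 0.7), (0.85, 0.8)]
w, b = train_perceptron(healthy + diseased, [0, 0, 0, 1, 1, 1])
```

The point of the toy is only that "distinguishing a person who has the disease from one who doesn't" reduces to learning a separating function from examples; a production model learns millions of parameters and, as Vint notes, raises exactly the transparency questions a two-weight model does not.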
Calico, the California Life Company, is another one. Verily, which is a medical instrumentation company, is yet another one. Calico is interesting, because they noticed that people get old and they're trying to figure out how to stop that. And of course, I was afraid the engineers would take the obvious solution. Okay, next problem. That wasn't what we had in mind. So the AI that we're pursuing is intended to produce, let's say, useful results that augment our own capacity. This goes all the way back to Doug Engelbart and J.C.R. Licklider and their belief that computing in its varied forms would allow us to do things that we couldn't normally do as human beings. If you think about searching the web and the mechanism that's required to do that, the crawling that our computers do to look at every web page, find every word on every web page, index every single web page, and then help you find the web pages that have the words that you're looking for. That's a task that no human being could do, but a bank of computers can do it for us and help us find what we hope is useful information. So I think that we're heading down what I think is a thoughtful path to make sure we don't oversell and over-depend on these AI and machine learning mechanisms. I hope we're doing a good job of educating people about not becoming overly dependent on these things and having some healthy skepticism about the efficacy of these kinds of techniques. Nonetheless, I am persuaded, as are many others, that these are really powerful augmenting capabilities that we should not ignore and should try very hard to employ in constructive ways. So can you talk a little bit about... I'm interested in why companies want to regulate that and how they'd go about doing it. I know that the principles from Google and Alphabet, of which there are eight or nine, talk about what AI should be in the service of.
So: in the service of humans, in the service of scientific excellence, designed for privacy by principle. Do you have a sense of how that's going to play out, or what other companies and institutions should be doing? So I'm happy to speculate a little bit, although I'm not sure that I can predict anything, considering I screwed up the address space so badly with the 32 versus the 128. You were close with the 122 countries, sort of. First of all, I think that it is indicative of the times that Google and others are starting to realize that it's important to think about the consequences of these technologies, and not to simply get excited about making things do something new or do something different or do something faster. And so the fact that we're having that dialogue, for me, is very encouraging. Educating the general public to be a little skeptical of this is equally important. And although you didn't exactly ask this question, like every good graduate student, you distort the question around until you can answer it. So I'll do that, too. You didn't mention Bitcoin, but it's a very good example, or blockchain generally is another example, of hyperbolic technology references, just as AI and machine learning have become very hyperbolic. We should be a little suspicious of all of those things. And although it's good to encourage people to think out of the box, to think in terms that are non-conventional, at the same time we shouldn't imbue these things with magic, because they're not magic. So people like you, who watch the human condition through anthropological eyes, can help us a lot in two ways. The first one is to remind us that there have been other advances in technology that have changed the way we live our lives. And, you know, it wasn't the end of the human race and it wasn't the end of thinking.
Can you imagine, you can imagine this scenario: somebody invents writing and somebody else comes along and says, oh my God, that's the dumbest idea I've ever heard. People will never remember anything. They'll just write stuff down and then they won't have any memory left. Well, writing has turned out to be a very important part of human culture. And the thing I like the most about it is that it lets you communicate with the future even when you're not there. Of course, you don't get to go back the other way, but at least the idea that you could communicate with someone a thousand years from now, for me, is really exciting. Of course, that's presuming that anybody cares about anything I wrote a thousand years from now. But this notion of being able to retain memory is very exciting. I'm looking to you, actually, to help us put in perspective some of these dramatic technologies that show up, and maybe help us understand whether we should be fearful, whether we should be ebullient, whether we should be skeptical, whether we should find a way to be realistic about it. That is, in a way, something that you can contribute to our understanding of what these technologies might do to our culture and our society. And something else you said earlier today over lunch was that we do not have a single global culture. We have many cultures, and they interpret and use technologies in different ways. And understanding that we are not uniform is probably yet another gift that anthropologists like you can offer to people who are trying to fashion technology for different organizations, different groupings of people around the world. Which brings me, embarrassingly, to the first time I met you, which happily Vint does not remember but I remember vividly, at a conference 12 years ago this week, actually, in America.
I was the after-dinner keynote, which is never a good place to be, because everyone has dessert, and a keynote talking about the relationship between technology and culture is a lot less compelling to a group of computer scientists. And I was basically trying to argue that culture mattered and shaped the way technology was used. And I will confess that at that point I'd been in the tech field for about seven years. I knew as much about the internet as an anthropologist might under those circumstances. And so this man in the front row puts his hand up and everyone kind of stops. And he says, are you suggesting that technology shapes culture? And I went, yeah, I think so. And he's like, I'm unconvinced. And I'm thinking, uh-oh. And he's like, do you have other examples? And I'm like, yes. Many examples follow. And he's like, hmm, are you suggesting that different technologies would turn up differently in different cultures? And I'm like, kind of. And he's like, mm-hmm. And this goes on for 15 minutes. And I was getting a bit bolshie, because, Australian anthropologist. And finally I decided I was now the only thing standing between a room full of computer scientists and an open bar, and I called time and basically said, it's been lovely chatting, I should go now. And I come off stage and someone looks at me and says, oh my god, how could you talk that way to Vint Cerf? But it embarrassingly gets better, because I say, who's Vint Cerf? And they said, the man in the front row. And I'm like, I know it was the man in the front row, but who's Vint Cerf? And my colleague looks at me and says, he invented the internet. And I think I said at this point, he has a lot to answer for. Exactly. No, it's a terrifying thing to admit that you didn't know who it was. But it does lead me to my next question, having embarrassed myself publicly. Which is, I mean, I know there were lots of people involved in how we got from the ARPANET to the internet to where we are now.
But looking back on that, what do you think the thing is that most frustrates you about that piece of history? I mean, I know you went through the unfinished business. Well, I can think of a fairly broad range of things that are frustrating. Regulators who don't understand the laws that they're passing and don't understand the technology. In the US, I've been wanting to write a congressional comic book about how the internet works just so that they can understand what they're talking about. We'll find you some other audiences for that. But I think if there's a frustration here, honestly, it is discovering that human beings are not perfect and that many of them don't have your best interests at heart. And they will use these technologies to harm people in a variety of ways, whether it's financial or psychological, cyberbullying. There's a long list of things that you're all familiar with. And I don't see any obvious way to inhibit that. So in some sense, here we are. We have this wonderful platform which enables a kind of collaboration and sharing of information like nothing we've ever seen before. We have computers helping us find information and interpret it and maybe massage it in ways that give us new insights. All these wonderful and positive things about this new environment. And yet we have people going around doing all these other bad things. So it's almost an annoying distraction to have to sit back and say, how do I inhibit some of those bad behaviors? How do I detect it? How do I make it harder for people to do identity theft and do all the other things that they do? And for me that is a source of frustration because I'd much rather put energy into doing more constructive kinds of projects. And yet I know that if we don't solve those problems that people will lose trust in this environment and won't want to use it and will lose whatever advantages there are. So I guess this is frustration with the human condition. There's not much to be done about this. 
Well, there are three things that we can do, actually, now that I think about it. This is my nostrum for the evening. The first thing you can do is to find a technical means of inhibiting the bad behavior. Sort of like, you know, breathalyzers in the car. People find their way around that. Or cars that are smart enough to know that they should stop if they're about to run into something. Those sorts of technical means reduce the likelihood of bad behavior causing trouble. Then there's what we could call post-hoc enforcement. We could tell people, these things are anti-social, these things are not accepted in our society. If we catch you engaging in them, there will be consequences. But only if we catch you, so not everyone will be caught; still, we make laws and we try to enforce them to say that if we catch you doing these things, there will be consequences. If there are things that we could agree on globally, that we all agree are bad for society, then maybe we can even have some global capacity to visit those consequences on people that cause harm. And then the third thing we can say is, don't do that, it's wrong. And I know that sounds wimpy, but I want to remind you, Brian will recognize this argument, that gravity is the weakest force in the universe that we know about right now. And yet when you have significant mass, it's a very powerful force. So when social gatherings, when polities, agree on certain principles, that can have a very powerful effect on individuals. It's peer pressure, it's norms. So we're starting to see some norms beginning to arise in this online environment, which I hope will help to corral some of the baser behaviors that we encounter. So we had a long conversation at lunchtime with a number of Australian leaders, and I think running through that I heard a real thread of anxiety, I don't know if you did, about the future, about technology, about what it means for our societies, our families, our cultures, all that kind of thing.
And I was really struck, thinking about it, that it's easy to have that conversation; it's much harder to get to action. And whilst I might be an anthropologist in a university, I'm still kind of oriented to the notion that we should build a future we want to live in rather than agonizing about what it might be like. And I guess I wanted to take advantage of it being you and it being here to ask: what can we do as individual citizens, or groups of citizens, to help build a world we want to live in? So there are all the ways we can critique it, but I also think there's the kind of, you've got what, 300, 400 people in this room, you tell them they all need to call their service providers and ask whether they have IPv6. IPv6, yes, I wrote that down. You will go do this. But I'm wondering what else. That may be a million messages going to Telstra for all I know, but that would be good. But that's okay. I'm wondering what other one or two things you would suggest that we could do as citizens, not as consumers but as citizens, to help kind of activate things and have a little less anxiety and a bit more action.
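On that IPv6 homework: the reason the upgrade matters is the address arithmetic Vint alludes to with "the 32 versus the 128", 32-bit IPv4 addresses against 128-bit IPv6 ones. A back-of-the-envelope sketch in Python, using only the standard library; the two addresses shown are reserved documentation examples, not real hosts:

```python
import ipaddress

# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32    # 4,294,967,296 -- fewer addresses than people on Earth
ipv6_total = 2 ** 128   # about 3.4 x 10**38

# The whole IPv4 address space fits into IPv6's space 2**96 times over.
expansion = ipv6_total // ipv4_total
print(f"IPv4 total: {ipv4_total:,}")
print(f"expansion factor: 2^{expansion.bit_length() - 1}")  # 2^96

# The stdlib parses both families; these are documentation-only ranges.
v4 = ipaddress.ip_address("192.0.2.1")    # TEST-NET-1 (RFC 5737)
v6 = ipaddress.ip_address("2001:db8::1")  # 2001:db8::/32 (RFC 3849)
print(v4.version, v6.version)  # 4 6
```

So when you call your provider, the question is whether they will route you one of those 2^128 addresses natively, rather than sharing out the exhausted 32-bit pool behind address translation.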
Well, the first thing would be: be serious about digital literacy. And let me just give you some examples of what I think a digitally literate person would understand. The first one is the fragility of digital content, the fact that even though it seems like bits are ethereal and they would never wear out, the medium that they're put into is not guaranteed to last for very long. And as an anthropologist, and maybe even as an archaeologist, you could realize that even though we record things digitally, like all the pictures that we take on our mobile phones, we seem to pretend like they'll last forever, they're always there whenever we're looking for them, until they aren't. And so digital literacy in this case means knowing and understanding that these media need to be catered to; you need to copy bits into new media. To be honest, when people say, what should I do with all my digital photos, I tell them: take the ones that you care the most about and print them on really good quality photographic paper. And the reason I tell them this is that we know for sure that those will last at least 150 years; we're less sure about any digital medium. But I can guarantee you I'm thinking and working very hard trying to find ways of assuring preservation of digital content. We need business models; we may need to emulate old hardware to run old operating systems to run old applications to resuscitate digital content. So there are a bunch of things that need to be done along those lines, and you should be conscious of that, you should be aware of that risk. I think also, so wait, let me just pause there. One strategy for managing a digital world is to make it physical again? I'm sorry, one way to manage the digital world is to make it physical, in a sense. Yes, the digital world is physical. I mean, we think about it as metaphysical, thanks to Gibson and cyberspace, but it's realized in physical devices that work the way people like Brian understand. I don't, but Brian does. I mean,
all these little funny things going on, tunnels and so on, and protons and neutrons and electrons are buzzing around and doing their thing, and then they don't. And when they don't, the digital stuff disappears. So it's manifested in physical ways, and we need to literally rejuvenate the content in different media as time goes on, as technology changes. But the other thing that I would urge you to keep in mind is this critical thinking notion. We are not strangers to critical thinking. We're not even strangers to information overload. Even before the internet, probably no one in this room read every book that was published, read every magazine, watched every movie, saw every television show. We didn't. How did we cope? How did we cope with this avalanche? Well, the answer is we didn't read everything. We relied on our friends to tell us things that they thought were worth reading. We relied on sources that we trusted, those are hard to come by these days, to tell us that we should pay attention to this, this and this, because it had good provenance. So I think we need to revive this willingness to look for trustable sources that help guide us to decide what we should consume, what we should reject, what we should evaluate. We will do ourselves a big favor, and we will probably manage to blunt some of the negative side effects of the abuse of social networks. I think those are two nice pieces of advice: think about digital literacy, and work out how to activate and encourage critical thinking. That's nice to be saying inside Australia's only national university. And I'm going to call time, because you've been up at the speed of 900 miles an hour for a very long time, and I want to thank you so much for coming to the ANU, for visiting us here and spending your time with us. Thank you very much, ladies and gentlemen. We have one last piece of business. It's my very great honor to deliver the vote of thanks and make some closing remarks for today's session. So thank you very much for a truly inspiring
and extremely interesting talk. It was a fascinating and candid conversation with Genevieve. And today's date for your presentation is actually a very auspicious one. Three days ago was the 23rd of June, and the 23rd of June is special for two reasons. The first is that it's International Women in Engineering Day, yay. And the other reason is that 29 years and three days ago, the internet came to Australia. Oh, that's right. Wow. And the story of how it came to Australia is actually an interesting and important one, and like all good stories, it starts with people complaining about ANU professors. We go back in time to the 1960s, when ANU professors used to come back from America telling stories of strange and mystical lands where Vint and his colleagues were making the internet. And they would come back and they would say, this is a really remarkable and powerful thing, and we need to start thinking about this. Fast forward to the 1970s, and they were starting to get a little bit agitated. They were concerned that Australia was missing out, and they were concerned that really important things were happening and we were going to lose our best and brightest to the rest of the world. I suspect that what was actually going on was that they were yearning to join that story. In the 1980s, people had started to take action. Smart people were starting to build local networks, usually in universities, and those networks were hand-written for the hardware that they sat on. They were not connected to anything else, and in fact there were stories of people having eight computers on one desk so that they could talk to all eight networks in their university. And then we come to another hardy band of pioneering Australians, who decided that they were going to get over their institutional and technical differences and, like good pragmatic Australians, cobble together a coalition of people from universities. They got CSIRO involved, and they went to the Australian
Research Council to seek funding to create Australia's first internet. And so the internet came to Australia for the princely sum of 1.77 million dollars. And then two blokes in an office at the ANU's computer centre bought 30-odd routers, got in a car, and drove around Australia for six weeks, and that's how we got the internet. That's a great story. So it's a very typically Australian story, and one of the things to think about there is that in fact there was very limited strategic insight and direction from government. What was happening was that there were two generations of pioneering Australians who wanted to make this happen. The rest, as they say, is history. Traffic on that nascent internet doubled every eight months for the next five years, and suddenly the vice-chancellors of Australia realised that they were sitting on top of a very large commercial company, and they didn't actually feel qualified to operate it. So they sold it, and interestingly, they sold it to Telstra, who in the early 1980s wanted no part of this thing; neither did government. And so Australia got an internet because of fearless, farsighted academic leaders as well as pioneering computer scientists, and that is an interesting tale for us to understand. You will have noticed that we got the internet 20-odd years after the rest of the world did. And so I'd like to conclude by thanking you, not just for your talk, not just for the internet, but for creating worlds that visionary, pioneering Australians wanted to be part of. And you've told us another tale of the next wave, and I can assure you that sitting in this room are more pioneering, visionary Australians who want to join you in remaking the world around us. So please, ladies and gentlemen, join me in thanking Vint one last time.