 That's an embarrassment of riches if ever a woman has had them. So I too need to begin with the appropriate set of acknowledgments and thank yous. I want to acknowledge that we're on Ngunnawal country and it's really lovely to be welcomed back to country. One of the things I've most missed living overseas is the capacity to acknowledge the people on whose country we stand. So thank you for that, Wally. It was great. And we're not related, I should hasten to add, though I wish perhaps we were. I also want to thank the ANU for, well, I don't know, Brian, for putting a hard word on me and telling me I needed to come home. And Adrian Turner, wherever he is, and Data61, for their partnership, and Intel, who refuses to let me go. So I get to have all the logos in the world attached to every talk I give. And I want to thank my extended family, and especially my mother, who are here in the audience today. So no pressure. All right. So, notes on an applied science. There's a bazillion ways that I can be introduced, and I now realize I have an incredibly long litany of titles that will no longer fit on a business card. And we can talk about being a professor of engineering and computer science, being at Intel. But frankly I think the piece that's most important for the conversation I want to have here is the one that starts by putting a flag in the ground that says: and I'm also an anthropologist. And I'm the daughter of an anthropologist. I grew up on my mother's field sites in central and northern Australia in the 1970s and 1980s, and when I wasn't there I was here at the ANU. I have the classic Canberra pedigree: Turner Primary, Lyneham High School, Dickson College, Department of Administrative Services and local government. Then I decided that if I didn't make my way out of Canberra I might never leave, so I took myself to the United States. I ended up doing my PhD at Stanford in Native American Studies, feminist and queer theory. 
You can see how one would quickly get from that to a job in industry. I was hired at Intel back in the late 1990s after I met a man in a bar. This is not career advice, I hasten to add. But in the late 1990s, hanging around in bars in Silicon Valley was a particularly good way to get a job. And indeed I did. I've spent the last 18 years at Intel, in Silicon Valley and beyond, where my job has been to think about the role of people's lived experiences in building new technology. How do we think about what it is that people care about? What frustrates them? What they're passionate about? What matters to them? And how those things can in turn drive the next generation of technology development. And I spent 18 years getting to do that in basically the epicenter of the last technological revolution. And yet I left and came home. Increasingly I'm beginning to think my life has just been one big fang around the block. And I have come back to Canberra, but I came back with a reason. Which is that although I've been at the center of building new technologies, I increasingly think there's another conversation we need to have that we haven't been having. Which isn't just about the technology. It's about what it means for us to live with that technology, and what it would mean to take a piece of control and agency over that technology and start to shape the direction we all want to go next. Which leads me to the incredibly bolshie thought that I actually want to build a new applied science. When I put my hand up and said I wanted to build an applied science, one of the reactions was people saying to me, that seems a bit ambitious. The silent word there was "dear". Let's be clear. Nonetheless I said, okay, I actually think you could build an applied science, and there is a long history of that having been done. 
And the notion of what an applied science is, is in fact a thing that the ANU in particular, but many of us in the room, are engaged with: the idea that you can have both theory and praxis. So a body of knowledge and a notion that it should be applied. The idea that there are moments in time when the world changes, either in its technological infrastructure or its social one or some combination of both, and there needs to be a moment when you respond and react to that and say, how might we want to engage with that world differently? And I think we're in that moment right now. And I say that in some ways having spent 20 years in the valley and knowing both where we have been and where we are going. I've been lucky enough to watch the internet and the web unfold, and 20 years of computing technology that got closer and closer to our bodies and our lives. And the reality is, whilst that was a remarkable transformation, the one that we are standing at the beginning of is a bigger one yet. And you all know pieces of this puzzle, right? We talk about artificial intelligence. We talk about big data. We talk about algorithms. We talk about autonomous and semi-autonomous machinery, self-driving cars. That entire constellation of things, which up until now have seemed like individual technological interventions, is starting to become a system. They're starting to work in concert. We can start to see what happens as data moves and circulates. We can start to see what happens as we have conversations about who owns that data and where it might be going and what's going to happen when it gets there and what it means to think about those technologies circulating. And for me, there's something about standing in that moment that starts to suggest we need more than the tools we have had up until now. 
And that there's something that says that set of technologies demands a response beyond the response that says, oh my God, we're all going to die or self-driving cars are the end of driving or artificial intelligence will kill us all or, or, or. There is a great deal of anxiety that circles here but I actually think there's room to make a more structured and deliberate intervention and I think it's about how you build an applied science. Now there's lots of ways to describe the moment we're in. The most popular one is to say it's the fourth wave of industrialization and that is seductive. There's something particularly lovely about that graph because it suggests it's all manageable. We've been here before, we know what happens. There'll be technology, there'll be a wave of industrialization and then there'll be another one. And it has this really nice periodicity to it. First wave, second wave, third wave, fourth wave. It's a comforting thought, right? You could describe all of this and go, yep, we know that one. Steam engines, mass production, computers, something a little more vague up here, cyber physical systems, but nonetheless that feels like a comforting story. Of course what it also does is make a fetish of the technology that sits behind it and it makes it all about the technology and it doesn't make it about what else is going on here, which is slightly more complicated, right? It turns out that if you take in some ways a step sideways and a step back from those waves of industrialization, what you see are dramatic transformations in social systems, in economic systems, in regulatory and government systems, in public policy responses. And you see that increasingly those no longer look like the technology. And then in fact what is going on there is that those technologies build and in turn are shaped by the worlds that come into existence. And so what do I mean by these kind of notions of systems and complexity? Well part of it's really straightforward. 
When you think about the first wave of industrialization, the first technology that usually comes to mind is the steam engine. Now of course the steam engine in and of itself doesn't create industrialization. It's when the steam engine is married into something larger, usually, it turns out, a train. So we have steam engine, then we have train, then we have railway, then we have transportation system. And by the time you get out, four steps removed from the original steam engine, we're now talking not only about a system but a level of complexity and abstraction that required a degree of management that hadn't existed before. That same notion of what happens as technologies adhere and build up into systems requires a way of responding that isn't just about saying, oh, we should build steam engines. It actually is to say, what would it mean to regulate that system? How would we think about the pieces? Who are the people that are gonna build it? How are we gonna manage it? What does all of that look like? And the reality is there are really interesting lessons you can mine from history here about how you build an applied science now, and about why you need it, and about the fact that this isn't the first time we've been here, right? And I think those are interesting and instructive examples, where what you wanna do is unravel where we have been. And you can start with the first wave of industrialization, or frankly that first time when machinery gets complicated enough that it's not just about building it. And we all know these people. I sit in one of their colleges; we now call them engineers. But back in the 1700s, engineers didn't exist as a category. There were people who were wheelwrights, there were guys who understood what metal looked like under pressure, but there wasn't a category of engineering. In fact, the first school of engineering doesn't start until 1794 in Paris. Now, 1794 is an interesting time to start a school of engineering in Paris. 
It would be the École Polytechnique. For those of you who didn't spend enough time in history classes in school, I will tell you why 1794 is interesting. It turns out it is remarkably close to the moment at which the French Revolution happened and they had killed their king. Now, what is the relationship between killing kings and engineering? Well, it's an interesting one. Once you decide to kill a king and destabilize the notion of the monarchy, destabilize the idea of absolute rights, and also undercut the role of the church, you find yourself in an interesting moment of going, basically, ah, shit, we don't have any experts. And effectively, I did just say that, and effectively, there was a moment in time where the response was to say, well, how do we train a group of people who will be able to build our world without reference to power by blood and power by institution, but, in fact, power by knowledge? And so when the École Polytechnique is launched in 1794, it brings together a bunch of thinkers, mostly philosophers, mathematicians, and early scientists, and says, all right, mostly gentlemen, there was one woman, mostly gentlemen, we need your help. We need to create a category of people who can help stabilize our empire. We need to build roads, we need to build bridges. We need to build buildings, we need to build systems, and we need people who can help do that. And so here you have the first school of engineering as, in fact, at one level, a remarkably radical intervention into both the creation of knowledge, but also the creation of work, and it's state-based. The second one actually turns out to be in Constantinople, and is a school of naval engineering, which is kind of wonderful too. It takes a while for engineering to get out of Europe. Engineering in Britain doesn't become a disciplinary practice in a university until well into the late 19th and early 20th centuries. Before that, it's run as a series of civil societies and a series of apprenticeships. 
If you wanted to be an engineer, you apprenticed yourself to Smeaton or Maudslay or Brunel. You went to their companies, you learned their techniques, you basically spent the rest of your life working there. You were certified inside those organizations and sometimes by larger organizations. But there wasn't a notion of engineering as a qualification in that regard. Ironically, given the current state of the United States, the first schools of engineering in America owe all of their intellectual heft to the polytechnic tradition. The first school of engineering in America was actually the Army Corps of Engineers and was built on the basis of the French system. We won't tell them that, because I think it might upset them. But all of their engineering schools came out of a French notion of praxis and theory again. In Australia, the first school of engineering is at the University of Melbourne in 1865. It takes the ANU a little longer to get there. But we will go on to great and glorious accomplishment because Elanor Huntington is gonna take us there, and that makes me happy. So, we have engineering. It's a radical intervention. Sometimes it's state-based. Frequently it is about bodies of knowledge being constituted, and it takes a little while for engineers to get produced as a discipline. Delightfully for those of us in the room who come from disciplines that would think this was funny, engineering claims it's interdisciplinary. Usually what this means is there are chemical engineers, mechanical engineers, civil engineers, and electrical engineers. I don't consider that to be quite interdisciplinary in the way I was raised. Nonetheless, you get a diversification of engineering and a certification of knowledge. And importantly here, because of the work that engineering does, frequently a need for those things to be certified at a state level through a series of effectively regulatory practices. 
So engineers become certified practitioners in a way that would be quite uncommon, frankly, in the social sciences. So flash forward, yeah, about 100 years. Give or take a little bit. You start to see the consequence of industrialization being not just the production of new sorts of technology, but the production of new sorts of capital. And one of the things that happens as the Industrial Revolution scales up is that it produces money in a way that has never been produced before. And you have the emergence of people you would think of as industrialists. People who had a tremendous amount of resources; they were building companies at a scale that hadn't been seen before. They were managing inventory and supply chains. They were managing workforces. And in 1881, a man named Joseph Wharton, who was a Philadelphia Quaker, whose mother had started Swarthmore, finds himself in a really interesting position insofar as he is now a part owner of Bethlehem Steel. He has a whole lot of money, and his bookkeepers are just no longer managing to keep up. And so he takes 100,000 US dollars in 1881, just for reference, about 3.2 million Australian dollars in today's dollars, goes to the president of the University of Pennsylvania and says, excuse me, could you build me a better bookkeeper? And to the credit of the president of the University of Pennsylvania, he went, I think it's not bookkeeping you want. I think actually this may be more complicated than that. And so they take a year to go work out what it would mean to build a better bookkeeper. And at the end of that year, it becomes very clear to them that in fact what they are talking about is what we would now know as business and management science. 
But they do that by pulling together people out of economics, out of the beginnings of psychology in the US, out of philosophy, the law, English literature, because it turned out they believed it would be good if you had read well, as well as some pieces of what Americans would call civics, so they thought it would be good if you would understand history. And they built a curriculum. We know that place now as the Wharton School of Business. It launched in about 1882. Its stated mission at that point was to bring fully rounded men, I hasten to add, of course, fully rounded men, to business. But the idea was that they would know how to not just do accounting and bookkeeping, but also how to think about the circulation of money. And out of that first school of business, a whole lot of terms that are now not just terms of art, but things we all refer to, were born. The first GDP measures came out of there, the very first theorizing of how to do marketing, the very first marketing departments, some of the first theorizing of how you do labor relationships were all born out of this attempt to think about a better bookkeeper. Lots of schools of business followed. Harvard, Berkeley. Australia is a little late to this one. We don't get our first school of business really by any standards until about 1955. And again, it's the University of Melbourne. It's not a good story, Brian. We should work on that. So you get business, right? And here it's about, effectively, industry coming to universities and saying, excuse me, I think we need your help. Like, can you help us think this through differently, right? We need a different set of skills. And it is clearly, by this point, an interdisciplinary exercise that ends up in a new form of disciplinary knowledge. Last but by no means least in the galloping tour of applied sciences is of course computer science, which is in some ways the most recent of the attempts to tame and manage machinery through intellectual heft. 
Now, of course, computers are complicated things. They turn up en masse by the late 1940s, initially managed by mathematicians and electrical engineers. It becomes clear by the 1950s there's more going on here. And there are multiple attempts to create a discipline around computing science. It isn't called that initially. The first attempts are funded and freighted out of an American think tank, the Macy Foundation, with the Macy Conferences from 1946 to 1953, which were ironically and interestingly enough spearheaded by Margaret Mead and Gregory Bateson, two American anthropologists, whose job was to bring together mathematicians and philosophers, including people like John von Neumann and Norbert Wiener, to think about cybernetics. So this was the first attempt to think about the human-computer interface and also the human-computer system. Cybernetics doesn't unfold the way anyone intended and it doesn't really become established as a discipline. And in fact, it fades out by the late 1950s, only in some ways to have the same intellectual agenda repackaged and renamed in 1956 at Dartmouth with some National Science Foundation funding, which is when artificial intelligence is born as both a phrase and an intellectual agenda. So in 1956, a guy named McCarthy brings together a bunch of, again, mathematicians and philosophers and behavioral scientists at Dartmouth and says, okay, we need to think about this whole thing; this computing thing seems to be getting faster and faster and faster. The story we tell about computers is they're like brains. If they're really like brains, we could make them like people, or so the thinking went. And for about 12 weeks, that conversation proceeded over what sounds like a very long summer in New Hampshire and a lot of kind of moving pieces. 
And at the end, there was something that became known as artificial intelligence, with seven pillars of research that have gone through multiple winters and multiple dark periods since then, to reemerge in some ways as the technology finally got good enough in the last five years. Immediately after that, however, people started to say, okay, AI is a specialty, but there should be a bigger name for this whole space. And computer science as a notion starts to emerge. Now there were practitioners before that, frankly, again in Australia. We had the second or the fourth, depending on who you listen to, second or fourth stored-program computer on the planet. It was a thing called CSIRAC. It was built at the University of Sydney, you'll be happy to know. Although the University of Melbourne got it afterwards. Sorry. It was built at the University of Sydney in 1948, 1949. It took an extra nine months to be brought into production because there were electrical shortages in the post-war period in Sydney. So although the computer existed, we couldn't turn it on. It's probably not the best story ever about Australia and know-how. But we were building computers early and working out how to program them. And those conversations proceed here, but mostly they take place in the United States. And specifically at my alma mater, Stanford University, where a man named George Forsythe, the father of Diana Forsythe, the anthropologist, brought together a bunch of people to build a computer science curriculum. And he did it, again, in a way that those of us who are social scientists or humanities people in the room would find kind of staggering. He pulled together about 20 people from around the United States who were calling themselves computer scientists and said, right, we need a curriculum. And everyone went, okay. And they wrote one. It's about 10 pages long. They circulated it a few times. 
And then in 1968, at the Association for Computing Machinery, they released it to everyone and said, right, here's the curriculum for computer science. And everyone went, that's great. We'll go home and implement it. If you're an anthropologist in the room, you are trying to imagine what it would be to have a curriculum for computer science or anthropology that someone gave you when you were going to go home and implement it. Same with English literature. Same with philosophy. We would not do this, but I love the fact that computer science did. It's great. Makes it much easier to build an applied science in the 21st century knowing that one of the models is: here's your curriculum, now implement it. So computer science is, interestingly, in this case also a political activity, for want of a better way of framing this. Part of the reason that Stanford and Purdue in particular were the places this got built has to do with various government agencies coming to universities and saying, we're a little concerned that computing is being built by companies. At that point, programs were company specific. FORTRAN at IBM; Grace Hopper's early program, FLOW-MATIC, at Remington Rand. Those were company-owned, specific software programs. And for the American government and a number of other agencies, there was a concern that you didn't want all of that knowledge held hostage inside a company. And in fact, what you needed to do was find a way of generalizing that knowledge, making the principles available, and creating a mechanism by which it might not be proprietary knowledge. So, three very different notions about applied sciences, right? One that is government driven and is in some ways a radical intervention into nation building. One that is industry driven, as a request of the academy: help us build a better bookkeeper. And one that is about government's request of universities to help them find an open space. 
Very different impulses, very different results, but all the same underlying notion. How do we combine theory and practice? How do we find a way of making a new discipline and a new set of disciplinary knowledge and a new intellectual agenda? Which gets us to the moment I think we're in now, right? Which is, well, what's the thing we're managing? If for that first wave of the industrial revolution it was steam engines and other forms of large machinery, if that second wave is about capital and mass production and things like that and the third one is about computers, what is this fourth one? So Schwab back at the World Economic Forum when he put out the fourth wave of industrialization says it's these cyber physical systems. I think it's more about data and the circulation of data, its creation, its commodification and the ways in which it moves and about the ways in which it is already in our lives. Most of you encounter forms of this system every day. You don't need to have a self-driving car to be encountering data. You need to log on to the internet. If you have opened the web and you have gone to Google or Netflix or Amazon or on your mobile phone, you have gone to Uber or Tinder or whatever it is that you're doing on your mobile phone at this moment in time, Twitter maybe. You've encountered data and an algorithm and that algorithm is making decisions about what data you see, making decisions about our taste, our media content, who we find desirable. And those systems are already with us. Now scale that up. Those are systems we deliberately and explicitly encounter. Now imagine the ones we can't see, whether it's about buildings that are smart about who's in them, whether it's about real time capacity to calculate the students in your building and what they're doing, whether it is about how cars will function on the road. 
Those all still seem relatively okay, except we already know that the first court cases that are proceeding in the United States ask some very different questions. Two court cases in particular I think are worthy of note. One has to do with a person whose fitness tracker, so their wrist-worn quantified-self object, think Fitbit, Jawbone, Nike Fuel, whatever your preference is, your Apple Watch. But this is not an Apple Watch, you'll be happy to hear, Jane. This is a different quantified-self object. This particular individual has been charged with a crime, and his quantified-self object is being used to demonstrate that he perjured himself on the stand. Because he said he was at home in bed asleep, and the device says he was somewhere else and clearly not sleeping. Now, there's a number of troubling aspects to that. It turns out the information your body generates, which we would otherwise think of as being quite intimate, is, at least at law in the United States, not owned by you. It is secondarily the case in the United States that that same information that is not owned by you can be requested by a third party about you, and, given the appropriate legal jurisdiction, that request will be granted. So I'm willing to bet, if you were a lawyer, you might think to yourself, huh, that should almost be covered under spousal privilege. I mean, it's awfully close to my body and myself, and yet it's now being held in a third place. And then think about all the ways you have generated data over the years, and imagine how you feel about the fact that that data may now circulate about you to others, and we haven't actually thought through what that means. A similar court case that's unfolding in the United States has to do with managed healthcare service providers, at least one of which, on the east coast of the US, has just purchased all of the credit card data of all of their clients. 
So they have married your credit card data with your healthcare data, and they are now thinking about variable rates on the basis of your credit card purchases. Think of the number of people who will immediately go to cash under those circumstances, and some other people who didn't realize that they ate at McDonald's quite that often. Or, frankly, it turns out that there is a direct correlation between IKEA flat-pack furniture purchases and emergency room admissions. So if you often thought that Allen key meant you harm, you are correct, and we can prove it. Nonetheless, that creates an interesting problem, right? If that's just the world of data we're in now, think about what it looks like as it expands out, and as that data isn't necessarily explicitly, knowingly collected by you about you, but is collected in other, tacit ways. And then imagine all of that wraps around everything from our infrastructure to the tools that any agency uses when they engage with you. If you were, for instance, today to decide you were going to build a salary tool, and you fed into it all the data about salaries from, I don't know, a reputable Australian source like the ABS, and you were then to determine how salaries should be paid on the basis of that: for every woman in the room, I have bad news for you. You'll be paid less than the man you were sitting near or in the same row as, because it turns out the historic data suggests that women are paid less than men, and the way you would build that tool is on historic data. So now imagine how it is that we build a world of data, and based on data, where all of that data is always retrospective. It's about what has been, not what will be. And think about where you might need to intervene in order to change that, and how we go about managing all of that. My supposition here is that that may be beyond where computer science and engineering have been historically. 
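[Editor's aside: the salary-tool thought experiment can be made concrete. The sketch below is hypothetical; the records in `history` are invented numbers, not ABS data, and `predicted_salary` is a deliberately naive model. The point is structural: a tool that predicts pay from historic records simply replays whatever gap those records contain.]

```python
# Hypothetical sketch of a "salary tool" fit on historic pay data.
# All figures are invented for illustration; only the mechanism matters.
from statistics import mean

# Invented historic records: (years_experience, is_woman, salary)
history = [
    (5, 0, 95_000), (5, 1, 83_000),
    (10, 0, 120_000), (10, 1, 104_000),
    (15, 0, 150_000), (15, 1, 131_000),
]

def predicted_salary(years, is_woman):
    """Predict pay as the average historic salary of 'similar' people."""
    similar = [s for (y, w, s) in history if y == years and w == is_woman]
    return mean(similar)

# The tool never decides to pay women less; the history does it for it.
print(predicted_salary(10, is_woman=0))  # 120000
print(predicted_salary(10, is_woman=1))  # 104000
```

Nothing in `predicted_salary` encodes a decision to pay women less; the decision was already made by the data it was handed, which is exactly the intervention point the talk is gesturing at.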
And then in fact we may be at this moment where we need to think about a different applied science, one that raises a new set of questions and gives us a new set of tools for how we critically interrogate that space. It isn't about replacing computer science, because we still need it. It isn't about replacing engineering, because we need that too. It's about saying this new set of technologies requires a different set of knowledge. And so what do I think that knowledge looks like? Well, I think it frames around three distinct questions. And I know Brian doesn't like these questions terribly much, but that's okay, because we're gonna get used to them together, collectively. And the reason I've picked these questions is because the thing about all those other applied sciences is that the names emerged later. No one knew what they were called when they started. The answers became clear over time. And in most of them there were clear questions that started the conversation. And I think there are three questions here that are critical. And they're questions that come both out of a technical preoccupation and also out of ones from other disciplines. The first set of questions is around the notion of autonomy. What does it mean to talk about something as being autonomous? Our language defeats us. We talk about self-driving cars and then we say the car will stop itself. Okay, so what's wrong with that sentence? The car now has a self. And I don't know about you and how you feel about your car, but the notion that my car has a self is a really interesting proposition, and possibly not the one we mean. And while that may sound like it's a linguistic problem, it's actually got really interesting implications. So how do we think about the notion of autonomy? What does it mean as human beings to imagine there is something else near us that is autonomous? 
Way back in 1949, Turing, Alan Turing, computer scientist, though in those days he would have been a mathematician, wrote an extraordinary paper about artificial thinking and thinking machines. And in it he says humans' greatest fear is the notion that there is something as intelligent as them working somewhere else. He says this fear is even worse in smart people, because they have more to lose. Elon Musk, I'm looking at you. Hello. So he argues that the biggest challenge about autonomy is that as human beings we don't know how to grant it to something else. We don't know how to say that something else could be like us. And all of our language here is a challenge. Certainly in the Western tradition we like to be at the top of the pyramid, and then there are notions about things that are autonomous around us. Even the language is a problem here again. What does it mean to be autonomous? Autonomous in what possible schema? And, oh, by the way, let's imagine that comes out of a very particular intellectual tradition. I have a colleague who's a futurist in Sydney, and we were joking the other day about what it would mean to imagine a Buddhist AI, an artificial intelligence that was co-emergent, on the basis that the notion of self and other is a very particular intellectual frame that isn't shared globally. So how do we think about autonomy here? And, oh, by the way, that has implications for how we build these systems technically. I'm willing to bet at least a few of us in the room have had conversations about self-driving cars with other people, and you realize that that language has already shaped the way people imagine it. I had a conversation with a colleague of mine in South Australia who told me that there would be trucks that would self-drive from Adelaide to Alice Springs. And I had to cheerfully say, they will stop at Pimba. And he said, why is that? I said, because there is no 4G network after Pimba. And he said, but they're self-driving. 
And I'm like, yes, with a network. And he was like, really? I'm like, yes. And oh, by the way, have you seen the road? For cars to work really well, you need to have lines painted on either side of the road. You need to have clear architecture. He's like, but it happens in Germany. I'm like, have you seen the roads in Germany? They're nicer than ours. And they have better connectivity. Shockers. So we have all these challenges, right? What does it mean to think about things being autonomous? It's both a technical set of questions, but it's also a public policy set of questions, and it's a regulatory set of questions. If they're autonomous, where does the liability lie? If they're autonomous, what is our responsibility? And frankly, there's the human question of if all these systems are autonomous, what's it gonna feel like for us? The first time my team and I did fieldwork, we looked at how humans felt as pieces of their home started to have the capacity to do things without the homeowner involved. In this particular set of instances, it was smart refrigerators talking to the electrical company. The first thing people said to us is, wait, are you telling me my fridge is gossiping about me behind my back? Kind of like, yeah, sort of. And then you realize this system is much more complicated than it appears. So we have questions about autonomy. There's a second set of questions about agency. So if we imagine that we are building technical systems that have some capacity to do things without human intervention, it's the closest way I can get to articulating that without saying the word self. So we have autonomous systems that are doing stuff. How far are we willing to let that happen? What is the degree of effectively self-empowerment we're gonna allow for these objects? How much agency do they have to make decisions on their own? Even that language is hard, right? But is that car allowed to go to the edge of the ACT? 
Is it allowed to go into New South Wales if it's regulated in the ACT? Does it get to go into New South Wales? Do New South Wales cars get to come into the ACT? At what point do you, as a human, need to be involved in the system? How do we think about where the system is allowed to make decisions and what is the nature of those decisions? Are some of those decisions contingent? And then, oh, by the way, there will be a moment with the whole lot of these systems where they're proceeding without you at all. So how much room do they have to do that? And what are the mechanisms by which they're trained? I have colleagues in the United States who are building a really early test of an intelligent travel system. It's a tiny little agent, a little, you know, intelligent bot, basically, that will navigate the internet for you and get you better plane tickets. So they've trained this intelligent agent in three different ways to see the difference. So one of these intelligent agents was trained using chess as its strategic mechanism for thinking about negotiating a plane ticket for you. One was trained using Go and one was trained using a first-person shooter. Now, guess which one got you the best plane ticket price? Hmm, the first-person shooter. Guess which agent no one wanted to deal with again? The first-person shooter. So we have this really interesting challenge, right? If we build these systems and they are agentful and they are acting on our behalf, how do we feel about that? Do we know what they are asking for on our behalf? How are we training them? What are the mechanisms? Again, how do we think about all of that playing out? And then last but by no means least, because it turns out to be in some ways the most important question here, and I have to thank my colleague, Rob Hanson at CSIRO, for this particular phrase, because he's right, the last question here is how will it be safe? 
And not just what does it mean to think about a self-driving car that doesn't kill people? That's a good first step. But it's also about how do we think about trust, safety, security, risk, liability? And we see these conversations unfolding globally. The German government released a remarkable document about six weeks ago, laying out their framework for semi-autonomous and autonomous vehicles inside Germany. It reads as a very different document than the similar ones in the United States or Australia in terms of where the locus of liability is, in terms of notions about safety, in terms of provisioning around risk. So these conversations are going to unfold, and some of them are public policy and regulation, but some of them are more than that, about how do we want the world to feel? How do we want to think about not just whether these systems are safe, but whether we are safe around them? Because ultimately, we're also building a world that will be all around us. So how does it feel to think through all of those things? And who are we going to trust to do that work? One of the complicated pieces here is that for a very long time, we were dealing with objects. And for better or worse, we mostly know how to critically interrogate a thing. You can look at a car and go, that car looks like it's safer than this car, that car is of a vintage where it probably has airbags and seat belts, this one does not. This car is clearly configured to be safer. We can physically articulate the object and know what it means. It is a very different proposition to try and do that with an algorithm. Most of our regulators don't know how. Most of the people who work inside the companies that build them don't always know how to take them apart. Much of what sits inside these algorithms is corporate confidential or an industry trade secret. And yet, those algorithms are coming and they are bringing with them ideas about the way the world should work. 
And as Australians, we have policed our borders in complicated and, in some ways, noxious ways for a very long time about what gets to come across those borders. Some of us in the room are old enough to remember various forms of censorship of literature, of what books were appropriate to come to Australia and what weren't. We still enact really tough biosecurity rules about what kind of plants and animals can come into Australia. We do similar things about foods and medicines. We have not yet thought about what it might mean to apply the same test to some of these technologies, when what is lurking inside of them may be values that are as important to us as our biosecurity and our safety. You know, I've been gone a long time but there are things about Australia that remain remarkably stable. One of them is a long-standing cultural tradition and commitment to ideas about fairness and equity and social justice. Not every algorithm that's coming here embodies those values, and there are questions we might want to ask. How we would think about doing that as a form of border security is a radical proposition, but in some ways it is part of our long-standing history of how we think about what comes to Australia and what doesn't. So three areas of research. One around ideas about autonomy, one around ideas about agency and one around ideas about assurance. Every single one of those areas requires an incredibly broad set of conversations. Requires bringing people together from the arts, the humanities, the social sciences, from law, from engineering, from computer science and putting all of it together. And so hopefully that's what my little institute is gonna do. So I got asked today by someone, okay, well that's nice. What are you gonna do exactly? I think first step, buy shoes, drink, sleep. 
Once I've got past all of that, I wanna take the best of what I learned in Silicon Valley about how you do something a little bit more quickly than usual and a little bit more aggressively. So I know that the Canberra Times said I had a 10 year agenda. Five. Gonna get it done in five. 10 feels like an impossibly long amount of time. And so here I wanna borrow on lean startup methodology as well as the thing that feels very dangerous to say in Australia, which is rapid-turnaround failing. I know, novel concept. Fail early, fail often and move on. And actually go through the process of saying, well I think those are the three questions, and when I've tested them with my various colleagues back in the US, both on the academy and the industry side, they all say yes, but I wanna go see if it's true. And I need to go find out if there are people from other disciplines who want to be in those conversations. Because frankly my suspicion is there's a bunch of philosophers out there talking about autonomy who have no idea that they're part of a conversation about the 21st century, but I'd like them to have that conversation with us. So there's a piece that says how do you go find the right people to have those conversations with? How do you convince them that Canberra is a really good place to come to have those conversations? Of course that's easy because Canberra's a lovely place and everyone should come here to have those conversations with us. Yes, Canberra's a lovely place. Really, you're supposed to agree. Canberra's a lovely place, everyone should come here to have those conversations, exactly, yay. So you find the right stakeholders, you find people who are interested in this intellectual space, you bring them here. 
I think in some ways the biggest challenge is actually about culture, which is that for a very long time, academic culture has not necessarily been focused on building things bigger than oneself, and not necessarily outside one's discipline. The request here is in some ways that this is an act of building something new, not reproducing the old, and of taking all those disciplines to build a new thing. And frankly, while the Institute may start as an interdisciplinary thing, when it's done what it will have done is built a new discipline. And finding people who want to participate in that way isn't easy. So step one, find co-interlocutors and fellow travelers. Step two, verify and test those questions. Step three, build a new body of knowledge. See how it gets harder? But I make it sound so easy. Build a new body of knowledge and a new set of questions. Train up a generation of thinkers and doers and makers and release them into the world. And by my reckoning I get that done in 2022 and then I can go do something else. So in order to make all of that happen, there's a couple of key things that are necessary. One is that there will be people in this room, there will be people out there on the internet, there will be people who will find this later, and I need everyone's help. This is not something I'm prepared to do alone, nor do I think I should. Like I think it's crazy to try and do it by yourself. Most of the best applied disciplines were not built by single human beings, nor were they built at single universities, which is comforting. So my request, I guess, before I stop talking, because Brian made me promise I would, is that I need everyone's help. I firmly believe this is a thing we can do. I firmly believe we have a limited window of opportunity in which to do it. 
And I firmly believe Australia is the place to do it, because whilst it is tempting to think this conversation should happen in the valley, the thing I know about the valley is it has its own center of gravity and this would never live there. So part of the best of the tyranny of distance is that it makes just a little bit of room to have a slightly different conversation. So I hope you'll all have that conversation with me. Thank you. Thank you, Genevieve, for sharing your vision with us tonight. Next I'm gonna have Gillian Bradford come and say a few words before joining Professor Bell in conversation. Gillian, as already indicated, is the ABC News Bureau Chief in Parliament House, managing the operation across radio, digital, and television. She's worked in the Canberra Bureau on and off for the last 15 years, and has been a reporter on AM and PM, TV News, and a producer on 730, and the documentary series, The Howard Years. So please welcome Gillian Bradford to the stage. Vice Chancellor, may I say what a pleasure it is to be here in the company of the team of underachievers that you're continuing to assemble here at the ANU. We are so fortunate, so fortunate to have this world-class institution here in the nation's capital. And I have just rolled down from the hill this afternoon after another question time, which continues to discombobulate all of us. And I think there is a large degree of comfort that you are here focusing on the real challenges facing the nation and the globe. I am delighted on behalf of the ABC that Professor Bell has agreed to be our 2017 Boyer Lecturer. Each year since 1959, the ABC has sparked national discussion about critical ideas with the Boyer Lecture series. The roll call of names is just extraordinary. From Sir Gustav Nossal, Dame Roma Mitchell, Bob Hawke, Justice Michael Kirby, Manning Clark, Dame Quentin Bryce. 
And in October, Professor Bell will continue this proud tradition by interrogating what it means to be human and Australian in a digital world. Episode one will be available on the 3rd of October, and you can subscribe to that anywhere you get your podcasts. And if you can't get enough of the good Professor after today, you can also jet up to Sydney, where she will be giving another address in the ABC Studios on the 21st of October. And if you can't attend the lectures, for the first time, I think the ABC is calling for some input into the Boyer Lectures. So all you need to do is go to the Voice Recording app on your smartphone, which will be a test of your technological prowess. Tap the red button to record, share your hopes and fears about technology, then email it to the Boyer Lectures. Do that by the 9th of October. So Professor Bell, you're going to have to come up on stage with me now where I get to interrogate you. I feel like we should wander around like Donald and Hillary during the election debate, and we can sort of manoeuvre around. I want to know which one I get to be. Can I be Hillary? You can definitely be Hillary. Thank you, good. Now, being a journalist, I love it when academics generate headlines. So you liked me last week. I liked you last week. So let me give your own words back to you. Genevieve Bell has described the focus on science, technology, engineering, and math skills as quote, one of the worst things we have done in the last decade. Do tell. That was a good moment for me. I sat on a panel at an AFR event. I went into the sound studio. I came out three hours later discovering that I hated STEM, which was news to me, frankly, as someone who spent 20 years working in a tech company and is now a professor of engineering and computer science, the Florence Violet McKenzie chair, in fact. 
Listen, I mean, I think where I was trying to go with that thought wasn't about STEM, it was about the lack of balance we have in how we think about what we're funding in universities. For me, as someone who spent 20 years in the tech field, I'm critically aware that one of the biggest sets of challenges we have is about the conversations we're having and about who isn't in the room to have those conversations. So for me, it isn't about saying defund STEM, but it's about saying don't fund STEM at the expense of all the other disciplines, because frankly, those other disciplines are also of critical importance to our future. Now, you've come into the nation's capital probably not realizing that your colleagues have spent a long time trying to convince the politicians up on the hill to take note of STEM and build science into their thinking. If you had their attention, albeit very briefly, because their attention spans are quite short, what would you be saying to them about what they need to do in terms of those public policy settings? So I'm sure I should start by saying I hold an American green card and a British passport in addition to my Australian passport, so you may not want to listen to me. Listen, I think the correct answer there is one that says, we absolutely need to be investing in STEM, but we can't do it in some ways at the expense of the rest of the disciplines. It turns out that many of the critical breakthroughs that are going to need to be had over the next decade are about the intersections of science, technology, engineering, and math with things like economics and public policy and law. And frankly, other things that we're not so good at thinking about, like philosophy and the arts, because those are both disciplines that generate knowledge and ideas about what the world is about, and ways that provide a critical voice onto what technology is doing. 
And frankly, I don't think it's enough just to say we need to be a STEM nation without saying to what end and what is the Australia we want to build. It's not just about technology, it's about ideas. So for me, I think you have to sort of balance those things. Now, there's obviously a number of pieces that have been really good starting points. The national innovation and science agenda is an excellent first place of saying, government has a role in thinking about how you create an environment in which innovation can happen. And we know there's some really wonky kind of things that need to happen there around tax incentive structures, around ideas about how IP can be managed, ideas about how ideas can move between here and elsewhere. There are pieces of public policy that need to get built. And there are ways of thinking about how we create a better, well, technological infrastructure for that to happen. I'm not gonna be the first person in Australia to suggest that our national technological infrastructure, let's agree, isn't perhaps what it should be. So when you're out on the highway between Adelaide and Alice Springs. I was thinking of when I was in O'Connor, where I get ADSL. So I've come back from America to the 1990s. All I'm missing is a modem noise. So there is a little piece there where I kind of go, oh, it's hard to think about how to drive technological transformation when we don't have all the working pieces. And I know it's unfashionable to say that, but I think there's a piece where the reality is you can only do so much by bootstrapping ideas. Let's get to something a lot of people are terrified of when it comes to technology, and that is jobs. The fear that robots will take over our jobs, and that is in the not too distant future. What do you say to that? Like part of that fear is not unreasonable. Technological transformations inevitably change economic realities. Now, I think some of the ways we talk about that are probably a little more complicated. 
There's a reason why, certainly in the United States, the trucking industry in particular has been very vulnerable to conversations about transformation of technological infrastructure, because it turns out that driving a truck is one of the largest categories of employment in the United States. And it's one of the very first places we will see significant change, and a cadre of workers for whom the notion of reskilling is difficult. So you've got multiple pieces of that. Technology will ultimately change economic factors. It always has. Now, does that mean it will proceed the way it has in the past? No. Are there pieces of that conversation that are unnecessarily hyperbolic? Absolutely. Are there places where we aren't good, as technologists, as leaders, as journalists, at working out both why those fears happen and what it means to think differently about them? Yes, and doing better would require having a more subtle and nuanced conversation than anyone wants to have. It's much easier to say AI, existential threat, we're all gonna die, robots are taking your jobs, than it is to say the reason we think of AI as an existential threat is because we have 250 years of Western intellectual tradition, sparked by the Golem and the Frankenstein myths, that make us believe that things that aren't us are dangerous. That's already not a sound bite. And so what about the notion then, if jobs are going to disappear, that we need to put a tax on robots to ensure a basic wage that's gonna stop us all turning to crime and having revolution? It's already happening in some places. So there are already jurisdictions that are thinking both about how do you limit certain kinds of skill replacement? How do you think differently about the tax structure? I can also tell you that for better or for worse there are certain kinds of jobs that will continue to be done by humans because the cost of building technology to replace humans is actually quite high. 
So this is an interesting place where some of the consequences of certain sorts of artificial intelligence are exactly to Turing's point. They replace certain kinds of white collar jobs before they replace blue collar jobs. It's easier to build an AI tool that will do paralegal work than it is to build a robot that will fold laundry. Turns out the paralegal replacement is much easier to build than the robot that folds the laundry. And so unfortunately, there are going to be the same kind of inequities we have seen proceed in the past, where this next technological wave cleaves along in some ways the same lines that it always has, around gender and class and race and location. And are our universities skilled up enough? Because of course there's a lag between what people are being taught at university and where those jobs are in the pipeline. How are we on that trajectory of people doing degrees that are going to be absolutely no use to them when they graduate? Sorry. I hope not. It'd be a terrible thing to say yes to. Listen, I think there's some interesting ways we could look at the degree base we have in Australia. We have an incredible deficit of engineers. We are not producing enough engineers out of any university in Australia to meet the needs in that space. And we probably have some surpluses in other places. So there's that piece. I also have been really interested in some of the conversations I've had in the States recently with my colleagues, again in Silicon Valley, about what they're looking for in their next generation and cohort of employees. And it's been really interesting to hear them reflect on what they're looking for, which is not just about a knowledge set but is also about a particular set of, I'm not quite sure how to describe it, not attitude or orientation exactly, although they frequently talk about those. There's the expectation that when you employ someone from a university, they have a skill set inside their discipline. 
But there's also an increasing sense that my colleagues are looking for people who know how to do a set of other things. Work on teams. Work on teams with people who don't share their disciplinary background. Be able to manage through sort of ambiguity, transformation and high change. Be able to communicate their ideas effectively. And that's an interesting constellation of ways of being that aren't just about knowledge formation. And then frankly, the other piece of the conversation I've had with a number of my colleagues stateside has been about how we build in both an ethical and critical thinking component to most of our disciplinary practices. So it was interesting to see that Harvard and MIT have added modules around ethics to both their CS and engineering programs. And part of what I've been hearing from my senior colleagues in the Valley is that they have employed computer scientists and engineers, but those same people don't know how to start asking questions about what they're building. That isn't about the technology itself, but about is that decision we just made there in that line of code actually a decision that has human scale impact and do we need to surface that conversation? I mean, the moment when Google realized that building self-driving cars wasn't just a technical challenge, it was an ethical one, was the moment at which not only did they call the philosophy department at Stanford and ask about the trolley problem, but they also went, hmm, maybe we need an ethicist, maybe we need two, and maybe we need to start having some different conversations about the systems we're building inside the company. And that suggests to me that whatever we are doing inside Australian universities, the challenge isn't just about the knowledge, it's about the ways we think that knowledge should be transmitted and about whether we are also training people to be good critical thinkers. 
So tell us about the driverless car, because it is the sexy example of where we're headed and does fit in with your 3A institute, all those questions apply. So what is it that you picture in a decade, two decades? Well, so I think some of the most interesting things about the self-driving car example are about the things we haven't stopped to kind of contemplate. I mean, I read, as I'm sure some of us in the room did, with some interest when a major European car company brought a self-driving car here to Australia and was testing it just outside of Canberra, and it spent a lot of time hitting kangaroos. Not so good for the kangaroos or the car or the engineers. And the conversation became, well, how did you not know that there were roadside hazards? And the engineers said, we built a cutout. So we basically, we built a piece of code to simulate things that might come at you from the side of the road. And we modeled it on caribou. You're like, yeah, okay. So caribou gets you to cows, horses, goats, elk, moose, deer. Good, maybe wombats. I'm not quite sure about that, probably not. But it really didn't get you to roos, because of the bouncy, bouncy bit. And the problem is, of course, when the roo goes up like that, the car sees the horizon and thinks there is nothing there. And then the roo reappears, and not good. Again, just generally not good. Turns out writing an algorithm for kangaroo movement was a whole new problem that no one had encountered before. All of which sounds incredibly silly until you realize that all of these technical solutions start somewhere and that embedded inside those technical solutions is a country. In this case, well, somewhere where there's caribou. But imagine any other place where you've built in a worldview. I worked on technology a long time ago that assumed that the average household was the size of the average American home, which meant that technology didn't work well in Asia, just for instance. 
So self-driving cars, they already have a country. Before we've even got to what it will be like to have them on the roads, they already have a built-in worldview about where they should be and what should be around them. And the answer is not bouncy bouncy kangaroos, but happy caribou. So we've already got that sort of challenge, right? You have a second order of challenges about how you have built in driver assistive technology where the driver is not paying attention. So one of the things about human beings is it takes us a while to get our heads back in the game. Not a long time, we're not talking 10 minutes, but we are talking more than five seconds. And so if you have cars that require humans to come back into control, it's actually very hard to modulate that properly. So you've got that problem. You have the sort of other set of problems that say actually self-driving cars are not a contained technology. They require a network. They require a reasonable flow of data. They require a well demarcated road transportation system, all of which is a little tricky to imagine. And then my favorite kind of piece of that is much like other technologies, we're not gonna get to a standard as quickly as you might hope. So, fascinatingly, the accidents that are occurring with self-driving cars at the moment are of two kinds. There's one, which happened in Palo Alto not that long ago, where self-driving cars from two different companies collided, because it turns out there are two different technical solutions to the same problem, the same way, if you've ever had an Apple iPhone versus an Android or a Microsoft operating system, you know they get to the same solution ultimately, but they engineer it differently and it feels completely different. So now imagine cars are doing the same thing. And then the second one, which is equally interesting, has to do with how you imagine what engineers lovingly call the legacy problem, which you might think of as us. 
So, if you have cars on the road, some of which are semi-autonomous or autonomous, and the rest of the people on the road are us, it turns out the way we drive and the way semi-autonomous vehicles drive has to do with ideals versus practice. So, the semi-autonomous cars are taught the rules, like the rules, capital T, capital R, from the rule book with a B, which means they stop at stop signs, like fully stop at stop signs, like come to a complete stop at stop signs, unlike most of us in the room who do what they lovingly call in California, and I'm sure we call it here too, a California rolling stop, where you go, yeah, I'm just slowing down, but I've not come to a complete stop. So, many of the accidents that have happened with self-driving cars of late in Arizona and California have to do with humans running into the cars because the cars aren't behaving like other humans. And so, there is a bit that says building out a world where we still have legacy, i.e. us, alongside all that technology is really pretty complicated. And then, of course, last but by no means least, there are all the challenges about how do you architect for accidents and catastrophes? So, you build a self-driving car, the last thing you wanna do is have it hit someone or something, but how do you start to model that? Well, some of the ways that's been done up until now is to make basically information trees to kind of go through various situations. So, the problem with that is that if the choice is hit the person, don't hit the person, that's an easy decision, don't hit the person. If the choice is don't hit the person, but in not hitting the person, you damage people inside the car, that's more tricky. If it's a trade-off between don't hit the person, don't damage the people inside the car, and don't hit that very expensive building, on the one hand, you would like to imagine we're privileging human life. 
On the other hand, you can imagine, depending on who is underwriting the cost of that car, they may have a different insurance mindset about where the level of risk and liability lies, such that you can well imagine a world in which you might hit a person because hitting that person, and I mean this as a scenario, is cheaper than hitting the building, or you could decide who are the people you save inside the car. Most of us who've driven have at some point inevitably had that moment where you put your hand out to stop the person next to you, or this hand if I was driving in the correct place in Australia, put this hand out to stop the person next to you moving forward, because we all try to save someone. If the car has to make a decision about who it saves, who's it saving? The driver, the kids, the old people, because all the knowledge of the world is in their heads, the reproductive woman? Someone is making those decisions. And those are not technical decisions. Those are decisions about ethics, morality, the trade-off between a commodified world of insurance and a human world. Which is going to be the first country that gets in this game the quickest and has a critical mass on the roads in a short space of time? Well, what's interesting is that it actually looks like some of the first places to do this are going to be more around transportation of goods than transportation of humans. So trucking is an easier place to think about, because you can have roads that are just targeted to trucks, which is a little easier. And most of the more interesting experiments are happening in Europe. But I think it's worth remembering that people have thought about other ways of using AI to frame transportation that predate semi-autonomous vehicles. I think Singapore's a really interesting example here. They went to a system more than 15 years ago where they tagged every single car on the road. Those tags are live. 
You can now do a whole-of-Singapore traffic visualization and thus do things like determine in real time what the congestion charge to downtown should be, as well as navigate people to the nearest parking structure and time the appearance of public transportation. It's a holistic system, and it says that not all ideas about AI and transportation need to revolve around making vehicles autonomous. They may be about making transportation systems smarter. And frankly, I think if we spent half as much time talking about making smarter transportation systems as we spend talking about autonomous vehicles, that wouldn't be a bad trade-off. And finally, before I get a dirty look from the Vice-Chancellor about time, because I'm not sure how I'm running: can Australia be in this game? Doesn't the great weight of data sit in the US and China, so that we can't even be in this? It's an interesting question. When I left Australia 30 years ago, we would have thought that was the best reason to give it a go, and I'm a child of the "we should just have a go and see what happens" school. I don't quite know what happened in the 30 years I was gone that you'd even ask that question. But I think the answer is: of course we can. The interesting thing about this entire piece of the puzzle is that it doesn't necessitate building hardware. Where Australia has been at a deficit in past technological waves is that we haven't had the scale to build our own hardware, and you can see that across the multiple kinds of things we've tried; we've never had a big enough market to do it. But this is about intellectual horsepower, and we have plenty of that. We have strong universities with long-standing traditions in many of the disciplines that matter. We have leading-light thinkers in this country. We have things like Data61 that are already in that conversation. So in some ways I think this is one where we could be in it.
And frankly, I think being in it is going to look like a number of different things. There's clearly the stuff I'm doing. But Australia is also an incredibly important node in the map of the digital humanities, which is another way of tackling all of this and, in some ways, another emergent applied science for thinking about data, and we have lots of people working on that here. The thing about data sets is that they move. Unlike hardware, where getting it around was actually quite tricky, data sets are a little easier, ADSL notwithstanding. You can move data and manage data in ways that make it easier to imagine the experiments. I also think we have a long tradition of building interesting things here, and I really want this to be one of them. And I firmly believe that some of the things that get built in this world will matter to us as Australians, and that if we don't get our heads in the right game here, we're going to end up awash with technologies that don't encode our values. I'd like to be part of having a different reality emerge. Professor Bell, we look forward to you giving it a red hot crack. Me too. And to the Boyer Lectures coming up soon. So, thank you. Thank you, Genevieve and Gillian, for that wonderful conversation. To wrap things up, I'd like to invite Data61 CEO, Adrian Turner, to the stage to give a vote of thanks on behalf of Data61 and also CSIRO. Adrian. Wow. So, before I start, I'd like to acknowledge and celebrate the first Australians on whose traditional lands we meet, and pay our respects to the elders of the Ngunnawal people, past and present. Thank you, Genevieve. Thank you, Brian. Margaret, Eleanor.
And I'd also like to thank Bob Williamson, who, and this has been a bit of a theme, asked the right question right up front: should we be talking with Genevieve Bell to help convince her to come back to Australia? So, Bob, thank you. There is absolutely no doubt that we are at a critical juncture as a country right now, and not just because of the things that Genevieve spoke about. There's also no doubt that Australia's future will be underpinned by deep science and technology. In the face of that, we need to be courageous enough to ask the right questions and to challenge existing norms and existing thinking. We need to push the boundaries of knowledge and, above all, we need to back ourselves in core places to lead. I think Genevieve said it really well: we have the intellectual horsepower. And I can tell you, having spent 18 years outside the country looking in, all of the ingredients are here for us to lead in core areas like the ones we heard about tonight. Above all, we need to challenge the status quo. If you think about how we got to where the internet is today, it's because there wasn't enough of that questioning up front. We ended up with a model online funded by advertising, which led to the unintended consequence of highly sophisticated, effectively surveillance, systems being built to monitor and track everything we do online. And what we're moving to is orders of magnitude more complex, and it requires deeper, more critical thinking before we get there. The fundamental question for me that was asked on stage was: what is the Australia that we want to build? And this is where Data61 comes in. We play a small role in helping Australia create a data-driven future, and our philosophy is to be generous collaborators. We do that through the amazing partnership we have with ANU. Thank you.
And thank you for the vision and foresight of ANU in funding and helping found the predecessor organization that has become CSIRO's Data61. But we won't do this alone. Another thing that really stood out for me in the conversation was the need for speed. I think it was Genevieve who talked about the agile methodology and the need for speed. The world is not waiting for us; if we're going to do this, we need to move at pace or better. And just to finish, I can tell you that it has been an absolute privilege to have been involved in discussions with Genevieve as her thinking has galvanized, since long before we started these conversations. Embracing her original thinking, we, as institutions but also as a country, need to do everything we can to help lift Genevieve's Institute, and we're thrilled to be a founding partner of it, to help clear obstacles. And I think we all need to be a bit more comfortable asking the uncomfortable questions. Can Australia be in this game? Of course we can. No question. Thank you. All right, ladies and gentlemen, that brings today's proceedings to an end. But of course, we have much to look forward to: be ready for the Boyer Lectures, and be ready for agency, automation, and assurance, in some order. Thank you, everyone.