All right, no pressure then, that's what I like. Apparently, intellectual abseiling is now our destiny for the next hour. Listen, I too want to acknowledge that we meet on Ngunnawal Country, and I wanna thank Wally for that incredibly gracious welcome to country. I've spent 30 years living in the United States, and one of the great pleasures for me is to come home and get to be part of that ritual again. Whilst America is also a country that was previously occupied, that is never acknowledged. And there's something really powerful about the fact that we get to sit here and remember it every time we open an event. So I know sometimes it feels like a ritual, but I actually think it's a really important one, so I wanna thank Wally for that. I also wanna thank Brian for his gracious and pointed introduction. Most of you don't know quite how much is sitting underneath Brian's notion of saying we had a few hiccups. Can I actually have a quick show of hands of who was here last year when we kicked this off? Excellent, recidivists, that's what I like. Good, that means I have things to tell the rest of you. It's a little bit of a new adventure and a new journey. As Brian said, a year ago on Monday we met in a building just across the road to kick off the first of the ANU's Innovation Institutes. These were designed to be ideas incubators and a place where we could build new sorts of knowledge, create new kinds of experiences and find new ways of connecting those experiences up. And Brian's right, it's been an extraordinary adventure. It is certainly the case that were I to write the comparative ethnography of my 20 years in Silicon Valley and my two years at the ANU, I fear it would be a bestseller for all the wrong reasons. So let us imagine that potted ethnography is waiting. That's not this talk. What I thought I'd do instead was reflect on where we've been, where we are going, and the moving pieces that are involved in it.
And in fact, I wanted to start where I left off. Those of you in the room may remember that a year ago my very last PowerPoint slide was this. I said, we're launching the 3A Institute and we need everyone's help. And I want to stop and say thank you here, because it turns out so many of you came and found me. I want to thank all the people who have been incredibly supportive of me. I have to thank my Canberra family and my mother, Diane Bell, who is in the audience again, for all the ways in which they have made it easier to come home than it should have been. I need to thank Brian and the rest of the administration for being supportive even when, I think, they worried I might burn the building down. Brian may not remember this, but when he interviewed me for this job, at the very end of the interview I said to him, listen Brian, I'm capable of being a good girl. I scrub up nicely, but there will be a moment when I'm gonna come into your office and I'm gonna say, Brian, I have to burn the building down. And Brian looked at me and very sweetly said, I keep kerosene in the bottom drawer. And I thought, this is a man I may have to move for. And I turned up. So I want to thank the administration, and Brian in particular, for all the ways in which he has been incredibly good. I also want to thank the rest of the academic staff at the ANU, who I know have found me to be a slightly curious creature and not what they were expecting, but who have been very, very good to me nonetheless, in both public and private ways. I have to thank my colleagues in the College of Engineering and Computer Science, and in particular my dean, who is, as my team calls her, a "minor marvel". I say "minor" because she is tiny, not because she should be underestimated in any other way. She is in fact a force of nature, and it is both my pleasure and my privilege to get to work beside her every day. And I need to thank all of you who weren't in any of those places, but found me anyway.
There are people in this room who have written me notes. There are people in this room who have turned up in my office and offered help. There are people who sent me pronunciation guides, because apparently my Australian English is not what it should be. I thank you for that, although I don't know what to do with it; I will always say data badly. So I wanted to just say thank you, because I know that it is something to get all of that help from everyone. But it turns out we're not done. I'd like to just say thank you and stop, but the reality is that's not the case. Because it turns out a year ago, for those of you who weren't here, I started with this chart. This chart was published by the World Economic Forum at the beginning of 2016. It was the World Economic Forum's attempt to make sense of the last 250 years and to stabilize the future. It was their attempt to basically say, listen, the world is really quite neat and tidy. It's had three really clear waves: that first one with steam engines, that second one with electricity and mass production, that third one with computers. Oh, and there's a new one coming. Now, most of us in the room can look at that chart and see some immediate deficits. For me as a cultural anthropologist, deficit number one: no people, just technology. For those of us who are associated with universities, deficit number two: each one of those waves of technology necessitated the creation of a generation of scholars and practitioners to manage the technologies that arose there. Whether that's engineers and engineering in the first wave; frankly, both the branching into electrical engineering and also business and management schools in the second wave; or computer scientists in the third. Each one of those waves of technology created an interesting set of challenges, because it turned out that going from an individual piece of technology to something that operated at scale, and operated at scale safely, required people who didn't exist when the technology came into being.

If you just pick that very first wave and you think about the moment when Newcomen and Watt were building atmospheric and steam engines in England, there were no engineers. There were blacksmiths and wheelwrights and guys who knew how to bend steel and people who were experimenting with water under pressure. But there was no one who actually knew how to think about the whole system, let alone imagine what would happen when that whole system went mobile and would require train tracks, and thinking about adverse camber, and then imagining speed and safety and braking distances, and thinking about how you managed time in order to have shared train tracks. There was no one who did all of those things. They were just steam engines. And the reality is, each one of these waves has not occurred, of course, in neat chronological order. Sometimes they've been mapped on top of each other. And for me, the impulse that keeps driving the institute is looking at that thing the World Economic Forum says we're entering into now. What they call the fourth wave of industrialization, or the fourth industrial revolution. And they've always said it was characterized by cyber-physical systems. Always, as in, for two years they have said that. At the time when I first saw this chart, I didn't know what that meant. They've subsequently rebranded this as the age of intelligence. I'm not sure that helps, for several reasons; it raises questions about what came before it, just as a starting point. Nonetheless, I think it is safe to think about cyber-physical systems here as meaning an entire class of objects. And really, I think the challenge here is about how we get artificial intelligence to scale safely. The best way I can imagine this is to think that artificial intelligence is remarkably like that steam engine from 250 years ago.
It means one thing when it's stationary on top of a mine; it is something completely different to imagine it is now going to be inside objects that are running around doing things. And really, for better or worse, that's one way to think about cyber-physical systems. A year ago, I stood here and said, so listen, here's the thing: we built engineers. We built electrical engineers and computer scientists. What the hell are we building here? Like, what will this necessitate, and who's gonna do it? And I put a stake in the ground and said, I think it's gonna be us, right here. Oh, and by the way, I thought about what we're managing here: if before we were managing, well, basically mechanized industrial systems, then capital, then computers, I thought what we were managing now was data. I think we were wrong about that, or I was wrong, and in fact what we are talking about here is indeed managing cyber-physical systems. So what do I mean by that? Well, Brian just tipped his hand to owning one. Brian drives to work every day in a Tesla. A Tesla is an early version of a cyber-physical system. It is a physical object with a computational engine that doesn't necessarily drive it, but certainly can control it. And when Brian's Tesla is in autonomous mode, Brian is not in control of that vehicle. It is controlled by a set of software and a set of algorithms and, frankly, a little bit of early artificial intelligence. And when you think about cyber-physical systems, what you should think of here is an emerging class of technologies: a number of things that at the moment we wouldn't think were all the same thing, but one day will be. Drones, autonomous vehicles, robots, the smart elevators that I hate in Sydney. Those are all early instances of cyber-physical systems. And frankly, it was my contention a year ago, and it remains my contention today, that those systems require something new.
It's not gonna be enough to have computer scientists and electrical engineers; we need both of those. It's not enough to say that computer science will need ethics, because it does, and that's not sufficient here. And it's not enough to say we need to add some design thinking to engineering. That would be good, but still isn't the answer. The reality is this class of objects requires something new, something that doesn't exist yet. And so I stood on a stage not far from here a year ago and said, right, so we're gonna build something new. And I said there were four things that I needed to do at that moment in time. I said, I can't do it by myself, because that's depressing. Also, you don't scale that way. Also, I didn't wanna do it by myself. So I said, I need to find some people. And at the time I thought that would be an interesting challenge, in no small part because I needed to convince them to move to Canberra, which was mean. And that was the response you all had last year. Why wouldn't people wanna move to Canberra? It's great. I'm thinking, it's been a long, cold winter. So problem number one was: find people who wanna work on the problem. And who would those people be? The second bit was: well, it's all very well and good for you to say cyber-physical systems, but, like, do you have one more click down on that? Do you have anything more you can say there? And then I said, it's not just enough to identify what the questions are; you're gonna have to find ways to test and verify them. And what would that look like? And then, oh, by the way, because in fact this is happening at the Australian National University, not at a company in Silicon Valley, it's not just enough to build the knowledge. You actually have to train people, create the next generation of scholars and practitioners, find other people who wanna take that knowledge out into the world.
Brian has been talking a little bit recently about the fact that the role of the Australian National University is about intellectual leadership, not just academic excellence. And I look at that and I think this is a place where that is true. It's actually about how we create something that's a little bit different and a little bit new. So I gave myself this as my running list, and I thought I'd use it to architect the rest of this evening, to show you what we did. Step one was: find people. So a year ago I went, I'm gonna need to find some people. And then I had the problem of going: if you're building a new discipline, you can't very well say, I'd like the people who do cyber-physical systems, because they didn't exist. And I was lucky enough that already a year ago there were two people in the room who had found me, and a third who wouldn't leave my office and read every book I gave him. And I kept giving him more books because I hoped one day he would go away, and he didn't, and I'm deeply grateful for that. So we put out a call, those of us who were then together, to find new people. And it's an interesting process at a university to put out a call for academic jobs in a college of engineering and computer science where you don't require engineering and computer science as the degrees, where you suggest that you're more interested in people's attributes than their disciplinary excellence. And when I said I wanted people who were team players, who were collaborative and wanted to build something, we did have an interesting argument about whether that was counter to academic excellence. I, of course, believed it wasn't. And so as of today, astonishingly, there are 10 of us, and they're all in this room, so I want them all to stand up, because you don't get to sit down: Rob, Zach, Liz, Caitlin, all of you here.
So what you need to know is that this group of people represents what the next generation of university work could be. Because standing amongst you here, you have a nuclear physicist. You have someone whose expertise is in strategic foresight. You have at least one person who has an undergraduate and a graduate degree in civil engineering, as he puts it to me, concrete one, two, and three, as well as a PhD in sociology. You have someone whose undergraduate degree is in computer science and whose PhD is in human geography. You have people whose backgrounds are in interdisciplinary studies, in the law, in biomedicine and in anthropology. Oh, and that's not me. You have people in the room who can juggle. You have people in the room who do theater and improv. You have someone in the room who has spent five years doing scientific diplomacy. Obviously not for the Australian or American governments. And you have at least one person here whose background is in feminist theory, just like me. And it turns out if you wanna build a new world, you want all of that. You want people who range in age from 25 to 51. You want people who were born in Australia and in all other parts of the world. You want people whose careers haven't even started yet. And I'm really lucky, because all these people said yes to me. And you're all really lucky, because these people are the ones who are gonna build the next world. So will you join me in thanking them? And with that bit of ritual embarrassment done, you can sit back down again now. So that's my people. For the rest of this talk, I now get to say "we", not "I". And that makes me more happy than you can possibly begin to imagine. Because once we got together, we realized that the next question was, well, okay, what's your body of knowledge? And Brian will remember, as will Eleanor, that a year ago about now, because we were about 72 hours out from announcing the Institute, Brian's like, do you have a name for that applied discipline yet?
Come on, you know you've got a name. You're just keeping it from me, aren't you? What's the name? I'm like, Brian, I have no idea what to call it. Brian's like, that's not helpful. What are you gonna call the Institute? I said, well, I'm gonna name it after the three questions I think I have. And Brian took a very deep breath and went, okay. And he's learned those three questions really well: autonomy, agency, and assurance. And I thought those were the questions. Here's the problem. A year in, two things are still true. I don't know what to call the discipline. Despite your best efforts here, Michael Walsh, if you're in the room: when the people who babysat you as a child turn up and go, I think I can help you name that, you wish it were true. So far, Michael and I haven't got to an answer either. So I still don't know what it's called. We're taking bets. Brian and I think it might be cybernetic engineering. And I kind of like that. It gets to celebrate cybernetics, which has a delightful history. It has engineering in it, which honors my dean, who likes to tell me two things: one, that I'm building a new school of engineering, and two, that she's always right. So if it were just cybernetic engineering, she'd be very happy. I'm not convinced about that. As Brian and I know, it's a name already taken, but it is taken in Norway. So, you know, we may take it back. So, cybernetic engineering. What I also happen to know is that it turns out three As weren't enough. I've got two more letters, Brian. I'm really sorry. I promise not to stick them in the institute name, but there are two more questions. So, remember when I said cyber-physical systems? Imagine a class of objects that are physical, powered by computing, where that computing has an element of artificial intelligence, which means it doesn't always have pre-written rules, which means there is some capacity for those objects to move by themselves.
Which means in some ways the very first question that we came up with, the very first A, was the obvious one. It was the A for autonomy. And I think what is clearer to all of us now than it was to me a year ago is that what we mean when we say autonomy is really messy. Autonomy is one of those words that has extraordinary semantic slippage in English. I say autonomy; you think sentient, self-aware, kill John Connor, or the Doctor, or killer robots. It basically goes from self-aware to sentient to killing all of us in the room. That's something that happens in English. It doesn't happen in all other languages, but we can't hear autonomy without putting it into this much larger context. That's one problem. You might argue that autonomy is a term well-rehearsed in at least one discipline that is not my own: that would be philosophy. It's also the case that in tech companies, in Silicon Valley and elsewhere, everyone has an autonomous project of some description, and every single one of them is using that word differently. They have architected autonomy differently. They are implementing it differently. They are securing it differently. Autonomy is a word that now covers many technical solutions. Dr Parkinson in the APS doesn't like it when I say autonomy is a bit like the word innovation, but autonomy is a bit like the word innovation. It is the case that everyone now talks about autonomous vehicles. The Tesla solution, the one Brian has, is one where data is collected locally and edited at a central node; anything that is interesting or useful is pushed back down into the system. Those systems operate semi-autonomously, locally. The Volvo solution is that all vehicles communicate with each other, without a central hub, but share only certain kinds of information. Both of those are called autonomous vehicles. They're implemented completely differently.
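To make that contrast concrete, here is a rough sketch of the two fleet architectures described above: a hub-and-spoke model where a central node curates what is shared, versus a peer-to-peer model where vehicles exchange only certain fields directly. Every class and method name here is invented for illustration; this is not any manufacturer's actual API.

```python
# Illustrative sketch only: two fleet data architectures, both of which
# get called "autonomous vehicles". All names are invented.

class HubFleet:
    """Hub-and-spoke style: each car reports locally collected data to a
    central node; anything judged useful is pushed back down to every car."""
    def __init__(self):
        self.shared_model = {}              # knowledge curated centrally

    def report(self, car_id, observation):
        # The central node decides what is worth keeping...
        if observation.get("useful"):
            self.shared_model[observation["key"]] = observation["value"]

    def sync(self, car_state):
        # ...and pushes it back down into each vehicle's local state.
        car_state.update(self.shared_model)


class MeshFleet:
    """Peer-to-peer style: vehicles talk directly to each other with no
    central hub, and only certain kinds of information travel."""
    ALLOWED = {"position", "speed", "hazard"}   # only these fields travel

    def exchange(self, car_a, car_b):
        # Copy only the permitted fields from one car to another.
        for key in self.ALLOWED & car_a.keys():
            car_b.setdefault("peers", {})[key] = car_a[key]
```

The point of the sketch is that the network, compute, and regulation questions differ sharply between the two shapes, even though we use one word for both.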
They have different challenges for how we would think about the network, how we think about the compute, and also how we think about how we might regulate those objects, and who gets to. And frankly, it is clearly the case that what it means for objects to be autonomous is different in different parts of the world. So now we have different versions of autonomy lurking inside technical objects. So I still think the first question you have to ask of cyber-physical systems, and the first question that this new applied science needs to answer, is: what does it mean to talk of autonomy, and how would we build systems that embody it? If you think in the vehicle space, this is really about how a vehicle would function without pre-written rules, and what that would look like. If you imagine you can resolve that question, the second problem really becomes one about what I would call agency, or what you might want to think about as controls and limits. If the system is autonomous, i.e. it can act without reference to some other agent, what are the rules it's operating on? And who gets to determine those? Do they sit inside the object? Do they sit outside the object? Brian's Tesla isn't the best example of this, but if this were an autonomous vehicle, and it were here in Australia and allowed to drive on our roads, it would need to have at least two different modes: a mode for Melbourne and a mode for everywhere else. For those of you who grew up in Melbourne, you know one of the things that characterizes Melbourne, particularly in the CBD, is a willingness to fecklessly do something called a hook turn. Those of you who have done a hook turn know that it is a moment where you take your life into your own hands and you can feel your heart in your throat and you think, why do people do this? If you had a vehicle that knew how to do hook turns, it shouldn't do them anywhere else. A hook turn in Sydney would create chaos.
I'm willing to bet a hook turn in Wagga would be a subject of inordinate scorn. I think it is safe to say that imagining how you build multiple sets of rules into objects, and how those rules are negotiated, isn't just a matter of location sensitivity but also of context. And if you were to think again about autonomous vehicles in the case of Australia, it is certainly the case that there are institutions who might want to be able to control all vehicles on the road from outside those vehicles. Think of the country fire service or the SES, who might want to say, we need all vehicles off the road so we can get that fire truck through. Who gets to decide who gets to take all vehicles off the road, and under what circumstances? Oh, and by the way, how do all of those rules get made visible to the people who are encountering those systems? Do they need to be made visible? And if so, how? What is the moral equivalent here of the P plates and the L plates that we see on cars to warn us: do not get behind that person while they try to parallel park? What will that look like here, right? The third set of questions are still the same questions we had a year ago, Brian, which I think are the questions about: if you are building cyber-physical systems, how do we think about the bundle of things here that we label assurance? Because otherwise the list includes privacy, security, trust, risk, liability, ethics, manageability and explicability. Because it turns out, if you start to imagine systems that act on a set of rules internal to those systems, how do we now think about what makes those systems safe, and what makes us safe? Who gets to decide that? Is this just a matter of implementing an actuarial table, or is there something more to it? What does it mean to think about machine-coding ethics, if we could do that? What does it mean to think about how you might need to reimagine liability? Who is liable for these objects? Who decides that?
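One way to make those rule-set questions concrete is a toy sketch: rules keyed by location and context, with an external authority able to override them. The rule names, the locations, and the `EmergencyOverride` institution are all invented for illustration; who writes, updates, and publishes such rules is exactly the open question.

```python
# Toy sketch of context-dependent driving rules plus an external
# override. The rules and the override mechanism are invented.

RULES = {
    "melbourne_cbd": {"hook_turn": True,  "max_speed": 40},
    "sydney":        {"hook_turn": False, "max_speed": 60},
    "default":       {"hook_turn": False, "max_speed": 100},
}

class Vehicle:
    def __init__(self):
        self.override = None            # set by an external authority

    def active_rules(self, location):
        # An externally imposed rule set (e.g. "all vehicles off the
        # road") takes precedence over any local, contextual rules.
        if self.override is not None:
            return self.override
        return RULES.get(location, RULES["default"])

class EmergencyOverride:
    """Stand-in for an institution, like the SES or a fire service,
    that can pull every vehicle off the road at once."""
    def clear_roads(self, fleet):
        for vehicle in fleet:
            vehicle.override = {"hook_turn": False, "max_speed": 0}
```

Even in this toy form, the hard questions are visible: who populates `RULES`, who is allowed to construct an `EmergencyOverride`, and how any of it is made visible to the people sharing the road.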
How are these objects scrutinized? What does it mean to think about a series of objects that are in physical spaces where their safety is not obvious? How do all of those things get negotiated? Where does the role of government sit? Where does the role of public regulators sit, and where does the role of society? These are questions that sound like social questions. They're also technical questions. It turns out, as we start to think about cyber-physical systems, particularly those using artificial intelligence technologies, we start to think about learning techniques. How will these systems learn over time? That implicates two things: one, the stream of data that goes into those objects, but also the learning techniques those objects are using. At the moment, most machine learning is really just repackaged statistics. So if you know how to do statistics, machine learning is actually not that mysterious. There is, however, a class of techniques inside machine learning, of which deep learning is the first: basically, unsupervised learning. Whilst they offer inordinate potential, one of the challenges is that it's actually very hard to know how the computational object has gotten to the decision it has gotten to. More importantly, it's very hard to have that object get to that same decision twice. And more importantly still, it's very hard for that computational object to explain the point at which it changed state and the point at which a decision was rendered. So the idea of traceability, of backtracing, of algorithms that can, crassly put, unbox themselves, is actually a technical challenge. It's not just an ethical challenge. It is a challenge for computer science.
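A minimal sketch can show what traceability and reproducibility ask of a system: pin the randomness so the same inputs reach the same decision twice, and record an audit trace saying when and why each decision was rendered. This is an illustration of the requirement, with invented names, not a solution to explaining a deep network.

```python
import random

class TraceableModel:
    """Minimal sketch of 'backtracing': every decision records its
    inputs, the random seed used, and the decision rendered, and a
    pinned seed makes the same inputs yield the same decision twice."""
    def __init__(self, seed=42):
        self.seed = seed
        self.trace = []                     # audit log of decisions

    def decide(self, features):
        rng = random.Random(self.seed)      # pinned seed: reproducible
        score = sum(features) + rng.gauss(0, 0.01)
        decision = "approve" if score > 1.0 else "decline"
        self.trace.append({
            "inputs": list(features),
            "seed": self.seed,
            "score": score,
            "decision": decision,
        })
        return decision
```

For this toy model the trace is trivial to produce; the technical challenge in the talk is that for unsupervised deep networks, neither the pinning nor the trace comes anywhere near this cheaply.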
It's a challenge for people who work in this space, because it turns out these objects are gonna operate in highly regulated areas: whether it's cars on the road, whether it's financial services systems, whether it is my much-hated elevators, whether it is things that sit inside medicine, finance and the law, these are all highly controlled spaces. And so having algorithms that can explain how a decision was rendered is, in some ways, the only option here. Because heaven forfend we be in 2025, at the next Royal Commission into banking in Australia, and the CEOs of the Australian banks, to a woman, I say hopefully, find themselves saying: I don't actually know what happened to your super, the algorithm did it. I think we would find that as unsatisfactory as we have found the current set of answers. And this is a set where there are technical questions to be worked out. So the first questions remain the same. When you talk about cyber-physical systems, we need to resolve the issue of: are these objects autonomous? If so, what do we mean by that? How do we plan to implement it and secure it? The second set of questions remains the same too: what are the rule sets? Who gets to determine them? Who will update them, modify them, make them public, scrutinize them? And the third set of questions too: how will we think about safety, security, ethics? There are two more questions that have become really clear to us as we have spent the year, in some ways, talking these things through. The first one is about the necessity for metrics. If you think back to that chart from the World Economic Forum of the four waves of industrialization (steam engines, electricity and mass production lines, computers), one of the ways all of those systems were measured was by productivity and efficiency. Those were, in some ways, the dominant metrics of the last 250 years. Did the thing do it faster, cheaper, for less, in some way? They were dominant metrics.
I think there is an argument here to pause briefly and consider what other metrics we wish to see at work. Part of the reason, I think, has to do with the fact that having spent 20 years in a manufacturing company, I know you make what you measure. And so thinking about what the measurements are drives what the behaviors are, or at least that is one way of thinking about it. In this instance, if we look at some of the early examples of cyber-physical systems, autonomous vehicles for instance, the language we hear from manufacturers and regulators isn't about efficiency, it's about safety. These objects will be safer than humans, not necessarily more efficient than humans. It's certainly the case, if you listen to some of the conversations about drones, whether they are in the air or on the water or indeed under the water, that it's about: can we see more of the world than the human can see? Can we extend our capacity to know things? Can we get more data? I'm not sure, again, that's about efficiency; it isn't necessarily more efficient data, it isn't just volume. It's also the case, if you think back to that prior example about certain kinds of machine learning techniques, that there are things we already know are emerging in the artificial intelligence space that are issues in terms of a different kind of metric. So according to one of my colleagues, this year, in 2018, about 10% of the world's electricity budget is spent in server farms. So 10% of the entire world's energy is spent in server farms. That's an extraordinary number. It's more remarkable if you imagine that we are moving into a world of greater technological density, and for those devices at the moment, even if you could solve their energy constraints, the density and energy trade-off is quite high.
It is also the case that some of the algorithms already in use, I'm thinking of Bitcoin mining among other things, and also deep learning, are techniques that are really energy intensive. They require a great deal of energy to use. It is interesting to contemplate what would happen if we decided, for instance, that a particular set of artificial intelligence technologies could only be used if they sat inside a particular energy budget. What it would mean to think about sustainability or energy requirements as a metric here is fascinating to contemplate. We didn't get to thinking about efficiency with steam engines and railway trains until there were very few trees left that you could chop down, and until they had moved away from a steady source of coal. We actually know that there were an inordinate number of problems; had efficiency in terms of environmental spend been the metric, you might have developed a different solution much more quickly. So starting here and saying, what is it that we want from these systems? Maybe one of the things we want isn't just about efficiency; it's actually about energy. That would be an interesting starting point. Likewise, it would be interesting to say that we are interested in a better quality of data, that we're interested in different kinds of outcomes, that this is about safety. All those are conversations we should be rehearsing, and I think thinking about what the metrics are for these systems as they are being built is an interesting fourth set of questions, one that at least allows us to interrogate existing objects and new objects through a different kind of lens. And then, last but by no means least, I think what's been most striking to me, and I hope to the rest of my team, is that it's one thing to talk about systems and the computation that sits within them, but these are also physical systems. They are systems that we will encounter, and that will encounter each other.
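The thought experiment above, admitting a technique only if it fits an energy budget, can be sketched in a few lines. The budget figure and the per-technique cost estimates here are entirely invented for illustration; the point is only the shape of the metric, kilowatt-hours rather than wall-clock time.

```python
# Illustrative only: treating energy, not throughput, as the admission
# metric for a computational workload. All numbers are invented.

ENERGY_BUDGET_KWH = 100.0               # hypothetical budget per run

# Hypothetical per-technique energy estimates (kWh per training run).
ESTIMATED_COST_KWH = {
    "linear_model":  0.5,
    "deep_learning": 250.0,             # the energy-intensive technique
}

def admit(technique, budget=ENERGY_BUDGET_KWH):
    """Admit a workload only if its estimated energy cost fits inside
    the budget. Efficiency here is measured in kWh, not speed."""
    return ESTIMATED_COST_KWH[technique] <= budget
```

Under these invented numbers a deep learning run would be refused at the default budget, which is exactly the kind of trade-off the metric is meant to surface.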
For computing over the last 60 years, the interface to all of those systems was incredibly thin: a keyboard, a piece of glass, maybe a stack of punch cards, recently maybe voice, maybe a little bit of gesture. But it's been, in some ways, very, very narrow. Imagine now we are talking about cyber-physical systems that bring you to work every day; you sit inside them. Cyber-physical systems that take us from the ground floor to the 30th floor. They may be cyber-physical systems in the air that we don't even see, but that are seeing us. They may be systems that are talking to each other about us. Not quite your ancestors, Wally, but certainly an interesting world to contemplate, when a building like this might be talking to the local power authority about how much energy is being expended in the building, in order to drive a different energy budget. So imagine that all these systems are going to have interfaces, where the interface isn't as simple as human-computer interaction, but might be something bigger and more complicated. And here it's also a matter of not dragging 20th and 21st century metaphors for interaction with us into the new world. I do not want to contemplate a world in which every encounter with a cyber-physical system commences with me attempting to remember my 12-digit password with an uppercase and a lowercase letter, a hexadecimal and a number. That would be bad. And yet we know that when we think about security, that is in fact the dominant notion we have: that we should have hard passwords. I don't want a world of hard passwords. I also know, from having spent time in the security space in the United States, that the notion of using your biometric ID for authentication isn't very helpful either. The United States government, as of two years ago, had its single most significant breach in its data layer, and what it lost was 20 million records of individuals. That was one thing.
The second thing they lost, much more importantly, was a million fingerprints, or more to the point, 10 million fingerprints of one million individuals. And whilst it is the case that we can change our passwords or update them every 90 days, it's a completely different thing to think about how we might update our fingerprints. So imagining how it is that we want to engage with these systems is complicated too. So what we know a year in is that there are at least five questions about cyber-physical systems, not three. Because I really am quite fond of Brian, I'm not gonna rename the institute Amy, which is I think what this gets us to, because that, whilst attractive, is not helpful. So we will still be the three A's plus two. So remember I said after you build a body of knowledge you should verify and test it. That's the second thing we've been doing over the last three to six months: thinking about how you test and verify those questions, right? What would it mean to start to say, well, that makes sense to me; does it make sense to anyone else? So part of what you do is go and spend a lot of time talking to people and saying, here are the questions, do they resonate? And once people get past the fact that I keep putting a picture of a Furby up there, because why not, they are willing to start to say, yeah, maybe. So part of what we decided, in addition to having conversations with, as Brian listed, our peers around the university, our peers in the broader community, people in both the tech field in Australia and the US and across other sectors, was that it would be really good to conduct qualitative research. And so to that end, starting about two and a half months ago, we've been slowly kicking off an initiative to conduct qualitative research in five field sites around the world. We wanted to find people that were thinking about cyber-physical systems. 
Maybe they were building them, maybe they were regulating them, maybe they were imagining what they could do with them. But we were interested in thinking beyond autonomous vehicles and drones in those senses: what were people doing, and how might spending time with people who were building, working through and using those systems help inform our questions better? And while I don't wanna get into the details of who the partners are, suffice it to say it is global. It includes scientific research organizations; it includes at least one arts and creative organization, because I think it's really important to get beyond imagining this is all just about productivity tools. It includes technical systems that are in the air, underwater and, happily for Brian and me, in space. Turns out building semi-autonomous and autonomous things for space is much harder than almost anything else, for obvious reasons. You can't pick it up, smash it around and reboot it. It's too far away. So we now have the interesting challenge of spending time in those organizations, of getting a sense of what people's imaginings of these objects are about, what they hope they're gonna do with them, what they're already doing with them, and what challenges they are already encountering. And some of the earliest conversations here are kind of the obvious ones, right? Everyone is grappling with the, what set of rules do you use? In both a pragmatic sense and a technical sense. Everyone is grappling with the, well, who the hell do we hire to do this work? Everyone is grappling with the, that thing didn't quite work the way I expected it to. And those are the right kinds of places, right? So I hope a year from now, in a different building somewhere on campus, we'll get to talk about what that set of research has produced. 
But for me, it was a way of starting to say that rather than trying to build a system and build an applied science in isolation, we should build it in conversation and in dialogue. That was hugely important to me. And then, you remember, after testing and verifying, there was training a whole new generation of practitioners. And you'll be happy to know, Dean Huntington, it is past five PM. So we are training a new generation of practitioners. In order to do that, we decided to do a couple of things. We got together all sorts of people from all over the university and went, come and hang around with us. Also, please come and make things with us. Also, let's see if any of those ideas scale. Good news is they did. And we had a whole series of extraordinary moments, led by a number of wonderful people in this room, where we actually spent time trying to work out whether there were ways that we could talk about those questions, those objects and their consequences in a way that started to get us closer to an academic encounter or an academic endeavor. When Brian and Eleanor and I kicked off the institute a year ago, there was a white paper. In the appendix to that white paper, I said we wouldn't get around to educational activity until 2022. That seemed like a really long time from now. And so 10 weeks ago, I went to the vice chancellor and said, I think I can do it quicker. And Brian, to his everlasting credit, went, yeah, okay. He smiled at me, because he reasonably went, you've met the ANU, Genevieve. That is not what's going to happen, but go your hardest. And go our hardest my team and I did. So over the last 10 weeks, my team and I have built out a series of courses that we hope we can use as a way to test the applied science. Why wait till it's perfect? The clearest lesson I have from the last 20 years is: iterate, learn, fail, pick yourself up, and do it again. And so we built out courses. 
We built out courses that we think have something distinctive, unique, and wonderful in them. That for me is about how we move from problem solving to question framing. How do we think about teaching in a slightly different way? And how might we create that first generation of practitioners? And so, in a manner that is deeply complicated, the arcana of which will sit within the rules and walls of this institution, what I can tell you is that, starting on Monday, the 3A Institute will be taking applications for its first cohort of students in 2019. And the mic died. No, it didn't, good. That would be mean. Can't kill it now. It's on a slide. Maya's gonna put it on the internet. Exactly, right. So as of Monday, we will be taking students. We'll be taking them in a complicated manner, but we will be taking 10 students as our first, hopefully, experiment in education. We're hoping people will come and spend a year with us and co-design the courses with us. We've built them, but we want people to, well, break them with us and make them better. So Monday morning, we will be live. We'll be accepting applications. Please send everyone you know. I know it's Canberra. That's the deal. They will have to come here. But I'm incredibly excited. And I know that my colleagues in the academy here are excited, and I know my colleagues in the administration are excited too. It should have been in 2022 that we said this, but I like that I get to say it in 2018 instead. So we're gonna start making a new generation of practitioners and scholars. Go. The best thing is, my team only found out about this three hours ago. And the vice chancellor only found out about it an hour and 10 minutes ago. And Eleanor made me promise I wouldn't get to this slide until 5 p.m., when the system refreshed. Because now it will be live in the tool, but it wasn't until 5 p.m. So I've stalled almost enough now and I can go on to the next slide. So, last but by no means least. 
For me and my team, I think what we need to get done is really clear. When I look out over the next year and beyond, I know it's really just about three things. And it's about the same three things it's always been about. How do we create a new body of knowledge that's about both knowledge and praxis? Because I think it isn't just about research; it's actually about the doing as well. It's hugely important to my team and me that this doesn't happen in isolation. I don't wanna be sitting in a room somewhere talking to ourselves. I desperately want this to be a broader conversation with everyone. And last but by no means least, I don't want the knowledge that we're making to be locked up inside the institution. I started thinking the other day, Brian, that universities were a bit like seed libraries. We get to hold seeds, but really what we should be doing is sending them out into the world so other people can cultivate them. And that's what my team and I want. We wanna build something. That's why we all came here. We wanna build something that matters and we wanna build it at scale. And that means, exactly like a year ago, I still need everyone's help, because I've scaled safely in a year, I've gone from one to 10, but we're gonna need a little bit more to get it really done. So I asked a year ago for the help of everyone in the room and everyone out there on the internet. I don't need help with my pronunciation anymore; that's kind of a foregone conclusion. I do need help with all of this work and this heavy lifting. If there are people you think we should be talking to, if there are ideas you think are interesting, if you just wanna come and spend time with us: find us, connect with us. We have a mailing list, we have a website, and by Monday we'll be taking applications. I'm just gonna keep saying that so it becomes true. On Monday we'll be taking students. 
So please. All I know is that a year ago this was the right place to do it and it was the right time, and a year later those things are both still true. I sometimes get asked, 18 months in, why I would possibly come back to Australia and leave Silicon Valley, the inference in some ways being that I would be foolish enough to do this on the edge of the universe. Here's what I know. The only way you could build this is not to be in Silicon Valley. I have a great deal of respect for my colleagues and my friends there, and I miss a great deal about my life there. I also know that the time scales there are very, very short; short term there is, oh, maybe this quarter, and I need an academic short term of about a year for the next piece of work to get done. But I hope I won't be doing it by myself. I know there are at least 10 people in the room who are doing it with me, and I hope there will be a whole lot more. So with that I'm gonna stop and say thank you.