Again, let me welcome everybody. Welcome to the Future Trends Forum. I'm delighted to see you here today. We have a great guest who is an astonishing thinker, and I'm really looking forward to diving into her topic. Today's guest has been writing about education technology for a long, long time. She is an educator, a consultant, a writer, and most recently she's been turning her attention to the question of the knowledge ecosystem and how it's changing. If you take a look at the bottom left of your screen, you'll see a tan-colored button that says "Helen Beetham's newsletter." You can find a whole bunch of issues there, and the most recent one is an extraordinary probe of what she considers to be some of the threats to the knowledge ecosystem, with a close focus on AI. It's a fascinating, fascinating series of articles that I really can't recommend enough. I'm absolutely delighted to welcome Helen Beetham to the Future Trends Forum. I'm especially glad because she's coming to us from the UK, where it's getting close to tomorrow, so I'm especially glad she can join us. Hello, Helen.

Hi Bryan, can you hear me okay?

Perfectly. I'm so glad you can join us.

Thank you, Bryan. It's great to be here. It's great to see some people I know, and loads of people I don't know as well.

Oh, excellent. That sounds perfect, that sounds ideal, in fact. Where are you today? Where are you coming to us from?

So I'm in the southwest of England. I live in a very rural spot. We've just got a bit of a dusting of snow, actually, which is quite new for us.

Indeed it is, indeed. You've actually outpaced us: we have nothing on the ground right now except attitude. So, Helen, as I mentioned to you, we have this custom on the forum where we like to ask people to introduce themselves by describing not their past but their future. We're curious: what are you going to be working on for the next year? What are the big projects and the big ideas?
Well, Bryan, when you asked me this, the first thing that came to mind is that at this time of year I do a lot of community organizing, especially around food resilience. So the thing that's at the top of my mind tends to be: what's going into the shared allotments? What jobs have we got for the young people? What's going in the veg boxes and the community cafe? So until I'm provoked by you, Bryan, my horizon of the future is something like the life cycle of a vegetable, really. That's always top of mind at this time of year.

Yeah, it should be for all of us, shouldn't it? What we can put in the ground is a great future.

And I guess I've got two kinds of book projects coming up. This month I'm delivering the text of an edited volume of chapters on democracy and human rights. That's actually in memory of my dad, and I have a chapter in it which is about democratic futures, so I couldn't not mention that. And it's not really the big picture around elections and deepfakes and so on; it's much more around that kind of community resilience. How can we do small-scale work that helps people build democratic confidence and participatory skills? Does that give people resilience against the big negative narratives around conspiracies and extremism? So that's an interesting side hustle for me, that work on democracy and human rights, although I have to mention Wayne Holmes's amazing report on AI through a democracy and human rights lens, which I will put in the chat window in a minute.

Excellent.

But I guess the big book project, which probably makes the most sense of me being here, though maybe we can find some threads through vegetables and democracy, is that I'm finishing my own book of research, called Teaching Critical Subjects, which is based on research with 50 or 60
higher education teachers, talking to them about what makes their students critical, what makes them critical with technology and critical of technology, and especially how being critical is changing in their disciplines around the new knowledge regimes that technology is bringing in. So that's the big project I'm trying to complete. And I have a podcast series on generative AI of my own coming up that I'm really excited about.

Wow, this sounds amazing. People are messaging with all kinds of fan notes for you. Ed Webb, our dear friend, is a political scientist at Dickinson College, and he was born in southwest England, so it's possible that you two have a lot in common. He says he's from the Dorset-Somerset border.

Yeah, well, you can come across to Devon and inspect the vegetables anytime.

I'm looking forward to expanding a couple of our beds and putting in a whole new series; I'd like to expand into some beans to grow here. Well, first of all, this just sounds like a tremendous amount of stuff, and your discussion of resilience is so crucial, for us all, but especially for higher education, and also, I think, for democracy.

Friends, I have a whole bunch of questions to ask. If you're new to the forum, the way this works is that I ask our good guest a couple of introductory questions, and then I try to get out of the way to let you all ask your questions. So as Helen and I talk, please think about your own thoughts. What questions would you like to put to her? Which topics would you like her to expound upon?

One of the things about your writing, and let me just share this again, because I've been sharing it all over the place and I want to make sure people get a chance to see it.
One of the links you make in this recent Substack post is so powerful for me: you link our apprehension of generative AI to a model of the current internet. Not the internet of 2001, but the internet now of enormous platforms, large companies, and surveillance-based business models. And you point out that this is one of the great threats AI presents to the knowledge ecosystem: it may end up becoming platformized, performance on the platforms may degrade, and it's largely black box. We don't know what Google is doing to make Bard work differently, or what OpenAI is doing with ChatGPT or DALL-E. And it's also part of what Shoshana Zuboff calls the surveillance capitalism model. I'm saying this in a hurry, so first of all, am I reading you correctly? And second, could you expand on that a little, so that people get a sense of what you're actually thinking rather than my sketch?

No, your sketch was great, Bryan; you're so sharp. I think the reason I went back to look at the history of the internet was twofold. One is because we have this narrative of an absolute rupture: here comes AI, or generative AI, and everything must change. We have to rethink the curriculum.
We have to rethink, you know, because the quality of the technology we're dealing with has radically changed. And I thought it was really interesting, rather than doing that, to think about how excited we were, and you made me sound even older than I am with my long history of writing about the internet, how excited we were with the prospects of global citizenship based around global access to valued knowledge. And to look at how gradually some of that amazing flowering of possibility, the diverse business models, the diverse ways of being online, became first of all less safe, through social media, but then particularly through this platformization. And I think one of the interesting things about the new generative models that are coming in is that they are platform first. So yes, there is plenty of interesting open work going on around the edges of those models, and I don't think we should discount the possibility of that. In Europe particularly there are some interesting public projects going on, and there's a huge ecosystem of tools around those models. But unlike the internet, which started out distributed and became a much more concentrated, capitalized model, nobody can get into building these models unless they're already hugely capitalized. They're incredibly expensive to build.
They're incredibly expensive to build By definition concentrating knowledge, concentrating data Text images, like you know, that's that's that's how they work So I guess my thinking was how can we bring some of the tools we have Theoretical and practical to build resilience against platformization But think through this new kind of new form of platformization You know, what I what I described it as is a kind of third mode of concentration But it's a constant of content rather than the concentration of data or of power of compute We've seen the concentration of content and asking, you know, what that might Do you have a wonderful bit about um Well, I mean sorry wonderfully phrased but describing something awful in terms of threats to writers, which I of course feel keenly And describing how there's now this kind of torrent of content being generated Which has all kinds of problems to it and you have this great line about Trying to figure out what no tropics amazon is subscribing to when they They limit writers to publish only three books a day Which is a great passage in in the chat We have some competing thoughts from our good friends. John Hollenbeck Things the naive hope for a new democratic world is important to go back to And then Mark Corbett Wilson complements this by referring to this as the enclosure of cyberspace So that's referring to of course the english history of enclosing the commons for private purposes and Well given given all of that um I guess I'm going to circle back to this question later on. Let's go a little bit further, but How would you recommend individual academics approach generative AI now? um, I mean with care would be a pretty good way of saying this but but but also How should we proceed should we avoid them? Because they are so riddable the problem we described Or should we try to organize ways to develop better alternatives? Well, Brian and I may say so. 
I think the "we" in your sentence needs a little bit of unpicking. One of the reasons I wanted to write about the ecosystem is because, and I don't know how it is in other countries, in the UK we've had a lot of guidance around this which has been very focused, first, on individual users. So some of what I'm doing is trying to expand that view, to think about data workers and data subjects and how these new ecosystems affect non-users. But when it comes to users in higher education, some university guidance has been exemplary, and I tend to blog about that and talk about how great it is. But I do worry about the focus on individuals. I worry about the focus on individual students. It's not that I don't think there are risks, both intellectual and emotional; I do think there are potential risks. But consider the moral weight on students: to be ethical, to have academic integrity, this magic thing, to understand the nuanced differences in terms of what you may or may not do, when actually the editors of major journals are struggling to understand that, if you look at the debate in Nature. So I resist the ethical weight that's being put onto individual students around use. But I also think, well, it's great to have agency and to give agency to academics, and obviously my work is very much about disciplinary and cultural differences, and about how we can't just assume we know what works for students, because there are a lot of differences, even just disciplinary ones. I think it's great to give academics that agency. But the question I started to ask was: do we have an environment in which we can behave ethically and safely? Do we have that environment in our universities?
Consider the models that we're being asked to engage with, given the limited options there are. They become integrated into search; they can be integrated into data platforms. As you know, we're offered all kinds of intermediary apps that make our lives easier, and we have no idea what models they're drawing on. And actually, for the people who develop those apps, the model can change underneath them, and then the functionality will change, and they have no capacity to respond immediately to that. So I guess I was pushing back against that narrative about what individuals should do, and asking the university sector, and again, not individual universities but the university sector collectively, and going back to, I'm sorry, I don't know who you quoted, but can we go back to some of that idealist vision? If we do things collectively in this space, we're going to be so much more powerful. So I just started asking that question: could we act collectively as a sector, joining with other sectors, heritage, culture, all kinds of common interests here? Anyone who has access to culturally valued knowledge has a common interest in thinking about what kind of an ecosystem is possible. There's been a huge brain drain from universities to these massive commercial players when it comes to AI, but we still have public projects, we still have commitment, we have the open education sector, which is incredibly committed to doing things publicly, in a shared way. So I guess I was being a bit of a devil's advocate and saying: what would it look like if we had more of an open public commons for developing our own models, models around academic content, around research values, but in a shared way? Because I have absolutely no doubt that the rich universities are doing this. I've got no doubt at all that the well-endowed research institutes are doing it. But that's just creating more barriers and more differences in expertise.
So what would be the shared values and the shared technologies we could organize around, if we really wanted to do this in a more open way?

Wow, okay. Well, that's a fantastic answer to my question, and you've taken me ahead to a few different places where I'd really like to go. By the way, if you haven't had a chance to read this yet, take a look about four screens down in Helen's article. There's an amazing graphic from Our World in Data describing where people work when they work on artificial intelligence systems. It shows academics playing a leading role until about 2018 and then just falling off a cliff; it's almost entirely industry right now. It's very, very powerful work.

Let me turn to the forum community. I have more questions I'd like to press on, but let me ask: what would you like to ask our guest, along these lines of everything from democracy to platformization, content, and writing, all around AI? And again, if you're new to the forum: on the bottom of the screen are two buttons. You can click the raised hand, and that brings you up on stage so you can be face-to-face with our guest. Or click the question mark so you can type in your question or your comment; that's often what people like to do first.

In the chat we have a conversation going in a few different directions. Mark Corbett Wilson is recommending John Dewey, a perpetual favorite,
I think, among us. Ed Webb, your former neighbor, Helen, advises us that collective action is needed but hard, not only given the precarity of so many in the sector, but also because of the urgency of other challenges, such as political attacks, underfunding, and so on. Which is a serious issue. Nick Baker says: "This is exactly what I've been arguing for here in Canada, and I feel we're seeing a lack of engagement at the sector level. We're abdicating responsibility as a sector and then throwing barbs at industry when they create things that don't fit our values."

Well, while people are thinking, and I can imagine smoke coming out of people's ears, Helen, let me come back to one of your observations and wonder if it's a place to start from. Some of the most wealthy, well-resourced institutions have tremendous computing capability. I'm thinking, for example, of my alma mater, the University of Michigan, which has a huge computing enterprise. I'm not sure exactly how this works; I believe they either have an instance of ChatGPT running locally, or they've tokenized access to it, but basically they allow anybody within the University of Michigan community to access ChatGPT locally, so it's constrained that way. Do you think it's possible that we could take great, well-resourced universities, say Cambridge, say Sciences Po in Paris, say Stanford in the US, and have them get their ducks in a row and create something, either a position statement or a recommendation? Would that help?

Thanks, Bryan.
Yeah, I totally agree with some of the comments in the chat that this is also utopian. But it's utopian in the sense that I think you have to create those statements, those values. And one of the good things about what's happening is that it's pushing universities as a sector, isn't it, to actually articulate what we think learning is and what we think knowledge is, maybe sometimes in relation to open projects. I don't want to spend the whole conversation on what that kind of open ecosystem or architecture would look like, because that's absolutely not my area of expertise. But I do think open projects don't happen without a statement of values. If you look at something like Wikipedia: although it is an entirely inclusive project, it functions as a community because of very clear values, and it has amazingly clear values around generative AI. I mean, they're not always being followed, but the values, I think, are absolutely exemplary. Sometimes the values are what you need to orient people around. So what I'm calling for is the articulation of those values, rather than identifying specific institutions to lead it, or indeed specific technologies to lead it.

Well, that's great.

But I also want to pick up on something. For me, I'm much more comfortable talking about this as a political project than a technical one. And those two comments about political attacks and underfunding, I would like to link those two things to the actual technologies of generative models. I think generative models are part and parcel of insisting that there is one way to genuinely know the truth, and that is to build a huge pile of data and go looking for patterns in it.
And that is one way to do some very interesting research projects. But if you go to the sciences that have depended most on modeling, things like climate science, but also the science of protein folding, or of identifying new antibiotics, those models have been built consensually over decades, where scientists have realized the value of sharing the data from their empirical research. And when they get data from a model, they go back to the real world and they test it out. There are conflicts about the relationship between the models and the real world, but we're constantly held up with these examples as if they prove that AI can solve the world's problems, when actually these are very, very specific projects in which the human knowledge, the real-world experimentation, and the model have a clear relationship that's been established over many years. And we're being told now that piling up data and looking for patterns in it is the only way to know anything. This is part and parcel of the attack on the humanities and social sciences, I think, and on the ways we know the world which are largely deliberative and qualitative, and which involve people making meanings for themselves about their lives in the world. So I think political attack is deeply tied up with it, and I think underfunding is deeply tied up with it. The other part of this agenda, which is much more on the surface, is making knowledge work precarious: making it easy for less skilled people to do, restructuring it, upping the hyper-productivity, upping the speed. We all love a new tool when it makes us work better; that's lovely. But our boss loves it because it makes us work faster, and there can be fewer people to do the same job. And that should be really obvious to academics in the situations we find ourselves in.

I think that last part is definitely widely felt.
Oh gosh, I have so many questions. But let me step aside and welcome some questions here. We have one from Jared S., who's at StudioCity, and he asks: "Do you have strategies or tactics to manage de-platformization as tertiary institutions themselves lean into platforms to scale active education? Where is the balance here?"

Yeah, that's a great question. It's kind of beyond my pay grade, because I'm very peripheral in the situations I work in. But I have noticed in the chat window all kinds of expertise that we might want to draw on. So if somebody else in the chat window wants to pick up the question of de-platformization, please do. I'm much more familiar with individuals working in open source contexts, having to struggle to make that safe and ethical and possible within their environments. I'd really welcome someone else who has that expertise to answer your question or engage with you.

No, thank you for saying that. And Jared, that's a great question. In fact, I'll quickly post the question in the chat for those who didn't get to see it, so everyone can have it. We have another question coming in from Catherine Cronin, who asks: "Collective action is so important and so challenging to achieve, as Helen knows so well. Can you share any past examples where collectivity for equity or change has worked well, that we might learn from?"

Good question, Catherine.

That's an amazing question. Catherine, as you know, was a colleague of mine in the editorial group for the feminist special issue of Learning, Media and Technology. Thank you, Catherine, for your, as ever, challenging question.
I would draw parallels with other kinds of agendas. There have been really powerful movements within our universities in recent years, and I would certainly identify decolonizing as the one in the UK that has been most challenging, that has created new kinds of solidarity and also new kinds of challenge to the ways we normally do things. It's been a very grassroots movement; it's come as much from students as it has from staff. And it's far from having achieved its goals, but what it's done is force people to see the connections between knowledge, the knowledge that's taught in the curriculum, and the infrastructures of the university: the historic infrastructures, the present infrastructures, the platforms, the structures of careers, the structures of power. And if we want to challenge the new, maybe the newly colonizing, influences of the big platforms, we need movements like that. So I'm thinking really big here, but I do think we need to link up some of the concerns and doubts, as well as the excitement, that students have about the value of their work when they use generative AI, and some of the concerns and doubts students have about the future of their jobs. If they've been told AI is going to take all their jobs, then what am I doing here? And I think that can happen really locally. I keynoted the ALT (Association for Learning Technology) conference on generative AI recently and was introduced to so many brave projects going on in universities in the UK, where students are being asked to lead, where they're given free spaces where they're not being judged, there's no moralizing, and they can talk about their concerns. What I certainly do with students is try to talk about writing practice in a really non-judgmental way. Put it on the table: how are we all writing?
So on the grand scale it's about linking up all those different movements to create a real sense of shared mission, and locally it's about having those open spaces for conversation where everyone can feel heard. There can be some powerful work from that, which doesn't always have to be negative; it can be about sharing opportunity as well.

Hmm, that's a beautiful answer. Thank you, Helen, and thank you, Catherine, for that question, which is fantastic. If I could: we had a guest a month ago, the wonderful James Shulman, who recommended what he called synthetic organizations. Those were third-party organizations; the examples he gave were JSTOR and Artstor, and I would also wonder about that uniquely British entity, Jisc. Would these be good anchors for that kind of collective work?

Yeah. And actually, in the UK we have this great history of publicly funded projects in ed tech, and it's hard for us on the inside to realize how unique that is, and hard for people on the outside to realize what a possibly naive and idealistic view of the world that gives some of us. But I thought it was really interesting how Jisc was able, for example, to negotiate an opt-out for universities in the UK from the new Turnitin AI detection, and I'm not singling out one company; many providers also came forward and said, yes, we can detect AI. A lot of universities felt uncomfortable that that hadn't been properly tested, and in the UK, with shared representation, we were able to negotiate an opt-out which other parts of the world were then able to buy into. So just on that level, very recently, I think there are examples. And given that, if we want to be in this game, even if we want to develop our own local open models, we have to lean on these core models.
We have some relationship with them; that's unavoidable. But having a shared voice enables us to have a more powerful relationship with them than if, as individual ivory towers, we all try to build our own thing and become incredibly vulnerable to takeover, or to forms of partnership in which we're really disempowered, in which we're just being sold stuff, which is what will happen, especially in the less resourced parts of the university world. Does that answer your question?

That does a great job of answering my question, thank you. And in the chat we've had a few responses. Nathan Kelber has recommended open data as well as open source tools, really calling out Hugging Face. And Doug Belshaw has a great comment, and he has another question too. So this guy's on fire, which is remarkable since I think he's standing in the bleachers in the dark. Let me bring up his question here; this is a really good one: "I'm interested in questions around equity for students. I'm studying for an MSc at the moment, and in yesterday's tutorial some people were talking about training GPTs while others had barely heard of OpenAI. So we have an inequality across student knowledge and awareness."

Absolutely, and actually that inequality goes really deep. I know that search is having its own struggles at the minute, but if you try searching for anything around writing and AI, which I do a lot because I'm interested in both, your search screen is full of thousands and thousands of offers and promises to students.
I talk about this; we could talk a bit about how these services are framing to students the job of being a student, which I think is really troubling. But let's stick with the equity issue. If you pay, you can get a better quality version of ChatGPT, or you can get an add-on which will, quote, "humanize" your ChatGPT text. A really interesting study in the UK was done quite recently by the Institute of Student Employers, a fairly mainstream body. They looked at how their graduate applicants were performing on various standard tasks, and they found that those who were paying for GPT-4 were significantly outperforming the students who had access only to the free versions. And they decided that what that meant, from an equity point of view, was that recruitment processes had to be in person; they had to be task-based and team-based; they had to be able to see students performing in contexts where there was no advantage to having these extra paid-for models. Now that's really interesting, because I don't know about other countries, but in the UK there are strong narratives that we're letting students down if we don't prepare them with AI, or AI literacy at least. And yet we'll have employers saying, and I think this will be very obvious within a few months: we're not interested in whether you can use that paid-for service that is promising to help you pass your assignment. We will train you on our bespoke closed model if we need to. We're interested in whether you can think for yourself, and we're interested in a level playing field in terms of access to some of these resources. That's a really powerful message from employers to universities about how to support students. Even if we buy into this narrative that there will be no job that's not an AI job in six months' time, that doesn't mean that students prepare for those jobs by using AI
in every assignment.

I know this is awful to say, but if you get a chance, if you want to do a post just about those ads and those offerings, I think a lot of people would find that valuable. I've been looking at those ads myself and getting more and more disturbed, but I haven't put them all together, and that might be very, very useful to see. In the chat, Nathan Kelber asks, or rather imagines: "I think we will see poorer students forced into lower-tier versions of big tech models, possibly ad-driven." Does that sound likely?

Yeah, I think that's really what we're seeing happen, isn't it? And, sorry, my eye was caught by Doug's comment. Doug, you're being very provocative today, which is exactly what I'd expect from you. I'm going to take a moment to think about Doug's provocation about calculators. But could I say a little bit about that troubling messaging, Bryan? Because I don't know if you've been troubled by the same things that I have. I think it is powerful that use of these technologies is being driven by students. Of course students are using it; it's kind of inevitable and obvious, and we shouldn't be too moralizing about it. But what these ads are presenting to students is not, for the most part, a better way of learning. It's not "hey, there's information online that can boost your chances of success." What they're selling to students is the idea that this technology will help you pass this kind of Turing test that your teachers are setting for you. Your teachers have some technology to detect whether you're using our tools, but our tools are better, and the whole trick of being a student is to pass these Turing tests. You know, you're a good student,
you're smart, if you use the right technology — so you get past these slightly dim and outwitted and outdated teachers who are setting these stupid tasks, and it's an imposition on your time to actually try and write anything for yourself. Now, what I'd say is, I think there's an interesting narrative around grading and credentialing hidden in there, which is unfortunately a little bit true. But definitely these narratives are pushing students along an axis of cynicism and, you know, even despair about the purpose of doing an assignment in any other way. So, give me some thinking time on Doug's calculator question.

Oh, that's really, really powerfully said. John Warner has been writing a lot about this — he's written about teaching and writing, and teaches writing — and about the gap between not just the technology, but also the habitus, the situation of writing. And I think your comments about the cynicism that produces are very, very powerful. And Doug probably has more questions, so I don't want to hold him back — but I also don't want to hold back any of the rest of you. You can see Helen is ready to pounce on your questions and to help think things through. So again, if you'd like, please just look in the strip at the bottom of the screen, click that question mark button, and type in your question. Or if you want to join us on stage — you do not have to have an elaborate bookshelf behind you in order to be allowed on stage, I promise.

Our discussion in the chat has been bouncing around a few different directions. Graham Attwell asks an interesting one: are the LLM companies seeing education as their biggest potential market?

Hi Graham. LLM companies — you mean the modeling companies? That's a really interesting question.
When I thought about it — you know, when you look at the adoption curve for some of this in businesses, it seems to have dropped away. Not dropped away completely, but the speed of adoption seems to have dropped away, and a lot of businesses are holding back and waiting to see. Whereas it feels as though the narrative in higher education has become almost unstoppable, in terms of how we need to reframe the curriculum and assignments and assessments, and to think about what knowledge is and how knowledge gets produced, and that that's going to be radically different. So I do wonder whether higher education particularly, and education generally, is a particular target, a particular interest, for the adoption of these models.

Can I just address the equity point that came up? I think it's important. Of course there are other equity issues — with textbooks and, my goodness, with the laptops people have. I think that comes back to this question about what role universities have to level the playing field. Which comes back again to the question about what kinds of models might be safe, might be ethically constructed, might accord with academic values, might be proofed against deepfakes and other kinds of awful imagery that students might be exposed to. What kinds of responsibilities do universities have to do that work, when we know that these public models were released before they were really safe, or reliable, or robust, only for the purpose of gaining market advantage? So yes, of course, equity is never solved by taking technology away — how could it be? But I think when it's such an important technology, we do need to be thinking quite hard about what our path is to creating a more equitable environment for students to engage with that technology.

Thank you — thank you for grabbing that and for responding. Kim, in the chat: do you mean a return to orality, or oral literacy?
We have a really good question from Sukaina — I'm going to try to pronounce this correctly, so please let me know how badly I do — Sukaina Walji, University of Cape Town. And Sukaina asks: many teachers now have to grapple with how to respond to generative AI, including the pressure to make their students GenAI-ready for the future workplace. How can they resist or shape these narratives?

Hi Sukaina. It's a brilliant question. So, okay, one of the things I think about that is that the information I've just shared, about how businesses and employers are responding, is kind of critical. Employers will always want students that can think for themselves, that can express themselves — whether that's in writing, in creating a video, in developing a collaboratively made object. They're always going to want students who can do that, and universities are a space where you can explore the different ways that you might do that, in ways that are not so constrained by what an employer demands from you at the next deadline. So let's make them spaces for exploring all of that — you know, exploring live creation, which doesn't have to be an exam environment. You can actually do live writing, live production, where the value is being in the space with the people, but it doesn't have to be assessed presentation, performance, and so on. But I also think there's a limit to what an individual academic can do, and I really want to encourage that cross-disciplinary conversation. When I started talking with academics in different disciplines about criticality, I realized there were so many resources that none of us were seeing, and I had this immense privilege.
So, you know, the resources for thinking critically in engineering are profound; so are the resources for thinking critically in media studies, and in professional subjects like law. But we rarely get the cross-fertilization, and I think when we have some profound change, such as this new technology is bringing, it's a great moment to look for cross-disciplinary conversations where everyone brings their best ideas and we all recognize we don't have the whole answer.

Maybe this is something we have to really hurry up on, as all the venture capital flows in and starts to discipline AI companies, as the regulators start to firm up regulations, and as habits are being formed. We have a great question from our dear friend in Texas, Tom Haymes, who always asks a deep question — and this is one that comes back to our earlier topic of open. He asks: isn't an open knowledge system the most equitable system? With the parenthesis: of course, current technical models are all too often based in hoarding knowledge. Tom, did you want to speak to that before I have a go? Tom, I can beam you on stage if you're in a good spot for that too. I'll give him a second to reply here. He's in an unusual spot today — he's not in his typical one. Oh, he can? Great, I'll beam him up right now, because he's just an amazing guy. Let's see. Oh, Tom, where are you today?

I'm in a faculty lounge, but fortunately I'm alone at the moment, so not a lot of background noise.
It's not that I care whether anybody hears what I say, it's just — so, yeah. One of the things, when we talk about inequitable practices when it comes to knowledge: usually it's about closing things off. Saying, okay, you can only participate in this if you have a degree and go through our little checkmark thing to get your certification. Or you can only do this if you can afford to buy the book, or the subscription to the journal, get past the paywalls, et cetera, et cetera. Otherwise these things are closed off, and that's not a very equitable system. AI is in danger of moving in that direction. You have open systems and closed systems. Yes, if everybody has to pay 20 bucks a month for ChatGPT, that's not very equitable, right? Or it may even get more expensive, not less — because does that 20 bucks actually reflect what their costs are? Although I heard they made big revenue last year.

Open systems, on the other hand, are collective enterprises. I look at AI as a way of connecting information. To me, AI is a connection builder. That's the difference between Google and what we're looking at now: with Google, you had to know where you were going in order to find the knowledge. Google is like a library without a librarian, whereas AI at least has the promise of providing that librarian side of things, right? Of making those connections — if it's applied correctly, if you don't hide the connections. Now, one of the problems I have with the way ChatGPT works is that it hides the connections.
You don't know where it got the information, you don't know what it's doing. So if we were to surface that in an open system, and have knowledge diagrams, visualizations, all these sorts of things, that makes it a lot more of an equitable knowledge experience.

And the other thing I've been thinking about lately is this whole plagiarism stuff — let's use AI to discover plagiarism. Well, what if I were to take one of my articles or one of my books and say: here, AI, give me all of the connections. Especially the ones that I forgot, that I learned 20 years ago and subliminally included in the book — things that weren't a direct citation, things I read in grad school or some time in the interim. And make me a map, a heat map: where do these things come from? Or I take Brian's book and say, where did all of Brian's ideas come from? Give me a map — not just the bibliography, but a map. Where did he think of this? Where does this appear somewhere else? Where can I explore these pathways? This not only makes it more equitable from the perspective of textual vision versus visual vision — which is actually more human; text is biased towards certain types of thought patterns, certain kinds of outcomes. So to me, the real opportunity with AI is, again, these open types of systems that show us the world rather than hide parts of the world — which is what books are all about: conscious choices. What's in, what's out. Written books, edited books — what do we put in? What do we leave out? Those are judgment calls by the person, the curator of the information, and that also introduces bias and equity questions. So can I edit a book in the same way that someone from Rhodesia could edit it? Zimbabwe, I mean — yeah, there we go. Or Rhodesia in 1950, right? You know, how do we get these perspectives? How do we see things in a different way?
I think that's the real opportunity. So, okay: how do you see open in this context? That's my question.

So I'll come to open at the end. But Tom, I think there are a few things in what you said that I want to push back on a bit. The first one is that, okay, if you have a back catalogue of books — and I know people who are loving the way they can put their own work in; it's kind of zero-shot translation, you put your own work in, it's amazing for that. And this is how these kinds of tools have been used in the digital humanities for decades: to examine corpora of text from well-known writers, or from social data, whatever it is. The thing is, if you already have a well-established writing practice or image-making practice, whatever it is — if you have powerful conceptual frameworks that you've developed over the course of your professional life — then these tools are of course going to be of use to you in a completely different way than to somebody whom we are trying to help establish their conceptual frameworks, their writing practice, their image-making practices. So I think, to begin with, we've got to be very careful about offering to students things that improve our productivity as academics or professionals. Because the reason we ask students to do things is not to produce the thing, but to develop conceptual frameworks, practices, habits of thinking — which these tools may guide them towards, but may not; the evidence is very shaky on that at the moment. And then I think the other thing is — well, okay, so you've made some editing choices.
There's a kind of — if I can say it — a slightly spurious democracy to that, and I'll give you an analogy. I've forgotten the guy who wrote about it, and I'll try and find his name, but there's a guy in data science pushing the idea that when scientists do an experiment, a data project, they should just make the data available; they shouldn't provide their own interpretation of it at all, because it's more democratic if you just pile up the data over there and let others interpret it. But actually, what that then obscures is exactly what you described as their conscious decision-making. The data doesn't just arrive: somebody framed a research question, they went and collected the data, they interpreted the data, they had a reason for choosing one bit of data rather than another. So data doesn't just arrive. And when you have somebody's name on the front of the book or the research paper, that person is standing behind those decisions. They can be seen; they're visible. They're not pretending that data is completely available to be reinterpreted however anybody wants it to be.

I agree with all that. Yeah, and I don't think that really goes against what I was trying to say, in the sense that, again, the idea behind open AI systems, in my eyes, if it's done right, is literally that: you can see that this chain, this map, was created through the collective work or the individual work of this person. It's just a different way of seeing things, and you would still need teachers to help students navigate these maps. Don't get me wrong — I'm not saying, oh, let's just throw it all out and hand everybody a stack of maps. No, you still need that. But the idea is that you can more easily see. Imagine a concept map — my favorite example that I've been using a lot lately — imagine a concept map of US history. One of the big fights we get into about US history here is which story are we telling?
Are we telling the white man's story? Are we telling women's stories? Are we telling the former slaves' stories? Are we telling the indigenous people's stories? Which stories are we telling when we relate American history? I think we need to tell them all, and we need to show people how they relate to one another, because they all have a certain weight and validity. And the politicians jump up and down because they say, you're replacing my story with this other story. We need to fight that, because it's not about replacing — it's about seeing the world in a much broader sense than we were taught 30 or 40 years ago. And the same thing goes in Britain with colonial history and so on — you could do the same sort of idea, right? That's what I see in AI. And really, you could do this now, but it would be incredibly labor-intensive to pull together. To me, AI would accelerate that and allow us to throw together a bunch of different viewpoints. You know, as a photographer, I look at things in different ways, and this is a new tool to help me look at information and knowledge in different ways, rather than simply reading the canon — which, frankly, most of our students don't have time for.

I'm — right now, Helen, I'm doing one of Tom's incredibly cruel exercises for photography. He has me photographing one object 36 times, and I'm just going nuts trying to —

Of course. But, you know, if it comes to, for example, British colonial history: I know for sure that in the data record we'd be drawing upon — if we go to OpenAI, if we go to GPT-4 — to look for it,
the records from the English, white perspective massively outweigh the records of the colonized experience. But even if that weren't the case, those different experiences are not random statistical features. I would rather hear verbatim the experience, the embodied experience, of groups of people who can identify why their experience was the way it was, and present that as a viewpoint — as a coherent viewpoint, not as a statistical artifact, but as a coherent viewpoint of their lived experience. And that's one of the things I think these apparent viewpoints on the world actually don't provide, because there is no shared community or body standing behind the viewpoint. And that's what we need to connect with, surely.

I want a map that shows me what parts are missing too, though — these narratives where we need more narratives. And that also opens up research opportunities, because it's much more easily surfaced that there's a gaping hole here and somebody needs to go and see what they can find. We spend a lot of time trying to surface stuff that we may otherwise overlook, because we don't have time. I mean, the problem we have right now is there's too much information and not enough time to connect it properly, and that's where I'm looking to AI to help. Because we don't have time to read — who was it? Leibniz was supposedly the last man who knew everything, right?

I would say that the idea that we have to read everything —

This isn't to replace reading.

Yeah, we don't need to read everything, though. We can be in our own little patch, and we can thoroughly own that patch and understand why we're interested in it. But I want to see the connections between my patch and everybody else's patches too.

Tom, I have to — because we're almost out of time. Thank you, as usual. Thank you.
Thank you.

We have a quick question for everyone in the chat — this chat is fantastic. Would anybody object if I shared the chat transcript, likely edited, to my blog post? Just let me know in the chat.

Helen, we have a question from Doug Belshaw, to our mutual lack of surprise, and it's a fantastic question — I thought it would be a really, really good one to end on. Let me just bring this one up: "I'm worried about asking too many questions, but Brian has held me on stage as a treat. I'm thinking about the film Her and the role of emotion in our relations with AI. Is that a threat to higher education?" I just want to toss in, too, that chatbots as characters is something we don't pay a lot of attention to — everything from Replika to Character.AI. Here, I'll put this on the screen again so you can see it. What do you think about the role of emotions in our relations with AI, and is that a threat to higher education?

Well, I'm struggling to connect the role-of-emotions part with the threat part, Doug, but I'm sure you've got some thoughts about that. I mean, my next post — and this is a great opportunity; I'm terrible at this, but to trail it — my next post is called "gods, slaves and playthings," and it's about how generative AI is designed to make us feel things, you know, to make us feel we have to respond in certain ways, and also how we are designed to feel things. So there's a relationship going on here that is both very powerful and, in some ways, quite fetishistic. And I think Katie Conrad had a great blog post a few months back when she said, you know, don't be fooled again — and we can't not be fooled. We would be inhuman if we didn't want to respond in some way to something that appears to be talking to us and appears to reference human emotion. So I think we can't take the emotion out of our relationship with AI, but
that's in us. You know, just like we can't take the meaningfulness out — but that's in us too. It's not in the thing that's been made; it's in our reactions. And our reactions are amazing and incredible, and perhaps can teach us something about ourselves, but I don't think they teach us very much about what's going on in the probabilistic, stochastic modeling of the language or the text or the images. Is that a threat? I mean, I think we can already see there are well-being issues with that, for sure. There may also be really great things about it, but that will depend on individual students. One of the interesting things that came out of my research, which I will finish on, is that I started out thinking I was going to be talking to intellectuals about criticality as an intellectual exercise. And it was 2022, so I was talking to people in the aftermath of the online pivot in the COVID pandemic. But the thing I ended up talking about with almost every one of my teachers was how you create a relational space in which it's possible to be critical — in which it feels safe, safe enough to move from where you feel comfortable, to take another point of view, to widen your frame of operations. And I also think, you know, we bring emotions into learning all the time. We bring emotions into our relations with technology all the time. That's because we are amazing; it's not because the technology is amazing.

That's a beautiful, beautiful moment, and I have to wrap this up with great regret, Helen, because you are magnificent. It's been wonderful talking with you. I'm so, so glad that we've had a chance to host you. I shared the link to your Substack, but what's the best way to keep up with you now? Should we follow your Substack, and then when you move we'll be able to follow that information there?

Yeah, it's been a blast, Brian.
I'm so grateful to you and everyone. Yeah, I'm afraid I'm moving from Substack, like so many other people, but if you follow me there, I'm going to try and make that as seamless as possible for everyone who wants updates.

Thank you. And that's, again, the bottom left corner of the screen — you should see a link to that right now. Well, thank you so much. It's been a real pleasure; our minds are buzzing. Please have a good rest of your evening, and we will circle back, because we're going to need to have you back when your new book is ready. Teaching Critical Subjects indeed — looking forward to it.

Bye-bye. But don't go away yet, friends. I wanted to just thank you all — this has been a tremendous conversation. I think we've gone in so many great directions, and it's, as always, an honor to do this with you all. If you want to keep talking about this on the socials, please use the hashtag, and you can find me on Twitter, Mastodon, Threads, or Bluesky, or on my blog. If you want to go back into our previous sessions about AI, as well as about writing or about teaching in general, just go to tinyurl.com/fdfarchive. If you want to join our upcoming sessions, which will be on AI, just go to the Future Transform website: forum.futureofeducation.us. And once again, thank you, everybody, for thinking together with us. What a great conversation. I hope everybody's well in this new year, 2024. We'll see you next time online. Be safe, everyone. Bye-bye.