Our next speaker is Keiichi Nakata from the University of Reading.

Okay, well, thank you very much. Before I start, a couple of things, just to manage expectations. Unlike all these fascinating talks people are giving about projects and so forth, my talk doesn't involve any project as such. Instead, I'm going to present a perspective called organizational semiotics, which I use in my day-to-day life to try to make sense of things. As for my background, although I'm in this network, my background is in engineering, nuclear engineering, and later I did a PhD in Artificial Intelligence back in the 1990s, so quite some time ago. I worked as a computer scientist for a while before I joined the business school. I don't know what I'm doing in a business school, but it shows that making sense of technology is a key part of our everyday life. For the natural environment community, we had some talks around the development of communities and the development of people, and I think that element needs to be considered in a broader context. So I hope this will not be uninteresting to this audience. I'll talk about aligning people, problems, and technology from the organizational semiotics perspective.

Now, if you already know what organizational semiotics is, you can leave the room, because you'll probably be bored by the talk. But if you are interested, what I'm trying to focus on is, again, sense-making. What I will cover includes problems caused by misalignment: essentially, a failure to align design intentions with the actual technology implementation, which can potentially cause problems. I'll also look at some concepts around disruptive technologies, so that we can make sense of why disruption happens, how it is introduced, and how it is resolved. And I'll do all of this from an organizational semiotics point of view, which is based on semiotics, the study of signs, applied in a very broad sense to the understanding of organizations and the role of technology within them. So I'll talk about using some techniques from organizational semiotics to address some of these issues.

In this community we haven't been talking about it much, but in all the symposia and workshops I've been attending in the last couple of months, everyone's talking about ChatGPT. It has been quite disruptive, in the sense that people are now starting to make use of it and finding it very useful, but also starting to get worried about it. Just to show what ChatGPT is capable of, I queried it about why we should talk about environmental sustainability, and this is what it came up with. You can criticize it, and it's not perfect, but generally speaking it captures a lot of what we've been talking about and what people have been discussing in the environmental sustainability discourse. That includes not just the technology aspect, but also the regulatory and ethical issues involved. So this is the kind of technology we are starting to deal with, and especially in business and societal discourse, some people are starting to worry that someday these AI technologies might take over some of our activities. Without being alarmist, I don't necessarily align with all of this thinking, but there are some real issues being raised.
For example, there's a book called The Alignment Problem, which outlines problems caused by misalignment between what an AI system is trying to achieve and what humans are trying to achieve. AI systems are developed to support humans, but sometimes they behave as if they have their own agenda to satisfy, and that leads to misalignment. There's also the Turing Trap argument by Erik Brynjolfsson at Stanford, who says that as machines become better than humans at tasks, humans become more like subservients to machines: the intelligent work is done by machines, and humans are brought down to a place where we are basically serving them. At the same time, the large corporations that control these technologies will be in a position to fix the value proposition, which could put us in a trap we can never get out of. So there are concerns like that, and Brynjolfsson argues that we should be talking about augmentation using AI rather than automation, which might worsen the problem. And of course, in the popular media, some people claim that AI could lead to human extinction, and some of the so-called 'Godfathers of AI' have been coming out to warn people about risks that may or may not materialize.

Because I've been asked to comment on this a few times, I asked: what do people actually say about AI risk? I went to the Center for AI Safety, which outlines eight examples of societal risks currently attributed to AI. As you can see, these include things like weaponisation, misinformation, and proxy gaming. Proxy gaming simply means that if we don't implement the right optimisation objective, the right loss function, in a machine learning system, we might end up optimising in a way that humans would not. That creates misalignment. Then there is enfeeblement of humans; value lock-in, where values are locked in by the large corporations that control the technology; emergent goals that we never foresee when we implement AI systems; and deception and power-seeking behaviour, which might end up damaging society.

If I look at this from a risk management point of view, some of these risks are designed in, something we are intentionally putting into the technology, but others are emergent. Quite a lot of AI risks could be emergent, apart from weaponisation, which has to be intentional. Given my background, I also asked: how does this compare to something like nuclear technology, which people sometimes compare AI with? It's difficult to compare them directly, because one is software while the other is very much a hard technology, but both carry design risks as well as emergent risks. From the risk management point of view, design risks are intentional; they are often created by malicious actors who want to cause harm; they use known mechanisms, so we know why and how they arise; and as such they are often manageable and controllable if we have the right regulations in place. And, like it or not, they are aligned with design goals: even when malicious, the outcome is what the design intended. The sketch below contrasts these design risks with the emergent risks I'll turn to next.
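As a minimal sketch in Python, here is one way to encode this two-way risk taxonomy. All names are hypothetical, and the classification of the eight Center for AI Safety examples into design versus emergent risks is my reading of the talk, not an official categorisation.

```python
# A toy taxonomy of AI risks as discussed in the talk: "design" risks are
# intentional, use known mechanisms, and are manageable with regulation;
# "emergent" risks are unintentional, use unknown mechanisms, and are hard
# to manage or control. Example risks are from the Center for AI Safety.

from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    DESIGN = "design"      # intentional, known mechanism, aligned with design goals
    EMERGENT = "emergent"  # unintentional, unknown mechanism, misaligned


@dataclass
class Risk:
    name: str
    risk_class: RiskClass

    @property
    def manageable_by_regulation(self) -> bool:
        # Per the talk: design risks are often controllable with the right
        # regulations in place; emergent risks are not easy to manage.
        return self.risk_class is RiskClass.DESIGN


RISKS = [
    Risk("weaponisation", RiskClass.DESIGN),    # has to be done intentionally
    Risk("misinformation", RiskClass.EMERGENT),
    Risk("proxy gaming", RiskClass.EMERGENT),   # wrong loss function optimised
    Risk("enfeeblement", RiskClass.EMERGENT),
    Risk("value lock-in", RiskClass.EMERGENT),
    Risk("emergent goals", RiskClass.EMERGENT),
    Risk("deception", RiskClass.EMERGENT),
    Risk("power-seeking behaviour", RiskClass.EMERGENT),
]

for risk in RISKS:
    print(f"{risk.name:25s} {risk.risk_class.value:9s}"
          f"manageable by regulation: {risk.manageable_by_regulation}")
```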
Emergent risks, on the other hand, are trickier, because they're often unintentional and unplanned. They're not malicious; the designers have good intentions, but the wrong effects result. The mechanisms are often unknown and emergent, and therefore sometimes not easy to manage or control. That's why misalignment with design goals often happens. So what I'm trying to do is make sense of all this, and ask whether we can actually manage it.

One way to look at it is to treat AI as one of the so-called disruptive technologies. A definition of disruptive technology is, for example, an innovation that can fundamentally change established business models and industry rules: something that changes the way we think about things. An example I took is from the creative industries, because AI is having a big impact there. There are some possible futures around the use of AI in creative industries, for example AI-assisted innovation or human-machine collaboration in creative activity, and so on. And human-made goods may command a premium: this is already happening in some sectors, where handmade furniture, for instance, is often more expensive than factory-made furniture. These things happen in many industries. So what's really needed is to make sense of all this, of why this is happening, and to try to realign business, society, and technology at large.

For this purpose, I'm using organizational semiotics. Semiotics is the study of signs. It's all about representation and interpretation, and about assigning meanings, while appreciating that there are multiple viewpoints, because the same sign can have different meanings depending on what norms you apply and how you align those signs and interpretations. Organizational semiotics was pioneered by Ronald Stamper in the 1970s and has been applied to analyse organizations and information systems in general. Many issues around the use of data and information in actual business settings are caused, again, by misalignment between what a system is intended to be and what is actually deployed. There are a couple of techniques I have time to introduce: one is called containment analysis, and the other is the semiotic ladder. I'll quickly describe how this framework can be used.

This is the containment model, which says that the technical system should be contained within the formal system, the more bureaucratic system, which in turn is contained within the informal system, which is basically business or society. The informal system captures values and norms, the formal system captures processes and rules, and the technical system sits at the centre. If we have this containment relationship, technology adoption happens in a coordinated and aligned way. What happens with a disruptive technology is that it breaks through the boundaries of both the formal and informal systems. That's why it seems disruptive to our societal norms and processes. Using this kind of analysis to make sense of disruption, there are three ways to respond; first, a minimal sketch of the containment idea is shown below.
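As a minimal sketch, assuming a toy set-based representation in which each layer is the set of concerns it covers (all the concern names below are hypothetical), the containment relationship and its breakage by a disruptive technology might look like this:

```python
# Toy model of the organizational "onion": technical system inside formal
# system inside informal system. Containment holds when every concern of an
# inner layer is also covered by the layer enclosing it.

informal = {"values", "norms", "regulation", "process",
            "accountability", "creative work"}            # business/society
formal = {"regulation", "process", "accountability"}      # bureaucracy
technical = {"process", "generative models"}              # the IT system


def is_contained(inner: set, outer: set) -> bool:
    """True if the outer layer covers every concern of the inner layer."""
    return inner <= outer


def is_aligned(technical: set, formal: set, informal: set) -> bool:
    # Alignment = technical within formal within informal: the onion is intact.
    return is_contained(technical, formal) and is_contained(formal, informal)


print(is_aligned(technical, formal, informal))  # False: "generative models"
                                                # breaks through both boundaries

# One response (the second of the three discussed next): extend the formal
# and informal boundaries so they contain the new technology again.
formal |= {"generative models"}
informal |= {"generative models"}
print(is_aligned(technical, formal, informal))  # True: containment restored
```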
One response is, essentially, to contain the technology completely, by saying: don't use it, because it disrupts the existing order. The second approach is to change the boundaries of the formal and informal systems so that they contain the new technology: we accept it in society. And of course there's a third way, which is to manage all three systems together so that we maintain the containment relationship as they co-evolve. What the third approach amounts to is making sure that this alignment is continuously maintained. Some technologies have been absorbed quite well through exactly this kind of thinking.

Essentially, what this says is that when we design any technical system, we should think of it as a system of systems. We don't just design the technology itself and assume that everyone will accept it and that it will embed itself within existing systems; we have to design the social system and the formal system at the same time. Otherwise we break this containment relationship and cause disruption. So this is one way of thinking that ensures alignment between these systems and mitigates some emergent risks.

Here is an example, again from nuclear technology. There is a value system at the very outside, where the key values and norms are about safety, security, and peaceful use. There's a regulatory framework that implements it, both internationally and nationally. And the technical system has its own principles, with fail-safe, fool-proof, and fault-tolerance ideas embedded within the technology. So this is quite a mature containment relationship. When it comes to AI, we're not really sure yet. Although there are views about the technology and where it's going, there is no consensus on what is and isn't AI, or on the design principles it should follow. Regulation, the formal part of the framework, is still a step behind: there is the EU AI Act, while the UK is still thinking about a more innovation-oriented approach to regulation, and this is still under debate, so there's no consensus there either. But some values and norms in society are starting to emerge: enthusiasm about transformation and new opportunities, but also concern about ethical and social issues. So, again, we are in a situation where these three systems are on the brink of realignment, but through public dialogue and careful forward design we should be able to achieve this containment.

Now, just to introduce one more technique, the semiotic ladder; I'll do this very quickly. Something I've been doing is trying to make sense of resilience. Because I'm from Japan and was a nuclear engineer, I was of course very concerned, but also interested, in how Japan dealt with the Fukushima nuclear accident and how society dealt with it. We held a workshop in Tokyo on community resilience: how a society can deal with natural or man-made events and become resilient. For the sake of time I'll skip a few slides, but essentially the way I try to make sense of it is, again, to use ideas from semiotics: semiotics about meanings, intentions, and societal impacts. A minimal sketch of the six-level ladder I'll walk through next is shown below.
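As a small illustrative sketch (the wording of each level's reading is my paraphrase of the talk, not a canonical definition), the semiotic ladder and its community-resilience reading could be listed like this:

```python
# Stamper's six-level semiotic ladder, annotated with the community-resilience
# reading given in the talk, from the physical world up to the social world.

SEMIOTIC_LADDER = [
    ("physical world", "data and the sensing of reality, e.g. instrumentation"),
    ("empirics", "assembling signals into meaningful engineered components"),
    ("syntactics", "how components are structured and combined into a system"),
    ("semantics", "meaning: what the system is actually trying to do"),
    ("pragmatics", "intention: what the system is intended to do, and whether "
                   "that intention is shared"),
    ("social world", "societal effects: resilience as capacity, resistance, "
                     "and adaptation"),
]

for level, (name, reading) in enumerate(SEMIOTIC_LADDER, start=1):
    print(f"{level}. {name}: {reading}")
```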
So we can analyse any technical system using these six levels, going from the physical world, to empirics, syntactics, semantics, pragmatics, and the social world. If I apply this to societal resilience, the physical world relates to data and the sensing of reality, for instance instrumentation. Empirics is about assembling these signals into meaningful engineered entities, which are the syntactic elements. Semantics asks: what does that mean? What is this system actually trying to do? And pragmatics is about what the system is intended to do. Sometimes this intention is not shared, and unless it is shared, the societal effects, resilience as some sort of capacity, resistance, or adaptation, may not work. So what we did was use this framework to talk about resilience to major incidents such as nuclear accidents. We mapped it onto the six levels: the technical components; the processes of the infrastructure; and, importantly, the parties who need to understand the processes that run on top of it; up to the social decision-making processes, where communities are involved in sense-making and understanding, so that the societal response can be managed to achieve resilience.

In summary, what I wanted to do was introduce semiotic thinking to anyone dealing with technological systems, especially with data and information, so that engineers like myself do not simply go out and design things without knowing the unintended consequences for their acceptance by society or communities, while still achieving the societal goals we intend to achieve. Thank you.

Thank you very much. I'm trying myself to make the connection between what we heard from Alison earlier, the conceptual frameworks, and what we heard from Scott, and I suppose my question to you is: how do you think this way of thinking would impact the day-to-day work that people like Scott and the Turing Institute are doing?

Yeah, I was actually asking myself that over the last two days: how does this actually apply in this kind of environment? Maybe at the moment the greater benefit is in using AI and technology to automate some of the data cleaning activities and some of the routine processes that humans are carrying out. But if we start to depend more and more on AI, and especially as it moves into business processes, for example automated solutions embedded in business decision-making, then the misalignments may start to emerge. So for this community it is maybe still early days, but once this technology matures and the dependence on AI becomes more significant, we might have to start thinking in this kind of way, similar to some other industries that are further ahead in terms of societal impact.

Interesting. We only have time for one or two minutes, so please keep it really brief, but one question.

Okay, thank you. I have seen a few indigenous communities in Nigeria, and there are some similar issues, for example with companies that came into parts of their areas, where parts of the community were affected by the systems involved. In that kind of scenario, what do you suggest the communities do?
Yeah, I think that kind of situation is where, you know, you need experts: there are people who are experts in community participation in decision-making and in the co-design of systems. The main issue, again, is quite typical of technology systems: the end users, the people who are supposed to benefit from the technology, are not benefiting from it, either because it wasn't designed for their purposes or because they don't understand why it was done that way. So for me it goes back to sense-making, and to the effort made to engage the various stakeholders in designing systems.

Yeah, that's what I was going to say. Thank you very much. That draws theme 4 to a close, and I believe the spotlight session is next.