Okay. Good morning, everyone. Welcome to the research track here at the OpenSimulator Community Conference 2013. I would like to introduce our speaker today, Austin Tate. Austin Tate is Director of the Artificial Intelligence Applications Institute and holds the Personal Chair of Knowledge-Based Systems at the University of Edinburgh. He is coordinator for the Virtual University of Edinburgh and runs the Openvue OpenSimulator grid. Ai Austin is Professor Tate's avatar, representing the Virtual University of Edinburgh. This presentation explores support in a training-orientated I-Zone augmented by intelligent systems technology, with the aim of providing a virtual space for intelligent scenario-based learning. He has provided online resources to accompany the presentation and to assist if there are any technical problems in seeing his slides. They are accessible via http://atate.org/oscc13. I will also put that down in text. If you have any questions during the presentation, IM them to me; I will speak them in the stream to Austin and he will answer. Okay. So over to you, Austin. Thank you, Shirley. And thanks for joining us in this session. As Shirley explained, there is a URL available for resources to do with this talk. That will be persistently available after the session as well, so for those of you watching on a recorded stream, you can also use the URL that Shirley just mentioned. It will be on the next slide too for those who want to see that. There is a little poster to the right-hand side of the main screen, and for those of you in-world, if you click that, it will take you to the URL. So just to give a little bit of background: I decided to take a little bit of time out and go back to studying myself. I became a student last year. I did the MSc in E-Learning, delivered via distance education methods, at the University of Edinburgh in our own School of Education.
I did that really to improve my own experience of distance education and of methods for distance education, because I am the coordinator of that for our own School of Informatics. But this dissertation gave me an opportunity to look through a number of things that have been interesting me over the years in my own work, which is mostly in artificial intelligence. And I'm going to give you a bit of context in this talk on what we're working on and why, and on the application areas. The talk itself principally follows the work I did in my MSc dissertation. It let me organize a number of threads and put together a number of resources which we're now using with our own PhD and MSc students at Edinburgh. So it's partly an exercise in collecting materials and partly an exercise in trying to do a specific project within this space of interest to me. You'll see why the title "Activity in Context" is there, and I'll explain what I mean by that as we go along. But essentially, see this as a report on my own MSc work and the dissertation for that. The URL is on there: http://atate.org/oscc13. As I said, I'll leave that persistently available as a short URL for access to the materials. It also gives access to the dissertation itself; I'll remind you of that at the end of the talk. So let's make a start. My own area of interest is really in mixed-initiative approaches to education. That's where the tutor and the student work together, each taking the initiative at appropriate times. I want to see that supported by intelligent systems in all sorts of tutorial modes. And some of the particular areas I've been working in are not in higher education; they're in training for emergency responders and other people who are involved in professional training situations. So by mixed initiative, what I mean is that the various agents can take the lead in the interaction at appropriate times.
So this is in contrast to tutor-guided learning or student discovery-based learning. It's intended to be that rich mix that lets people work together in a learning environment. So I'm interested in how scenario-based training and learning works. What's the most effective way to support learners in some of these professional learning contexts? My research work has mostly been funded from US sources, working in a range of areas with civilian and military emergency responders, especially where they're working together in civilian and military teams. Now, the slide I'm going to show you here is one I'm going to spend a little bit of time on. If you can't see the slide, the resource link I gave you lets you bring up a copy, and you can follow along in a browser outside of the window if you want to. But I will read out the key elements of this flow diagram. It's a flow diagram of areas I explored during my thesis work. And I'm just going to bring up an overlay to give you an idea of the areas marked in red: the parts I'm mostly going to discuss in this talk, and the ones that were of most interest to me from an artificial intelligence perspective in the work I was doing. So what I did was look at what I wanted as a driver for this, which was the need for more effective ways of supporting community-oriented training sessions. We've got a community of people who are meant to be exploring a space for something they're going to do either professionally or in their work lives, and we're trying to help them, through scenarios, explore this space and understand how some of the procedures work in it. I'm just bringing up my own copy of that. So what I was looking at was some of the cognitive psychology roots that underpin situated learning, social learning, and learning in areas where you've got a rich scenario, a rich environment. I'll thank Beth for bringing that link up there.
So I was exploring some of these cognitive psychology roots. Now, this work has been going on for quite some time. Of course, there's a lot to do with situated learning, social learning, discovery-based learning. I had actually studied some of this 40 years ago when I did my own undergraduate degree, where I did a little bit of educational psychology work. But even though I have continued to work in artificial intelligence, it was many years since I had really read more up-to-date texts in this area. So this gave me an opportunity to come up to speed on that. It helps me now in being able to interact with my own students. But it also let me come up to date with some of the terminology in this area, and in particular to start to see that people have been making all sorts of interesting observations on how you can explore joint activity. And activity fits closely to my own area of artificial intelligence; I'm going to draw that out in a moment. But there's quite interesting work on how the world itself, in a training or learning environment, can constrain what you can do: knowledge in the world, or affordances, which allow you to constrain what's possible in the learning environment you're working in. So I was exploring that. What I was trying to do was understand how people model learning objectives and how they use those in designing their educational and training programs; how you can use community knowledge in a social learning context; and how you can use a model of the world state of the scenario to give you a constrained set of choices, so that the learning is directed in some way.
What I wanted to do was try to show that there could be an underlying representation of those that you could actually do some reasoning about while you were doing this kind of joint learning, to start to get the idea that intelligent systems could be brought to bear in this learning environment. In particular, because of my own background and research interests in AI planning, in plan representations, and in shared plan representations for human and system agents, I wanted to see that many of those objectives, that community knowledge, and that state information could be represented using the sorts of plan representations we're familiar with in artificial intelligence. So that's the first of the red-shaded boxes. The representation of agents, plans, activity, and state was a requirement driven by the psychological background, and it let me explore what I was already doing in a more technical sense with AI plan representations, while being able to link that back to terminology now in use in the educational and cognitive psychology area. What I then wanted to do was show that you could bring that to bear on designing scenarios for training in this mixed-initiative fashion. And for that, I use roadmaps, and I'm going to come back to a slide or two on these to explain them in a little more detail. So I wanted to use roadmaps that let you map learning objectives to possible ways you could constrain the world situation, in order to give people challenges within that constrained world: a learning environment where they could explore, use procedures, use their background training, and make choices inside the training area, inside the training facility, or inside the training tutorial room. So I wanted to have learner activities that we wanted to drive, and events occurring that were giving them some sort of constrained set of choices.
And this became the core of the dissertation work: this making of choices in a constrained fashion, hence the term "activity in context". I wanted the learners to discover the activities they could take in the context they were in, and in the constrained context that we put them in, through injecting appropriate events into that domain. And this is, as you'll see, a typical way that learning and training groups do work when they're training people for these kinds of professional environments. But then the back end of this, and we'll come on to it towards the end of the talk, is that I wanted to experiment with a facility inside a virtual world, inside OpenSim, that allowed us to create one of these training environments, which was more like an operations center for people to take decisions in an emergency response context. We'll come on to that shortly. So I'm going to move off that flowchart slide now and move on. So mixed-initiative training is really the focus of the work. And I wanted to bring together a number of threads, as I said, of work that I've been doing over the years. This gave me an opportunity to sit down, explore some of this, and do quite a lot of background reading that I'd not really had the chance to do before I concentrated on this dissertation. I wanted to study, as I said, the cognitive psychological foundations for situated, social learning. I wanted to identify effective learning methods relevant to the mixed-initiative interaction between agents that interests me, and to discover the relationship between those cognitive psychological activity models and the more research-oriented AI conceptual models of activity that I've been working on, especially in some of the standards activities for representing plans and processes.
And then to look at a methodology for how you might use these to design training scenarios and training environments, where you could use some of this background to give a more effective training-oriented zone of work. We call this an I-Zone, and you'll see that this links to previous work I've done on things called I-Rooms, which are intelligent rooms for interaction. So we're trying to create this virtual space for intelligent scenario-based interaction that I call an I-Zone for the purposes of this dissertation work. And what I wanted to do, as I mentioned at the start of the talk, was create, document, and demonstrate a resource base for experimentation and potential reuse in this area, so that other students, other people, could pick up the resources. The URL we've given you points to the places where a lot of the resources, demonstrations, and videos are available. The relevant educational psychology itself I'm not really covering in this talk at all. If you're interested in it, the dissertation has a couple of chapters which go into this in a bit more detail and draw out why certain themes are interesting to me. But it is to do with communities; it's to do with action and change in communities, and how you can use the power of a scenario, or a story inside a scenario, to give really good motivation to learners and give them an understandable and constrained environment in which they can learn. And then I just wanted to touch on the fact that there has been, of course, a lot of artificial intelligence in educational learning systems to date, indeed going right back to the beginnings of AI. There were attempts to create intelligent learning environments using artificial intelligence technology. And again, the thesis goes into the details of this in a chapter, with summaries and literature reviews that give an overview of why this stuff is important.
Now, the last 20 years of work in this area has updated what was in some of the early textbooks of the 1970s, so I would just point you at the dissertation; it's available online as a PDF if you're interested in that aspect yourself. But I wanted to use AI-inspired models of activity. In particular, we use an ontology we call I-N-C-A to underpin all of the representations we have of plans, activities, processes, agent capabilities, and agent interactions. This ontology is a very simple one, and it underpins some of the standards now available, for instance through the National Institute of Standards and Technology (NIST) in the US, and things which have become international standards. There's a core ontology in some of those international standards which is itself inspired by work that's gone on in a range of communities, including the AI community. I've been involved in some of the activities, like the MIT Process Handbook; the Process Interchange Format was part of that. And then there was the later work where all those groups came together with people from industry to work on the NIST process specification standards, which themselves led to one of the ISO standards. But this core ontology is a simpler thing. It's an abstract, simple description of activity. I-N-C-A stands for issues, nodes, constraints, and annotations. It's meant to be a very simple underlying ontology on which you can base many of these representations. So plans, processes, and capability models can be represented as a set of issues, a set of activity nodes, a set of constraints on those activity nodes, and a set of annotations that underlie what the process or the model or the plan is about. And typically there we might capture the rationale behind it, its purposes, and its links to objectives and agents. So this underlying ontology has been around for a long time.
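The I-N-C-A structure just described, a plan as issues, nodes, constraints, and annotations, can be sketched as a simple data structure. This is a minimal, hypothetical illustration in Python; the class names, fields, and the example plan are my own invention for this sketch, not the actual I-X implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Activity:
    """A node: an activity to be performed, possibly by a named agent."""
    name: str
    agent: Optional[str] = None

@dataclass
class Constraint:
    """A relation restricting nodes, e.g. ordering or world-state conditions."""
    kind: str      # e.g. "ordering", "world-state"
    detail: str

@dataclass
class Plan:
    """A plan as sets of Issues, Nodes, Constraints, and Annotations (I-N-C-A)."""
    issues: list = field(default_factory=list)         # outstanding questions
    nodes: list = field(default_factory=list)          # activities to perform
    constraints: list = field(default_factory=list)    # restrictions on the nodes
    annotations: dict = field(default_factory=dict)    # rationale, purpose, links

# An invented example: a fragment of an emergency-response training plan.
plan = Plan(
    issues=["How should the search teams be allocated?"],
    nodes=[Activity("brief-responders", agent="coordinator"),
           Activity("deploy-search-team", agent="team-alpha")],
    constraints=[Constraint("ordering", "brief-responders before deploy-search-team")],
    annotations={"purpose": "coordinate initial search and rescue response"},
)
```

The point of such a uniform representation is that learning objectives, community knowledge, and world state can all be carried in the same structure that an AI planner already reasons over.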
It's something I've worked on for many years, and a number of other communities have perhaps differently phrased versions of it; these core ontologies have been around in planners for decades. And I wanted to use the abstract models that underlie these representations to see if we could map them across into the themes that were coming out of the mixed-initiative learning environment work. So in particular, I wanted to find ways to map learning objectives to appropriate learner activities. That's where the road-mapping comes in. But I wanted to do it in such a way that you could have partial reasoning about that, and that's why these underlying, properly ontologically based plans are important to the way I operate. And then, as I said, you relate the educational plans, what you're trying to achieve in a learning sense on behalf of the learners, to the plans of the domain, the description of what's happening in the scenario itself, so that it makes sense in scenario terms. That's what the roadmaps offer in the dissertation approach and the methodology I was trying to develop here, and I'll summarize that as we go along. But then I wanted to use AI planning methods to actually compose some of these learning episodes, so you could have partial creation of learning episodes in an intelligent learning space done semi-automatically, rather than having teams of people write and drive those scenarios. Roadmaps are something that typically do occur: you find them in professional businesses and government agencies, and many of the large-scale programs in DARPA, for instance, in military research in the US, all have these roadmaps of what they're trying to achieve. And typically what you find in a roadmap is a set of requirements coming in, and a set of proposed experiments, proposed research projects, or things you're going to do.
What you're trying to do is relate those and show how, if you work on certain aspects of demonstrations or feasibility demonstrations and things of that kind, they both achieve your objective and demonstrate a technology. And typically the aim is for the people proposing technical experiments and technical projects, and seeking funding for them, to relate that to the objectives of the overall program. I've been on programs where I've been a program manager alongside other program managers, where we used the roadmap to drive what we tried to get out of the different projects people were proposing. So we encouraged them to write nodes of these roadmaps, so they show that they're meeting a requirement that feeds into the potential future requirements and future opportunities for using their technology, but they're also demonstrating their work well by finding an appropriate node on the roadmap and an appropriate thread in a particular demonstration we're doing. And now we're trying to use this in relating learner objectives to some of the situated actions that we want to encourage in the learning zone. So all of this is grounded in the fact that I'm interested in emergency response, and in particular operations centers for controlling events during an emergency. Remember, here we're not trying to simulate what's going on out in the field, where people are in police cars, ambulances, or fire engines, or they're landing in boats, or they're dealing with the tsunami on the ground and you're looking at the layout of the land. You saw some of that in Crista's talk yesterday, for instance, where they're very interested in the terrain and how you simulate elements of the terrain itself, because there you're trying to put the simulation onto the agents, or the avatars representing agents, or onto the vehicles involved and the physical infrastructure. Here it's different. We're in a closed building.
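The roadmap idea, relating objectives to candidate activities that demonstrate or exercise them, can be caricatured as a small coverage problem. This is a hypothetical sketch only: the activity names, objectives, and the greedy selection strategy are invented for illustration, not taken from the dissertation's actual road-mapping method.

```python
# Hypothetical roadmap: each candidate learner activity is annotated with the
# learning objectives it can exercise.
roadmap = {
    "run-triage-exercise":          {"prioritisation", "communication"},
    "handle-conflicting-reports":   {"communication", "information-fusion"},
    "coordinate-distributed-teams": {"delegation", "prioritisation"},
}

def select_activities(objectives, roadmap):
    """Greedily pick activities until every learning objective is covered,
    returning the chosen activities and any objectives left uncovered."""
    uncovered, chosen = set(objectives), []
    while uncovered:
        # Pick the activity covering the most still-uncovered objectives.
        best = max(roadmap, key=lambda a: len(roadmap[a] & uncovered))
        if not roadmap[best] & uncovered:
            break  # remaining objectives unreachable with this roadmap
        chosen.append(best)
        uncovered -= roadmap[best]
    return chosen, uncovered

chosen, missing = select_activities(
    {"prioritisation", "communication", "information-fusion"}, roadmap)
```

Here two activities suffice to cover all three objectives; in practice the mapping would also respect the scenario's own plan so the selected activities make sense in scenario terms, as described above.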
Typically it could just be a set of walls that we've got, and we're not even seeing outside. Everything that's coming in arrives via video feeds, radio messages, messaging, TV, internet, and so on. Typically we're in a closed room. Often it could be a secure closed room; it could be a bunker in a facility that's meant to survive earthquakes or tsunamis or whatever. And I've got some pictures here. These are pictures taken in the emergency response centre for the Tokyo Metropolitan Government, which I was able to visit to talk about some of the work we're doing and to look at the way their response centre works. It has multiple levels: everything from dealing with the public through telephone calls, through a command centre where the more technical people sit and watch what's going on with their sensor grids and with tsunami water levels and so on, right up to an area where the mayor of Tokyo can sit with military advisors, is briefed, and can take decisions. There are a few pictures in the slide set showing the emergency response centre at Tokyo. There are even mobile versions of these operations centres. Titan Corporation in America, for instance, makes a truck typically bought by people like FEMA, the Federal Emergency Management Agency in America. These can be sent out into areas which have been devastated by a natural or man-made disaster and become mobile emergency response centres, setting up temporary communications and so on. You typically turn up with a truck like that and it acts as the emergency response and coordination centre for the in-the-field emergency responders. We see these in Britain with our fire brigades, for instance: all our fire brigades have a control truck for emergency response coordination, which is something like that. That's a picture of myself and my colleague Gerhard Wickler sat inside the Titan truck.
So you can see it's a closed environment again. We're taking video feeds, we're doing briefings, we're taking decisions, and we're trying to do the best in the area we have. The picture on the right shows a future emergency response operations centre concept that was worked on in the DARPA programme I was involved in, to do with the planning initiatives. So these sorts of operations centres are what we deal with. Now, what I'm interested in is training for these centres. So these are pictures of the Personnel Recovery Education and Training Center, PRETC, at Fredericksburg in Virginia, USA. This is a training centre where people who are going to become emergency responders, or are going to be coordinators in real emergency response and search and rescue situations, typically train before they're deployed. What's typically going on here is that someone's acting as the main search and rescue coordinator, and they're dealing with all sorts of distributed centres, which in real life in the training centre are just along the corridor, but are meant to be potentially distributed across the world, across different agencies, or in different centres and at different levels of authority. And then typically at the end of the corridor you've got a group of people sitting in something called the white cell, where they drive the simulation, they drive the scenario. They have maps; they know what's going on; the search and rescue people don't. They know where there are going to be accidents, emergencies, or problems, or where an airman is going to ditch an aircraft, and they're working on this and trying their best to generate this dynamic scenario, so that the people inside the search and rescue training coordination room and the distributed rooms are driven to try out their procedures, understand what to do, and work well in the situation that's dynamically unfolding.
Now, of course, sometimes the people in the training centres work through it very easily and solve it all nicely, so they don't really get much out of the learning experience. So the white cell is there to dynamically adapt the situation to make it awkward, to deliberately cause realistic confusion. For instance, there may be mistakes over things like the call signs of the pilots involved, so that you get to the point where you think it might be the case that only one airman has ditched in the water, when in reality it was a confusion of call signs and some confusion of reporting; this sort of thing typically happens. So they've got this environment with basically a white cell, a main coordinator, and the main people being trained, who all typically take a turn in that room while being trained. And then you've got these other distributed coordination centres, so people see the problems of being under someone's control, and the problems of procedures either being followed or not followed. So this is the kind of context, and we've worked for a few years with groups like the PRETC at Fredericksburg, deploying some of our systems in an experimental fashion.
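The white cell's dynamic adjustment of difficulty can be sketched as a simple control loop: estimate how well the trainees are currently coping and pick the next scenario inject accordingly. Everything here, the event names, difficulty scores, and the performance-to-difficulty mapping, is an invented illustration of the idea, not the PRETC's actual process.

```python
# Candidate scenario injects, each with an invented difficulty score 1 (easy)
# to 5 (hard), echoing the kinds of events described above.
events = [
    ("routine-status-report", 1),
    ("ambiguous-call-sign",   3),
    ("second-ditched-airman", 5),
]

def next_event(performance, events):
    """Pick the inject whose difficulty best matches current performance.

    performance is a 0.0-1.0 estimate of how easily trainees are coping:
    the better they are doing, the harder the next event, keeping them
    challenged but not overwhelmed ('in the zone')."""
    target = 1 + round(performance * 4)   # map 0..1 performance onto 1..5
    return min(events, key=lambda e: abs(e[1] - target))
```

So a white cell, partially automated this way, would feed a struggling team routine traffic and a cruising team a call-sign mix-up or a second ditched airman.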
So what I wanted to do was bring all that together and try to create a virtual space for this kind of intelligent training, where you could actually try out that same sort of training, have the white cell partially automated, and try to generate dynamic learning episodes for people who are going to work in this sort of training situation. It's a multi-level experimental environment, and I'd direct you to the MSc dissertation if you want a bit more detail and explanation of why I've got these levels. But what I'm dealing with is the people involved, the agents involved, and the environmental objects involved: the things which are, let's say, sensor grids in the field, things like water-level sensors in a tsunami situation, or things that you're getting reports from. So I wanted both to have a level of that inside a virtual space, being simulated, and also to link up with real external training people, and this is typically what happens in these environments. You have some people who are outside the training environment, operating in a physical environment, and some people who are in the training environment and given the opportunity to interact with the people outside, so that they can get that realism of feedback, some sort of realism about what's going on. And it constrains the kinds of actions they can take, because there is this physical tie back to it. It's not something that can just be stopped and restarted; time really flows. You can't just say, "Stop now, I need to think about this for an hour"; something will happen during that hour in the environment, which means if you're too late then the situation's changed, and you've got to appreciate that. So this I-Zone I was working on is based on some work we've already done on what we call I-Rooms, which are virtual spaces for intelligent interaction.
These are pictures of I-Rooms inside OpenSim, and we have similar things in Second Life. We can deploy these within a few seconds out of inventory, and they can be connected up to intelligent systems of various kinds, including AI planning systems which can support standard operating procedures and suggest courses of action, and to external reasoning and argumentation systems. There's been quite a bit of work on this, and we've reported on it in journals such as IEEE Intelligent Systems and IEEE Internet Computing, so I'd direct you to some of the papers; they're referenced in the paper for this session if you're interested. But this is typically where people are coming together, brainstorming, working through problems, and trying to come up with courses of action. So with our methodology we're typically looking at the issues involved, looking at the events occurring and trying to make sense of them, then generating options for courses of action that we'd like to carry out, having argumentation and discussions on the pros and cons of taking one of those particular courses of action, and then doing briefings on the sensible courses of action, typically to people in authority, to try to get their approval but also to explain what their options are. And then we're often actually enacting that, either in a simulated training situation or in real life. These control centres are meant for situations where you have distributed teams rather than everyone in one location, and you're trying to bring them together using the centres. So, as I said, I'll point you at other work on these I-Zones and the papers that have been written to give you a flavour of what's going on there. Just to let you know, we've not just used these in emergency response situations; these same I-Rooms have been used with companies who, for instance, do multimedia game production.
We've worked with Slam Games in Glasgow, in what is really an I-Room for creating games over a long period of time, say three or four months, where people are talking about multimedia products and other things and meeting up with people in different countries who are doing artwork and music. They have this persistent space where they can discuss the game, its evolving design, and the materials around them. One of the other more fun uses of this was to create something called the Virtual World of Whisky: a Scotch whisky tutored-tasting room where you can go in and interact with people who are doing tutored tastings in real life, and interact with them in the virtual world to have tutored tastings. Some of the automated systems behind this can control the screens, show clips of videos, bring up pictures of the processes of making whisky, and do all sorts of other things. If you look up "virtual world of whisky" on YouTube, you can see a video of us doing a Burns Night tutored tasting using our I-Room, where we're interacting in a mixed-initiative fashion with a real tutor who was giving a tutored tasting down in London. He was a director of a whisky company, and he was working with people watching him and interacting with him in Second Life. The systems behind the scenes were using a Semantic Web knowledge base of 15,000 facts about whisky, and volunteering that information at appropriate times. There was a natural language generation system that let the assistants in the room do paragraph-length composition of information that they could offer to the tutor to give to the audience. And it also had the standard operating procedure support, tutor support, and explanation of other processes, using what we call the I-X AI planning system. So all that kind of technology existed, and I was bringing it together for some of the work that
you're now seeing in the I-Zone. We wanted to bring this together by embodying it in someone that could look like a tutor. So if you see Ai Austin in front of you: these are four Ai Austins that we had here, all NPC bots inside OpenSim, cloned off my own appearance, and all holding a little tablet as an attachment. The little thing that looks like an iPad or Android tablet is actually an attachment that lets the NPC bot connect up to these external knowledge-based systems and intelligent agent systems. So it can connect up to our I-X planning systems, but it can also connect up to chatbot systems like Pandorabots' ALICE bots; in particular, we connected this to the MyCyberTwin chatbot technology. And what we had there was an ability both to chat and to use the intelligent agent to decide what to inject into the area. So this gave us an embodiment and a framework for delivering some of the ideas I've been showing you during this talk. Remember that I was really trying to create resources here for students to be able to take, use, and move forward with. These same chatbots, with that little attachment, can now control the artifacts in this room as well as chatting into the room, chatting to the people being trained, injecting things into the scenario, and acting like a colleague or an assistant, however you want to see it, inside this training space. They're really becoming the equivalent of that white cell of trainers I mentioned. This avatar could be thought of as one of the team members, but one deliberately trying to constrain and cajole and inject events into the training environment to make sure we're achieving our learning objectives. And this same bot can, through chat to various devices in the room, control things like the screens and the incoming messaging and other things, so it can be a bit more like the Starship Enterprise sort of computer, where you can ask it to
do things for you, ask it how you might do things, or ask it to generate a plan of action or call up a standard procedure you may have forgotten. So it becomes a kind of colleague in the room. This was part of the experimentation we were doing on the dissertation. So overall, the methodology I wanted to explain here, and that was meant to be the flow of it, is that we had this embodiment of the target training situation, the I-Zone; think of it as a virtual world operations centre. But to give an immersive and engaging user experience, I wanted natural constraints from the scenario itself on what you can and can't do. The interaction with this environment gave you a realistic situation that left open just those choices that put you at this high-quality learning point where you were "in the zone", as we might say, for what you can learn, what you can do and how we want to push you. Then we want to set up appropriate, realistic, challenging and motivational task objectives that make it fun and engaging, exciting even, for the trainees involved. Making it realistic and challenging really suits the sorts of people we want to train: they're usually highly motivated emergency responders. We've worked with real emergency responders in these training exercises and in some of the simulations we've done, and they're driven by really having to make tough decisions, because they realise it's good training and it's exercising them in areas where, in real life, they would hope never to have to take some of the hard decisions we force them into taking. So we want to carefully select and inject scenario events into this environment so that we keep learners in this highly motivated, effective zone for learning. It's about inducing context-specific activity to get the learners to
respond. So I'm just going to summarise this; it is the methodology I want to treat as the outcome of my MSc dissertation work. The idea is to constrain the world situation and the activities which are possible in this environment, then select or generate, perhaps semi-automatically, relevant tasks and events, and then inject these into the learning situation to keep learners in this learning zone of highly effective, highly motivated hard learning, where they're learning most and getting the most out of the training we're trying to give them. So we want to induce appropriate learner activity in context. I'll just repeat the URL for you; hopefully that's viewable on the Ustream to people watching live and on the later recordings: http://atate.org/oscc13. We're going to make sure that the full MSc dissertation and resources continue to be available at that URL. So thanks very much. Thank you so very much, Austin. At this time, do we have any questions from the audience? OK, it looks like we have nothing. I thank everyone here for coming to the session, and thank you again to our presenter, Austin Tate, for an excellent presentation. Right, thank you, and thanks, Shirley. Thank you for coming. Whoops, we have a question after all. Go ahead, Tom; I'll repeat your question, Tom, if you just want to put it in chat. OK, thanks, Shirley. Tom, Krista, do you want to... All right, I've got Krista's question; shall I read that out, Shirley? "How many of these learning scenarios did you do so far?" So I'll come back to Tom's in a minute. How many of these learning scenarios did you do so far? The work that preceded this was on the Open Virtual Collaboration Environment (OpenVCE) project. Some of the assets, the OpenSim archive files for instance, the open-source ones, were created with Clever Zebra, and we made those
available in open source. That work was done originally with US Joint Forces Command and with the US Army Research Laboratory's Human Research and Engineering Directorate (HRED), and it was done with a community of emergency responders, two sets, and we did two week-long scenario experiments for that. That's fully documented, the resources are available, the data was made available in a public way, and it's been written up and published in a number of journals, including IEEE Intelligent Systems. So that was two week-long exercises, and that's the main actual experimentation we've done with humans, with the people we were trying it out with. I explained that we'd worked on two further projects; we had two three-man-month projects funded by European Union grants, one with SLAM Games in Glasgow, which took place over a three-month period and through which one of these scenario-based areas was created, and one for the Virtual World of Whisky. Now, in terms of my own dissertation, I didn't do any further experimentation with other people; we drew on that previous work as a community. Since this took place, we've worked on a further project with the US Army, the Dismounted Infantry Collaboration Environment work. That work is done with Jeff Hansberger over in the US Army Research Laboratory's HRED division, and again it has not actually had an experiment involved with it; it was the development of the resources so that they could be used. Our role on the project has been to create the virtual world, intelligent systems and web portal elements that could then be used by the experimenters themselves for their experiments. OK, thanks, Krista, and let's go back to Tom's question: how does the system cope with emotional responses of the participants? Well, I don't think I've got an answer to that, the emotional responses; I think I'm going to need Tom to explain a little bit more about
what he's looking for in terms of that. If you mean in terms of trying to respond to the stress and things of that kind, and whether we're putting them under stress, is that what you mean? Indeed, I see Tom typing away. If it's to do with that, I don't think I have a good answer for you. We're effectively trying to ensure that the folks are given interesting, challenging and highly motivational tasks; I've not looked at all at the issues of stress, levels of attention or levels of excitement in our work. I should say, relating back to Krista's question, that I'm not a human experimenter or social scientist, so in my work I've tended to provide an underpinning, a technology, a platform, a framework that can be used in these experiments, and then I team up with other people, the human experimental psychologists, who do the actual experimentation. So when we did the Whole of Society Crisis Response community work that was funded by US Joint Forces Command and the US Army, the HRED people themselves at the US Army Research Laboratory, their human psychologists, actually did the experimentation, did all of the analysis and did the publications on it. OK, we have another question coming from Krista. I think, just to comment on something Tom is saying about the white room, or the white cell as it's sometimes called in military training: this idea is important. They call it the white cell, I think, because in a military situation they might have red forces and blue forces, friendly and enemy forces, and they often create scenarios where the white cell is there to be the people trying to keep everyone in this learning environment and trying to drive the simulation forward. They typically inject events
and they have a master scenario events list, as it's typically called, where they take things off the list and put them in at appropriate times to keep people driving along. My approach is very much motivated by that idea of trying to put things in at appropriate times that are correct and seemingly right. I was involved in something many years ago, with Rediffusion simulators on a project in the UK under the Alvey programme, where we were dealing with people being trained to navigate ships in the English Channel off the south coast of the UK. The idea was to put them into a situation where they had to respond as other boats came around them, and if the people being trained got out of the situation too quickly, you tried to put them back into it by forcing them into situations that were becoming more of an emergency, more dangerous, so they had to properly learn how to control their boats and how long they took to turn. In that case there would typically be only one person commanding the boat, and you might have had 20 or 30 other people controlling all of the various craft around them. This idea that you can use AI systems and partially automated systems to generate a plausible set of behaviours for these other agents is something I'm really trying to emulate in this work. OK, I've got a question from Krista; I'll just read it out: "In your MSc studies did you compare any of this to MOOCs, or is that even comparable with MOOCs, because MOOCs have zero awareness?" Well, I actually do a MOOC myself, on AI planning, on the Coursera platform, which we started last year. Any of you are welcome to join in if you're interested, because we're repeating it again in January 2014. I didn't draw any relationship at all between the MSc studies and MOOCs, but I'll make an observation: I don't think
MOOCs have zero awareness. MOOCs aren't something that is just posted online; they do have people behind them, they have the tutors behind them, and a lot of us who are interested in social learning, collaborative learning and community-oriented learning through MOOCs see them, rather than the sort of publish-and-let-it-go kind of MOOC, really as a social event. So they don't have zero awareness in my view, because we're there as tutors, we're there as professors and we're there as teaching assistants, trying to keep it interesting. On our own MOOC, for instance, we are definitely a community of 20 or 30 people from the research community in AI planning, all heavily involved and heavily engaged with our students. So I don't think I would directly compare it, Krista, but I certainly wouldn't say MOOCs typically have zero awareness either. OK, thank you again, Professor Tate. At this point we have to wrap up. Thank you, everybody, for coming, and we hope you enjoy the rest of your day at the OSCC. Thank you. Thanks, Shirley.
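
[Editor's aside, not part of the original session: the white-cell event-injection methodology Austin describes, constraining the world situation, selecting events from a master scenario events list (MSEL), and injecting them when trainees drift out of the effective learning zone, can be sketched in a few lines of Python. This is a minimal illustrative sketch; the class names, event names and the simple numeric "challenge" model are all hypothetical, not taken from the I-X system.]

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioEvent:
    """One entry on the master scenario events list (MSEL)."""
    name: str
    difficulty: float  # how much challenge injecting this event adds

@dataclass
class TrainingSession:
    msel: List[ScenarioEvent]                      # events held by the white cell
    challenge: float = 0.5                         # current challenge level, 0..1
    zone: tuple = (0.4, 0.8)                       # target "in the zone" band
    injected: List[str] = field(default_factory=list)

    def step(self, drift: float) -> None:
        """Advance the scenario; trainees resolving events lowers challenge."""
        self.challenge = max(0.0, min(1.0, self.challenge + drift))
        low, _high = self.zone
        # White-cell rule: if the trainees have got out of the situation too
        # easily, inject the next MSEL event to pull them back into the zone.
        if self.challenge < low and self.msel:
            event = self.msel.pop(0)
            self.challenge = min(1.0, self.challenge + event.difficulty)
            self.injected.append(event.name)

session = TrainingSession(msel=[
    ScenarioEvent("aftershock reported", 0.3),
    ScenarioEvent("comms outage", 0.25),
])
for drift in (-0.2, 0.1, -0.4):
    session.step(drift)
print(session.injected)  # → ['aftershock reported', 'comms outage']
```

In a real deployment the `step` trigger would be replaced by observation of the trainees (chat activity, task progress), and event selection could be planner-driven rather than first-in-first-out, but the loop structure is the same.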