Alright, it looks like our audio is good, so as soon as we get connected we'll start. It looks like we're connected. So hello everyone, and welcome to the 9:30 breakout session of the OpenSimulator Community Conference 2013. As a reminder to our in-world and web audiences, you can view the full conference schedule on our website at conference.opensimulator.org, and you can post your questions in local chat, in the Ustream chat, or tweet your comments using hashtag #OSCC13. This hour, we're happy to introduce Ramesh Ramloll, who will be presenting "Virtual Exercise Design in Immersive Virtual Learning Environments: Recent Emerging Approaches." Ramesh has been developing immersive virtual learning environments for diverse user groups for the past seven years on platforms that include Second Life and OpenSim. He is currently the CEO and CTO of DeepSemaphore, LLC, an e-learning and simulation solutions company. Welcome, Ramesh.

Hi, I'm very happy to be here. Just let me know if you can actually hear me; send some feedback. Okay, so here I am. I'm right now on the east coast, in New Russia, New York. I'm going to present today my experience designing virtual learning environments, especially for training, an experience that spans quite a few years. I started around 2005, 2006 in Second Life, and I moved to OpenSim last year, so the transition has been interesting. I'm happy to share my journey with you, and I'm aware that a lot of you are probably already on the same path, yourselves involved in developing virtual environments on both platforms. One thing I wanted to say about the title: when I speak about virtual exercise design, at the start I thought it should actually be "for immersive virtual learning environments," and then I realized that most of the time I spend designing virtual environments for learning is actually spent in situ.
I spend most of the time interacting with my audience, with my clients, within the virtual world itself, which is kind of interesting; it's not typical. We do have face-to-face interaction, but that's maybe 10-20% of the time. So let me try to move here to the next slide. What I was trying to do is talk about the various challenges I faced at the operational, technical, usability, and learning-and-evaluation levels. Perhaps it won't be as well structured as the 1, 2, 3, 4 list here, but most of the information is in there, and I'm going to try to focus on each of them and provide actionable pieces of advice, given the amount of time I've spent with the target users and the range of simulations I've been involved in developing. Regarding questions and comments, I don't mind if you type in your questions as they pop up in your mind. I'll try to follow the chat here and catch them. Please don't mind if I miss them; just keep typing and repeat the comments if I miss any.

So what is virtual exercise design? I thought I would define this, because sometimes it's not very obvious, given the feedback I've gotten from a few reviewers on some of my federal grant submissions. So let me first pin down what I mean by "exercise." There are a number of common aspects of an exercise that I personally view as important, so let's review those. The first aspect, broadly speaking, since we are still at the philosophy level here, is the demonstration aspect. When an exercise is being carried out, it allows people who are observing it to get a sense of the various steps or actions needed to produce a particular outcome. And through observation of somebody demonstrating a certain skill, you can also evaluate that skill.
The second aspect of an exercise is practice. One of the core elements of an exercise is the ability to repeat it, either to become better at it or to refresh your memory before you do the actual thing, and typically the practice happens in a safe environment. The next component is the collaborative aspect of exercises. The collaborative aspect allows the students, the people involved in the exercise, to understand the nature of dividing and conquering complex operations through team interactions. There are some exercises that don't require collaboration, but there is also a very large set of exercises that require people to interact with each other, divide the task, and actually solve it. And the last component is evaluation. It's tied up a bit obliquely with the first one, the demonstration part: it enables you to measure learning and skill acquisition, and ideally you want to measure skill transferability from virtual to real.

Once we are clear about those definitions, we can start thinking about what virtual exercise design is. We want to take the exercise as it is done in the real world, shift it into the virtual world, and see whether the virtual exercise provides affordances for the aspects we identified as the main ones of an exercise. All that is philosophy, but at least we are clear about what we are talking about. Some of the previous projects I've been involved with that involve virtual exercise design: the first one is a pandemic influenza emergency preparedness training effort that involved a number of universities. Here, as you can imagine, a lot of the collaborative aspects of an exercise came into play.
If you are teaching a triage protocol, whether it's START or the Emergency Severity Index (ESI), it's actually a deeply collaborative activity; or when you want to explain what span of control is and how to assign different roles to different individuals, and then release all these students into an environment and have them solve a particular emergency response task, there's a lot of collaboration that comes into play. So that describes the goal of the first project. The second one is a bit more on the demonstrative side. There are a number of projects out there where avatars perform exercises, and there are researchers who have produced results, in an oblique way, showing that if you have your avatar doing exercises in the virtual world, it translates into the real world, et cetera. On my end, I was looking from a different angle: I was trying to look at how to use the virtual environment to help somebody perform yoga poses in a very detailed way. I'll talk about that in further detail. For the third one, I was involved in a virtual reality therapy solution that promotes behavior modification in individuals with daily living challenges. Many of these individuals have social behavioral issues, and we had to design a virtual environment to teach them some basic aspects of daily living. The last one, which I'm currently engaged in, is hazardous materials training; as you can imagine, that's related to the emergency response training. We'll get a chance to touch on that. As I said, I won't go into detail about what the projects are, but I'll pinpoint some design guidelines that I learned, so that you can take them and inform your own efforts. It's just an opinion; you might think that's not the right way to go about doing things.
The first kind of exercise I'm talking about is the very simple kind where somebody is doing a physical exercise. Yoga, as done in the real world, has a series of poses, and a lot of attention is placed on breathing; you need to understand the two together in order to get the benefits of the exercise. In this case, I spent a lot of time animating an avatar, but at the same time I tried to represent, to allow users to see, things that they cannot visualize in the real world. For example, when you have an instructor teaching you a vinyasa, which is a flow between various poses, there are changes in the breath pattern. It's not easy for somebody doing an exercise to say at the same time, okay, now I'm breathing in through this particular pose, and I'm exhaling through the other one. It's difficult to do the exercise, breathe the right way, and teach your student through speech all at once. Every time I think about implementing something in the virtual, something clicks in my mind: what can I do in the virtual that cannot be done in the real? To me, that's the key. I don't want to fuss around with a lot of unconvincing reasons why we should do it in virtual; let's look for a very hard-hitting goal. In this case, you have the person doing some stretches in the sitting position and then doing a downward dog. At the same time, you get to see a kind of progress bar, a circle above the head, that gives you information about how much inhalation has occurred and how much exhalation is happening throughout the animation. Just an example highlighting the demonstrative aspect of a physical exercise. Let me move on to the next example here. Oh, wow, that slide came in fast.
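The breath indicator described above boils down to mapping elapsed animation time onto an inhale/exhale phase and a completion fraction. Here is an illustrative Python sketch of that logic; the pose segments, durations, and function names are invented for illustration, and an in-world version would live in a script driving the overhead indicator rather than in Python.

```python
# Sketch only: map elapsed time within a looping vinyasa animation to
# (phase, fraction), which could drive an overhead breath progress bar.
# Segment durations below are made up for the example.

VINYASA = [
    (4.0, "inhale"),   # e.g. rising into the seated stretch
    (4.0, "exhale"),   # e.g. folding into downward dog
]

def breath_state(t):
    """Return (phase, fraction in [0, 1)) for elapsed time t, looping."""
    total = sum(duration for duration, _ in VINYASA)
    t = t % total
    for duration, phase in VINYASA:
        if t < duration:
            return phase, t / duration
        t -= duration
    return VINYASA[-1][1], 1.0  # unreachable with t % total, kept for safety
```

A timer event would call `breath_state` each tick and update the indicator's fill level and color for the current phase.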
The reason I talked about the previous example is that I'm trying to move like a bee, flying over a few flowers, showing you various aspects of exercises. One of them was the yoga one, because I felt it was the best example to illustrate demonstrating something in the real that you can enhance in the virtual. You could equally choose something like assembling an engine or dismantling a computer into its parts, et cetera; I chose this example because I was involved in that project. Very quickly, for this slide I was planning to talk about issues I faced when migrating from Second Life to OpenSim, but I see that most of you are already familiar with these issues. Just like Kay mentioned before me, the transition is only difficult in the beginning, at least for me, when I realized that I had to leave behind more than $150,000 of content which was trapped in the Second Life platform. Even though you can sell things on the marketplace, when it comes to projects that are funded federally or by clients, there are a number of things that really cannot be achieved if you're using the Second Life platform. I was a bit afraid about the technical capabilities of OpenSim, but I'm much less so right now. I also see there are some technical advantages to developing in OpenSim; even at the basic user level, there are a lot more technical advances in OpenSim than in Second Life. That's a pretty controversial thing I'm saying here, but when I realize, for example, that I now cannot do without non-player characters, and that this was not available in Second Life, that's a big one. The only thing I might be a little worried about is the physics engine. We have been promised the Bullet engine for quite a while; I'm still waiting and hoping that we are going to get a good physics engine. But the core reason why I'm on OpenSim is that I am in full control of my content.
That's the real reason, and the other reason is that clients don't need to keep paying Second Life just to have their content hosted. And when we talk about content, it's not content in inventories; it's content deployed on a sim. Those are two different things, because it takes a lot of effort to actually take things out of your inventory and assemble them on a region or a sim. Okay, so I'll move very quickly to the next slide, because I don't want to spend too much time on the transition.

Now we are coming to things that you can use. It's kind of controversial, but that's okay; I don't assume this is the best approach, and there are probably many better ways of implementing things. One of the things I've noticed, even though I'm guilty of it myself: try to avoid numerical scores to represent progress through an exercise. Instead, strive to represent progress through changes or modifications made to the environment as a result of actions performed during the exercise. For example, I can choose to have a student solve a certain task and then give a number for how fast they completed it, or I can design the environment in such a way that, as they are assembling the thing, you can tell just by looking at the evolving result how much progress has been made. The form of something usually gives you an idea of the history of the steps that led to the current stage, so there's much more information in that than in a score. In a shooter game, for example, the current design principle is that you give the person a score, a number, but you could also just have corpses lying around to give an idea of how much progress you're making in a combat situation.
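As a toy illustration of this guideline, progress can be derived on demand from the visible state of the scene rather than stored as a separate score. This Python sketch (all names invented) reads "progress" off which slots of an assembly are currently filled, so the evolving result itself is the record of what has been done.

```python
# Sketch only: no score counter is kept anywhere. Progress is computed
# from the environment's visible state, mirroring what an observer sees.

def visible_progress(scene):
    """scene maps slot name -> placed object (or None if still empty).
    Returns the filled slots and the fraction of the assembly done."""
    placed = [slot for slot, obj in scene.items() if obj is not None]
    return placed, len(placed) / len(scene)
```

The design point is that deleting the score variable is not a loss: the table half-laid, or the engine half-assembled, already encodes the history of steps taken.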
Of course, there are many technical and optimization issues that sometimes pull you away from this guideline. Right now, this is what I think should be the ideal, and we can try to optimize if we find that technically it's not possible.

The second thing I realized is that there is a dividing line between casual users and virtual environment designers. If you look at the user interface of the viewer, it powerfully reflects the assumption that the need to manipulate a virtual environment is primarily of concern only to the 3D modeler. What I mean is that if you look at all the tools provided in a viewer, most users don't actually use them; that's something obvious that I'm saying, so I apologize. Most users walk around and touch things; they want to interact with the environment the way they do in the real world. So there is the issue of trying to satisfy both the 3D designers and the casual users. This has been debated on the lists for ages, but I'll give you specific examples that will make it much clearer. Yeah, this slide is kind of repeating the last one: the 3D-modeler-centric viewer UI is a barrier to target user interaction. There are two things I found, at least with my target audience, who are typically not computer game players: they have a lot of difficulty with the camera, and the inventory, even though it's an obvious thing for us as designers, is one of the aspects of interacting in virtual worlds as they are now that is troublesome. And I assure you, having actions that require inventory interaction really hurts flow.
One of my guidelines regarding camera control is that, whenever possible, I try to create situations where manual camera control is kept to a minimum. That's kind of difficult to imagine in a world like Second Life or OpenSim, but we can have some automatic camera control: you sit on a chair and the camera swings to the right position and orientation; if you're looking at a poster, you have auto zoom; there are all kinds of tricks like that. The broad guideline is that whenever I design, I try to minimize manual camera control. Perhaps this will change when we change the actual hardware for accessing virtual worlds and start wearing HMDs; then it's not going to be a problem, I think. HMD stands for head-mounted display, with head tracking and all, just like the Oculus Rift. Alright, the other thing, about inventory manipulation: as I just mentioned, it should be kept to a minimum because of its propensity to dampen emotion and flow. I'll show you an example; I'm just hoping I get a picture. Alright, here's a picture. Initially, when I was working in Second Life, I would spend a lot of time teaching people how to interact with the inventory. You can imagine somebody going into a virtual environment and spending the whole first training session learning about the inventory, camera controls, and chatting, without having done a single thing. This is deeply discouraging, and with people with a narrow attention span in space and time, it's not easy to keep them interested. Whereas in OpenSim, one of the things you can do, for example, is have fast attachment of objects. You can see this in the lower picture.
If somebody wants to wear all the personal protection gear for a firefighter, they can just walk up to a non-player character displaying all the things they need, click, and get all their gear on. The same goes for all the other uniforms. So we are bypassing the inventory. The only problem with this is that, of course, the attached objects get stacked up in your inventory, so inventory management becomes an issue. If you start with the goal that you need an easier way for people to wear things, that leads immediately to new ways of dealing with attachments, and to the question of whether they really need to stack up as thousands of copies that fill your inventory. To me, it's obvious that that's not the way to do things; at least I am clear about that. On the next slide, I'm showing that instead of a user getting all the gear in one click, you may want them to know where they can access these various pieces of equipment in the real world. So you have a fire truck here with all these different compartments, and the user has to walk towards the fire truck, open the various compartments as needed, and pick the items; with just a single click, they wear the right thing. Here, the other person is wearing a hat. They could wear the SCBA, or they could even pull out a hose line and things like that. But really, you want all actions to bypass the inventory. The inventory really destroys flow; I cannot stress that enough. And I think that has a lot of implications for how we should move forward when we think about improving the viewer, or even the underlying infrastructure of OpenSim. So, as I said, direct manipulation, that urge to change the environment, is different for an end user; it's not the same as somebody pulling, pushing, and importing mesh objects and adjusting them and all that. Those are two different classes of things.
We cannot assume that those two needs are going to be satisfied by a single set of tools. We should support direct manipulation at the scripting level, and if we find that some of those direct manipulations are useful for end users, we can take them and inject them back so that people who do the 3D modeling can also benefit from the advances made in direct manipulation for users. I'll give you an example, don't worry. Okay, let's move to the next slide. That's a good example: while it is trivial for a 3D modeler to pick an apple and put it on a table through the edit tools, this task is a frustrating one for a casual user. Let's say you walk into a virtual world, you see an apple in a basket, and you want to put it on the table. If you're not a 3D modeler, you wonder, how on earth do I do that? Is it stuck in the basket? So we are really moving away from the traditional view of games here. Could somebody let me know how much time is left? It's 1:04, and I started at 12:30, wow. Okay, so that's a difficult task for users. How do I make it simple for a user? Next slide. I've talked about how I'm providing solutions through scripting. Given two objects, what is the simplest way we can select one to facilitate this task? Well, I feel that what I'm saying out loud is much more fluid, and I might have repeated myself through the slides by explicitly writing out what I was trying to say, so I'm sorry for flipping through the slides too quickly. What I was trying to say here is that when I was trying to solve the simple problem of selecting an apple, moving it, and placing it on a table, I came up with a number of solutions, and each of them had a problem, until I landed on a solution that doesn't seem at first to be quite intuitive. So this is what I did next.
If you can zoom in on the picture of this table: I was trying to have a setup where a user who is learning about meal planning had to pick objects and utensils from the basket and lay a table, and they had to do it in a direct way, without having to go through the inventory. The first problem is, how do you select an object? In my first approach, I had big handles on each and every object, and when you click on a handle, it tells you which object is selected. But you can see that visually this is not appealing. The handles are difficult to design, and they force people to learn camera orientation so that they can actually see the handles and click on them. So very quickly I moved away from this approach and landed on another solution, which I call temporal grasping. It's slightly different, in the sense that an object is selected only when you keep left-clicking, that is, touching it, for a minimum amount of time. So if I have an apple and a plate and I have to put the apple on the plate, I press left-click on the apple for more than one second, and it gets selected; then I click on the plate, and the apple gets placed on the plate. Like I said, it's a very simple problem, but the solution is not obvious. When I experimented with this approach, I found that it's actually quite workable. In this case, you don't need to fiddle with your camera position to find a handle on the object and click on that handle to tell the system that the right object is being selected; you just click on the other object to say, okay, now I'm going to...
so that the system knows that the other object is actually the target position; that's how the system knows that the apple needs to go on the plate. You don't need any extra handles, and when you look at the environment, it looks pretty much the same as any 3D environment. For example, in this case I could make it easier for somebody to make a sandwich. It's not one of the best things you want to do in a virtual world, but if you're explaining things about meal planning, for example, you have the user there, you have a number of food items on the table, and just by clicking on the various objects, they can put them on the plate. All the objects in this environment are targets; there are no specific targets. The environment is fully graspable, in the sense that if I see a bottle, I can pick up the bottle, put it on the table, and then put it in the refrigerator. Every object has scripts that enable it to become a possible target. As a design guideline, you don't want to add too many notifications to your environment. It makes things really ugly and noisy, and you want to avoid that. You want things to look, as far as possible, the way they would look without any additional information overlaid on them. That's my design goal. It's a hard one, but once you find the solution, you'll find that it really improves your interactivity a lot. Okay, next slide. Again, I'm using the same principle here to demonstrate how you can use the same approach to allow users to create content. In this case it's not exactly Lego, it's a bit more than Lego, in the sense that you're picking any one of these objects and assembling them together.
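The temporal grasping mechanic described above reduces to a small state machine: a click held past a threshold grasps an object, and the next ordinary click names the target. Here is a hedged Python sketch of that logic; the class, method names, and threshold handling are my assumptions, and the real thing would be implemented in the objects' touch scripts in-world.

```python
# Sketch only: the two-step "hold to grasp, click to place" interaction.

HOLD_THRESHOLD = 1.0  # seconds of sustained touch needed to grasp

class TemporalGrasp:
    def __init__(self):
        self.grasped = None  # id of the currently grasped object, if any

    def on_touch(self, obj_id, hold_seconds):
        """Handle a completed touch on obj_id that lasted hold_seconds."""
        if hold_seconds >= HOLD_THRESHOLD:
            self.grasped = obj_id            # long press: grasp this object
            return ("grasped", obj_id)
        if self.grasped is not None:
            source, self.grasped = self.grasped, None
            return ("moved", source, obj_id) # short click names the target
        return ("ignored", obj_id)           # short click, nothing grasped
```

Because every scripted object can play both roles, the whole environment stays "fully graspable" without handles or overlays, which is exactly the visual-cleanliness goal stated above.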
For example, if you want to put a flower on a stem, you select the flower by left-clicking on it for one second, it gets selected, and then you just touch and release where you want it to go; it goes there automatically, and it gets oriented automatically. Basically, what I'm doing is that the normal of the moved object gets aligned with the normal at the target point. Simple things like that have a tremendous impact on direct manipulation by users. They don't even need to know how it works; what they see is, hey, I have two objects here, I want to put one on top of the other, and I can select one and put it on the other in an intuitive way, without any extra manipulation. With the same basic infrastructure you can build a board game very easily, without any extra work, and users can move pieces around just like they would in the real world: select a piece, click somewhere else, and the piece goes there. So when you're faced with this kind of direct manipulation, you start thinking: shouldn't this be a feature of the viewer from the ground up, rather than having all these different things implemented in scripts? When you implement things in script, you're wasting a lot of resources, and there are many other constraints; the number of listeners, for example, is limited, and you really need to optimize your code a lot to make those interactions possible for a large number of objects with the same kind of behavior in an environment. As I said, there are technical constraints that hurt usability, but at least we should have our design guidelines clear, and then we try to optimize and work around them if possible. And you can see now how different our class of virtual environments is from the typical games out there.
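The automatic orientation step amounts to finding the rotation that carries the moved object's reference normal onto the surface normal at the target point: the axis is the cross product of the two normals and the angle is the arccosine of their dot product. In LSL this is essentially what llRotBetween computes; the plain-vector-math Python below is only a sketch, and the function name is invented.

```python
# Sketch only: axis-angle rotation aligning unit vector a with unit vector b.
import math

def rot_between(a, b):
    """Return (axis, angle) of the rotation taking unit vector a onto b."""
    axis = (a[1] * b[2] - a[2] * b[1],   # cross product a x b
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    dot = sum(x * y for x, y in zip(a, b))
    dot = max(-1.0, min(1.0, dot))       # clamp against float drift
    return axis, math.acos(dot)
```

Note that the antiparallel case (a equals -b) leaves the axis degenerate and would need special handling in a production script; llRotBetween deals with that internally.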
In a typical blockbuster game you have a very narrow set of interactions, whereas in OpenSim or Second Life you have the opportunity for much finer interactions, but we do need to spend some time to facilitate them. Alright, I'm coming to my last few slides here, on the evaluation aspects. My kind of evaluation is directly rooted in the following principle: I create an evaluation exercise to help me improve what I was already trying to achieve. My evaluation exercises, at least the way I design them, are not there to throw my original goal out just because the results tell me so. For example, I won't design an evaluation exercise that compares face-to-face versus virtual, look at the results, and say, well, face-to-face is better, and therefore I'm going to throw virtual solutions out the window. No, I believe in the virtual solution, and I'm going to construct evaluations that allow me to improve it. So a lot of my evaluations are directly focused on the nitty-gritty of the human-computer interactions in the virtual world. This has allowed me, for example, to learn that having interactions go through the inventory is maybe 100 or 200 times worse than having interactions happen straight away inside the virtual environment, in situ. So those are my ideas about evaluation. Of course, there are other levels of evaluation where you want to evaluate learning, and you can bring in the educational technology folks, and they'll be looking at the Bloom's taxonomy framework and all these things that look complicated to me. Then you can try to evaluate, at a different level, whether students have learned anything, or whether the things they have learned in the virtual environment can be transferred into the real world. It's 1:17, and I started at 12:30, so I think I should stop here and try to answer your questions, if there are any.
Okay, I'm going to scroll through the chat here and see some of your questions.

We're getting ready to close here, so if anybody has any last questions, please ask them now.

Thanks, Professor Chatterbox. Well, none of this work was grant-funded, unfortunately. I have moved on from a university position, I'm now 100% growing my personal company, and a lot of this work is funded by private clients.

Ramesh, do you have any last comments?

Well, my last comment is that last year I was a bit worried about shifting away from Second Life, but now that I've shifted to OpenSim, I feel fairly comfortable with the OpenSim environment, and I think I'm here to stay. I'll probably have a major product release soon, so I hope to be back with some more new stuff in the near future. Thank you.

Thank you, Ramesh, for a terrific presentation. As a reminder to our audience, you can see what's coming up on the conference schedule at conference.opensimulator.org. In this room, the next session will be "The Fantastic Voyage of Converting to OpenSim for Biology and Archaeology Education" at 12:30. Thank you again to our speaker and the audience.

Thanks, everyone. I really appreciate your patience. Bye.