Hello everyone and welcome to the 9am session in the developer and open source track. As a reminder to our in-world and web audience, you can view the full conference schedule at conference.opensimulator.org and tweet your questions or comments to @opensimcc with the hashtag #OSCC14. This hour we are happy to introduce a terrific session called Dispatcher: a secure external script interface for OpenSim. Our speaker today is Mic Bowman. Mic is a principal engineer in Intel Labs and leads the virtual world infrastructure research project. His team develops technologies that enable order-of-magnitude scalability improvements in virtual environments, opening the door to new levels of immersiveness and interaction among players. Welcome all, let's begin the session.

As we were talking beforehand, last year I got a chance to come up here and talk a little bit about the back-end services and the work that we had done on Simian as a robust alternative. This year I get to talk a little bit more about the interface to the simulator and some of the things that we've done to enable broader interactions between OpenSim and external technologies, whether they're databases, simulation engines, or just management scripts. The talk is basically going to be about Dispatcher. It's a project that's available on GitHub, so you can grab the code there any time you want, and feel free to contribute to it. It's a start on an API for accessing the scene, and a surprisingly complete one, at least as far as being usable for some external applications. The Dispatcher really came out of some work that we were trying to do probably two or three years ago, where we were trying to take an Android phone and use it as a 3D mouse for controlling objects in the virtual world in OpenSim.
And so the basic idea was we ran an application on the phone. An object in world, if you touched it in a particular way, would generate a QR code; the phone would scan the QR code and create a binding between the object and the phone, and that allowed you to send all of the sensor information on the phone into the object. Likewise, the object could actually project a web page on the phone so that you could get extended actions to take on the object. And so we used it for driving cars and a few other things like that. But what we found was that it really didn't work very well. And the problem is that we just couldn't figure out the right kind of interface to the simulator, one that could bind the communication between the simulator and the phone with all the right properties: that it could handle the data rates, that it could handle the computation, that it would be transportable across different regions. And that's kind of where we got started on the work on the Dispatcher. We wanted to generalize that interaction at some level. So extensibility in OpenSim can really come in several different ways. I mean, one of the nice things about the platform is that in some sense it has too many hooks for places where you can add to it. Obviously, most of you who've done any kind of building have used per-object scripts. And if you've done any kind of serious coding with LSL, you are intimately aware of its limitations. Again, it's really good for managing an object or a link set. But if you're managing collections of objects, it really kind of sucks. It's really good for dealing with simple data structures and simple applications. But if you have complex things that you're doing, it breaks pretty fast. It's hard to do, for example, simple associative arrays and other things like that. And if you have any kind of structured object that you're trying to manage, it just blows up.
Some of that has improved: the JSON store modules that we have in OpenSim right now do allow us some level of sophistication in data structures. But even then, the limitations of LSL are apparent in the APIs we have to use. So LSL is really good for doing interface kinds of things, for generating and handling interface events. But if you're doing something like the gravity simulation we did, with something on the order of 1,000 objects all interacting with one another, it just does not work for that. So the other common approach for doing these things is to write region modules. And region modules, because it's C#, because it has access to all of the internals in OpenSim (and I say "all" in quotations), work really well, if you can figure out how to get the imports to work so that you don't have cyclical dependencies in your classes. It's C#, it's Turing complete; you can do whatever you want to do in it. It does have some limitations, though. Specifically, region modules have to be pre-compiled into the simulator. Yes, there has been some work on dynamic loading now, and that kind of works, but it still means you have to have console access. So if you're programming behaviors through region modules, the consistency of your experience is going to be kind of limited. Still, it works really well, and we've got several out there. I think the way that we ended up doing the gravity simulation was through a region module that specifically exports a simulation engine for n-body problems. But again, it works in one location. Underlying the struggle that we had in trying to figure out how to work with an external application that wanted to come in and actually manipulate the scene is that there really is no API. I mean, you could go back and look at the implementation of the LSL and OSSL functions as one version of the API, but there are so many LSL-isms in it.
It's really limited in its data structures. Instead of associative arrays, you're using lists that encode the bindings. The LSL API has functions for tangent and arc tangent, which have nothing to do with 3D scenes in and of themselves, and most languages already have those. So starting with LSL and the script interface and the set of functions that are exposed to scripts, it's really not a scene-independent or language-independent method of access. Region modules go clear to the other end: because they expose everything, you get SceneObjectGroup in all of its glory. And many of the things that are in the interfaces to SceneObjectGroup are really intended for internal use. And I get to pick on Justin here, because there was one of the changes that was made in the naming of some functions, which was a good modification to the name (it was the right thing to do), but all of the region modules that were invoking that particular function broke. And so we now have a version skew in the region modules: some of them are designed for the older implementation, and some of them are designed for the newer implementation of that particular function. The behavior didn't change; it was just the name. And so we end up in these situations where it's really hard with region modules to have any kind of predictability or any kind of migration. It works once and it works for one version, but maintaining it, because there's nothing to keep consistent across versions, was just very difficult for us. So that's where we ended up when we went back and looked at the dispatcher. What we wanted to do was to architect something which was really a set of interfaces for the core functions of OpenSim, independent of any particular language being used to access them. And as you'll see when we talk later on, we've got libraries that allow us to do Perl and Python invocations, and of course the obvious C# stuff.
So the API that we came up with for the dispatcher provides access to things related to the scene: assets, objects, avatars, events. For example, we're able to pull down an avatar's appearance, serialize it, store it locally, and then reapply it. We can make changes in location when an avatar has moved, or for any of the objects, so there's a set of operations for dynamics. We can take an object link set that's in the scene, generate the asset out of it, and store that locally; then, when we want to recreate it, we can upload the asset and manage that. There's also, like I said, a set of event handlers, and I'll talk about that in a little bit, but things like touch events allow external bindings, so that the simulator really can be used in a sense as a front end for behaviors that are defined in other places. The dispatcher is also intended to be language and transport independent. By language independent, I mean that the data structures themselves are not bound up in any particular programming language. So it's not LSL; it really is independent of the programming language that you have. And like I said, we've got Python, Perl, and C# client implementations that you can use for scripting, and it's really not hard to extend further. The idea behind the transport independence is that the API itself should not be dependent on the lower-level encoding of the capabilities. We didn't want to bind ourselves too early to, for example, just the things that HTTP can do with synchronous calls, or to particular message encodings. So it really is a pretty hard layered model: a transport layer; a presentation layer, managing the encoding of the messages themselves; and a message layer, which is really where all of the interesting API definitions exist.
And then a message handler layer; that's how it ended up being divided up. We wanted high performance. We're driving a bunch of high-end external simulations, and we're trying to use OpenSim as a visualization for them, so what we really wanted was physics-engine kind of performance: the ability to handle thousands of updates per second. And the final thing is that it has to have some security model. Because of the connections and applications that we have, ours is a little different from the security model that would otherwise exist in OpenSim: it's not per object, it's per class of operations. What we're trying to make available through the security model is to associate a security level with a set of functions and then require authenticated access to those functions. So the next two things I'm going to show are just a couple of demos, and they're just videos. I've got a region up and running with the traffic simulation stuff; I don't think I have the hypergrid URL in here for you, but if you ask me afterwards, I'll get that for you. The first one is just a simple Perl script that generates Penrose tiles for a region. If you're familiar with Penrose tiles, there's basically a small set of shapes that allow you to cover any two-dimensional space. You really can't do one of these implementations in LSL, because there's too much math to do a good job on the performance of it. So we pulled all of the tile generation out into an external Perl script. The Perl script is then rezzing a bunch of objects. Each of the objects, again, takes advantage of JSON rezzing, so it's given a chunk of JSON as its start parameter, which basically tells it its orientation and size. Just a moment, yes. For those who aren't able to get the video in world, I'm seeing it on my screen.
Yeah, me too, and I'm sure it's not being very reliable, but there's the YouTube link. Yeah, so I've got that one. Let me post the second one up there as well, just a moment. Okay, so try those two links if you can't see what's going on in the first one. Basically, the first link that I posted is the Penrose tiler. It generates something on the order of 2,000 objects in the video that I just posted, in about a minute. So it's pretty fast in its connections to things. The nice part about it is that the script itself can take different center points, different sizes and shapes for the amount of space that it wants to cover, and all of the computation for those kinds of things is written in Perl. So it's really nice to be able to do that. We also have, if you go out to the GitHub repository, an administrative function that allows you, very conveniently, from the command line, to delete all objects that match a certain pattern. So, yes, Krista, the script that generates the Penrose tiles is all written in Perl, and I'll come back to its anatomy in a minute. The second video, again, if this works on the screen, that's great, but I've posted the URL. Hey, be nice with the Perl comments, Krista. The second one is some of the output of work that we're doing on driving some traffic simulation. The project that I'm working on at Intel is actually looking at transportation data, location data for collections of people: how that data can be used to identify individuals, and different ways that we can protect the identity of the individuals behind the location data. Well, it's really hard for us to actually go out and get that data from people, because the very act of getting the data is a potential privacy problem. So we're using simulation in order to generate the data.
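To make the tiler idea concrete, here is a rough Python sketch of the pattern the Perl script uses: compute all of the geometry outside the simulator, then emit one rez request per tile, passing placement data to the object as a JSON start parameter. The message type and field names here are illustrative, not the dispatcher's actual wire format, and the geometry is a toy ring rather than a real Penrose tiling.

```python
import json
import math

def tile_messages(center, radius, count, capability):
    """Toy stand-in for the Perl tiler: emit one rez request per tile,
    with the tile's placement handed to the object as a JSON start
    parameter (as described above for the JSON rezzing support)."""
    msgs = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        pos = [center[0] + radius * math.cos(angle),
               center[1] + radius * math.sin(angle),
               center[2]]
        # The in-world object parses this JSON to set its own
        # orientation and size; no LSL math required.
        start_param = json.dumps({"rotation": angle, "size": 0.5})
        msgs.append({
            "$type": "Dispatcher.Messages.Object.CreateObjectRequest",  # illustrative name
            "_Capability": capability,
            "Position": pos,
            "StartParameter": start_param,
        })
    return msgs

msgs = tile_messages((128.0, 128.0, 25.0), 10.0, 36, "0" * 32)
print(len(msgs))  # 36 rez requests, geometry computed entirely outside the simulator
```

A real tiler would replace the ring geometry with the Penrose subdivision math, but the shape of the interaction with the region is the same: the client owns all the computation, the simulator just rezzes.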
Well, it turns out that traffic simulators are notoriously finicky when it comes to layout. So we needed a good, fast way to watch the traffic easily and see where the hotspots were, where the traffic jams were actually coming up. And the visualization for Sumo, which is the traffic simulator that we're using (one of the more popular traffic simulators), really kind of sucks. So we connected it to OpenSim in order to run the visualization. And you can see a little of this in the scene: I think the video that we have up there has something on the order of about 700 active cars at the time we were recording. Every one of the cars has Sumo updating its position something on the order of five to seven times per second, and we're driving those position updates into OpenSim. The dynamics messages that the dispatcher interface provides allow us to batch up dynamics calls, and we can have multiple threads executing simultaneously. So we're actually able to pump something on the order of several thousand updates per second through the dynamics interface. The problems we have are as much because of the viewer as anything: we have to supply velocity, angular velocity, acceleration, angular acceleration, and things like that in order for the interpolation in the viewer to work well. And it actually works remarkably well; it looks really smooth, even with several thousand cars in the region. So the architecture for this is really pretty straightforward. We've got OpenSim with no Sumo-specific code in it; all it's running is the dispatcher with the interface that it exports. All of the intelligence about driving object updates (where the cars need to be moved, for example, and how to rez them) is in a bunch of Python code, which is connected to the Sumo simulator. So Sumo is driving the position of the cars, and it sends a set of updates to our Python code.
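That batched, multi-threaded update path can be sketched in Python. This is a minimal sketch, not the project's actual code: the `BulkDynamicsRequest` name and its fields are assumptions, and the "send" here is just a callback so the batching logic stands on its own.

```python
import queue
import threading

# Hypothetical batched dynamics message: many per-object updates in one
# call, each carrying velocity so the viewer can interpolate smoothly
# between updates.
update_queue = queue.Queue()
BATCH_SIZE = 64

def make_batch(updates, capability):
    return {
        "$type": "Dispatcher.Messages.Object.BulkDynamicsRequest",  # illustrative name
        "_Capability": capability,
        "Updates": [
            {"ObjectID": u["id"], "Position": u["pos"], "Velocity": u["vel"]}
            for u in updates
        ],
    }

def sender(capability, send):
    """Worker: drain the queue into fixed-size batches. Several of these
    threads can run concurrently against the simulator, which is how the
    multiple concurrent streams mentioned above are produced."""
    batch = []
    while True:
        u = update_queue.get()
        if u is None:  # sentinel: flush and stop
            break
        batch.append(u)
        if len(batch) >= BATCH_SIZE:
            send(make_batch(batch, capability))
            batch = []
    if batch:
        send(make_batch(batch, capability))

sent = []
t = threading.Thread(target=sender, args=("cap", sent.append))
t.start()
for i in range(130):  # e.g. one Sumo tick's worth of car positions
    update_queue.put({"id": i, "pos": [float(i), 0.0, 25.0], "vel": [5.0, 0.0, 0.0]})
update_queue.put(None)
t.join()
print(len(sent))  # 3: two full batches of 64 plus a final partial batch
```

The point of the batching is that one message (and one decode on the simulator side) carries dozens of object updates, which is where the thousands-of-updates-per-second figure comes from.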
The Python code turns around and makes the updates to the objects that are in OpenSim. If you watch it in real time, it really is approaching physics-engine kind of performance. And getting the interpolation right was, I should say, the most challenging part: the hard part of the code was not getting the dispatcher to perform, but figuring out exactly how the interpolation works in the viewer so that we could make the movement of the cars as smooth as possible. That's a challenge we can talk about in a different setting. All right, so let's talk a little bit about the dispatcher architecture itself. At the lowest level, we support both HTTP and UDP transports. The dispatcher is message oriented: there's a set of messages and message types, and then there's a set of protocols for connecting request messages with response messages. There's also a callback mechanism, which allows you to send a message that requests the creation of a callback; certain operations will then use the callback interface established by that request, so the results of any asynchronous operation can be passed back through the callback interface. Underneath that messaging interface, at the bottom level, we've got synchronous and asynchronous transports, HTTP and UDP. The messages and message types are all currently encoded in either JSON or BSON, but there's nothing particularly limiting about that. The messages themselves at the messaging layer are independent of their encoding, so you could use protocol buffers if you wanted, or some other wonderful, highly efficient binary encoding. What we found is that with JSON in particular, we hit some limits on the decoding of the JSON, but as soon as we went to BSON for the encoding, things got much, much faster on the decode. It sped up the decode performance by about 200%, which provides a lot of really nice performance advantages.
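The reason BSON decodes faster is that string lengths are stored explicitly, so the parser never has to scan for delimiters. This toy length-prefixed codec illustrates the idea; it is deliberately not real BSON, just the same length-prefixing principle applied to flat string fields.

```python
import json
import struct

def encode_lp(fields):
    """Encode a flat dict of strings as length-prefixed binary records
    (the same trick BSON uses to make decoding cheap)."""
    out = b""
    for key, value in fields.items():
        k = key.encode("utf-8")
        v = value.encode("utf-8")
        out += struct.pack("<I", len(k)) + k + struct.pack("<I", len(v)) + v
    return out

def decode_lp(data):
    """Decode without scanning: each length tells us exactly how far to jump."""
    fields, i = {}, 0
    while i < len(data):
        (klen,) = struct.unpack_from("<I", data, i); i += 4
        key = data[i:i + klen].decode("utf-8"); i += klen
        (vlen,) = struct.unpack_from("<I", data, i); i += 4
        fields[key] = data[i:i + vlen].decode("utf-8"); i += vlen
    return fields

msg = {"$type": "FindObjectsRequest", "Pattern": "car-*"}
assert decode_lp(encode_lp(msg)) == msg    # binary round trip
assert json.loads(json.dumps(msg)) == msg  # JSON round trip, easier to eyeball
```

Because the message layer is independent of the encoding, swapping one codec for the other (or for protocol buffers) only touches this presentation layer.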
You can use JSON in cases where you really don't care, because it's really easy to generate the JSON. You can use BSON in cases where you want performance, though it's a little harder to debug. At the messaging layer, there are message domains, and each of the domains has a somewhat different role. So, Tony, just one comment: parsing strings in JSON is slow, though there are really good JSON parsers. BSON is a binary encoding with string lengths explicitly included in it, so BSON parsing is much faster, because the lengths are explicit. Back to the messaging layer. There are three domains that are specific to the operation of the dispatcher itself. There's the information domain, which is the interface query: basically, it says what messages this endpoint supports, so it allows interrogation. The authentication domain is really about creating, destroying, and renewing capabilities, and I'll talk about the role they play in security in a minute. The third one is the endpoint management that I already talked about a little bit, and I'll come back to it in a second. Then there are domains of messages related to asset management; to avatar positioning and appearance; to communication, which allows us to send messages into the region and receive messages from the region; to objects, for doing everything from interrogating object inventory to creating objects to moving, resizing, and repositioning them; and a set of dynamics related to that. There are messages for managing the region, for doing administrative functions on the region and on terrain objects, and then a whole class of event messages as well. Like I said, we've got client implementations in C#, Perl, and Python right now. This just gives you an idea of the kind of structure of the messages that we have.
So there's a base request message, which is how we attach all of the encoding and decoding of the messages themselves. That base message has some properties that are available across all messages, and the one that I would point out here: every message has a capability that goes with it, and that's how we actually allow the security to be enforced. Rather than having to go back and re-authenticate every time, you generate the capability once, the capability is stored, and the capability associated with a message gives you access and permission to perform messages in a particular domain. Then the find-objects request is on top of that, and it's just a set of parameters that can be passed in. In this case, there's a set of queries that can be performed on the region, both in terms of the space over which you're looking for things, the pattern for the name, or an owner ID. And for each of the request messages, there's a response message. In some cases, the request itself is intrinsically asynchronous. For example, I want to register an event handler for a touch event: there's still a response that can be generated for that, and the response that comes back is the capability from the server, which allows the client to know for sure that the messages coming back from the server are coming back as a result of that event registration. As for the find-objects response, every one of the response messages has a success flag and a message saying whether or not it succeeded, and then there's response-specific content. So in this case, the find-objects request sends a pattern over, and what comes back is a list of object IDs that match that pattern. So it's not particularly difficult to figure out how to use this. You create a capability, and once you've got the capability set up, you just start sending messages and waiting for the responses. If you don't care about the responses, you can set up an asynchronous request, in which case the responses are simply dropped.
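A rough Python sketch of that request/response pair might look like the following. The field names follow the shapes just described (a capability on every request; a success flag, a message, and response-specific content on every response), but they are illustrative, not the dispatcher's actual wire format, and the UUIDs are dummies.

```python
import json

# Hypothetical find-objects request: base-message fields plus the
# query parameters described above.
find_request = {
    "$type": "Dispatcher.Messages.Object.FindObjectsRequest",
    "_Capability": "44444444-5555-6666-7777-888888888888",  # carried by every message
    "Pattern": "tile-*",                                    # name pattern to match
    "OwnerID": "99999999-0000-1111-2222-333333333333",
    "CoordinateA": [0.0, 0.0, 0.0],                         # corners of the spatial search box
    "CoordinateB": [256.0, 256.0, 100.0],
}

# Matching response: success flag and diagnostic message are common to all
# responses; the object ID list is specific to this one.
find_response = {
    "$type": "Dispatcher.Messages.Object.FindObjectsResponse",
    "_Success": True,
    "_Message": "",
    "Objects": ["aaaa-1", "aaaa-2"],  # IDs of objects matching the query
}

# Both directions are just encoded structures, so any language with a
# JSON (or BSON) library can act as a client.
wire = json.dumps(find_request)
assert json.loads(wire)["Pattern"] == "tile-*"
print(len(find_response["Objects"]))  # 2
```

Dropping the `"_Success"`/`"_Message"` pair into the base response type is what lets every client handle errors uniformly across all the domains.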
Dropping the responses is actually pretty useful if you're doing, for example, dynamics, where you really don't care about the success, because you're going to overwrite it again in a couple of minutes anyway. When we've got the callbacks, it's basically: request the capability, create the endpoints. An endpoint is a data structure stored on the server which allows the server to send messages back to the client on a particular pattern. Right now, honestly, the only thing we have implemented is UDP callbacks for the remote endpoints, and for the kind of applications we have, that actually works really well. It's not that hard to extend it so that we get both HTTP- and UDP-based callbacks. All right, let me go back to the security part; we're almost done. The generation of the capability is, again, pretty straightforward. You provide it with information about either your user ID or your email, whatever it is that the particular authentication scheme requires, along with the hashed password that goes with it. Really the most interesting parts of the capability are the lifetime and the domain. Capabilities have limited life: the server can set a maximum lifespan on a capability. If it wants, for example, to make sure that a capability is not misused, it can say you're going to be required to re-authenticate your capability every 60 seconds. If you're doing dynamics, that's not bad; it's not an unreasonable thing to do. If you're doing administrative operations, it tends to be okay as well. For development, you want them longer, because re-authenticating is not fun. There's a capability response message which comes back to the client, which includes endpoint information and the capability itself, and then you simply stick that capability inside all the messages that you're sending. The domain gives us a way of limiting the scope over which a particular client application can get access.
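That capability lifecycle can be sketched as follows. This is a sketch under assumptions: the message name and fields are hypothetical, and the `$1$`-prefixed MD5 password hash is an assumption about the authentication scheme, not something stated in the talk.

```python
import hashlib
import time

def auth_request(userid, password, domains, lifespan):
    """Hypothetical create-capability request: authenticate once, then
    reuse the returned capability on every message in those domains."""
    # Assumed hash format; swap in whatever the authentication scheme
    # actually requires.
    hashed = "$1$" + hashlib.md5(password.encode("utf-8")).hexdigest()
    return {
        "$type": "Dispatcher.Messages.CreateCapabilityRequest",  # illustrative name
        "UserID": userid,
        "HashedPasswd": hashed,
        "DomainList": domains,   # scope: which message domains this capability unlocks
        "LifeSpan": lifespan,    # requested lifetime; server may impose its own maximum
    }

class Capability:
    """Track expiry so a long-running client renews before the
    server-imposed lifespan (e.g. 60 s for dynamics) runs out."""
    def __init__(self, token, lifespan):
        self.token = token
        self.expires = time.time() + lifespan
    def needs_renewal(self, margin=5.0):
        return time.time() > self.expires - margin

req = auth_request("user@example.com", "secret", ["Dispatcher", "Object"], 60)
cap = Capability("33333333-0000-0000-0000-000000000000", req["LifeSpan"])
print(cap.needs_renewal())  # False: plenty of lifetime left
```

The renewal margin is the practical detail: a dynamics client renews a little early so a stream of updates never stalls on an expired capability.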
So, for example, administrative messages generally require an account with account status over 200, and that's basically what we're doing. Okay, I'm running a little late, so I'm going to close up with the next two slides and we'll be done. The kinds of applications that we've actually built with this thing: we already saw the external content with the Penrose tiles. We've done external sensors, and this actually works really well; we have system-monitoring tools that are generating events that are being sent into the world, and we can actually take actions based on those. We've done it with the phone that I mentioned at the beginning, and with some server stats. Most of our work right now is on data visualizations, where we're using OpenSim simply as a 3D interface for some rich simulations that we're running on the back end. There's also a bunch of region administration things, so you can clean up regions, move objects around in regions, and things like that. There are two that are kind of interesting here that I'll come back to. And by the way, Tony, let's bring up the question about how fast things are later in the Q&A. One of the nice things is that I'm running really thin on inventory these days, because all my inventory is actually stored in Google Drive. It's just a file-system-based inventory interface that allows me to build objects in a region and then grab them and store them in the file system. So I've got grep, ls, mv, and everything else that I need locally to manage my inventory, which is really bloody convenient when you're talking about several thousand items. And the fact that it's on Google Drive means that it's available anywhere, so it's really convenient to have that. The other thing that we've been poking around with, which is not functional at this point, is the ability to do scene replication.
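Returning to the file-system inventory for a moment: a minimal sketch of the idea might look like this. The paths, helpers, and serialized format are all hypothetical (and the root here is a temp directory; in practice it would be a Google Drive synced folder), but it shows why ordinary file tools suddenly apply to inventory.

```python
import json
import pathlib
import tempfile

# Hypothetical inventory root; in practice a Google Drive synced folder,
# so the same files are available from anywhere.
INVENTORY_ROOT = pathlib.Path(tempfile.gettempdir()) / "opensim-inventory"

def save_object(name, asset_blob, folder="builds"):
    """Store a serialized link set (pulled down via the dispatcher's
    asset messages) where grep, ls, and mv all just work on it."""
    path = INVENTORY_ROOT / folder / f"{name}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(asset_blob, indent=2))
    return path

def load_object(name, folder="builds"):
    """Read a stored asset back, ready to upload and rez through the dispatcher."""
    return json.loads((INVENTORY_ROOT / folder / f"{name}.json").read_text())

p = save_object("penrose-kite", {"Name": "penrose-kite", "Asset": "…base64…"})
print(load_object("penrose-kite")["Name"])  # penrose-kite
```

Because each object is just a file, finding "several thousand things" becomes a shell one-liner instead of a viewer inventory search.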
Most of what we had in DSG was a custom replication protocol, and it was really exceptionally hard to do. What a lot of people were asking for was: I don't necessarily need to interact with everyone, but what I want is a copy of the scene, so that I can see all of the actions that are being taken. And the dispatcher interface is, I'll say, really close at this point to being able to track object updates in one scene and propagate all of those updates (including avatar movements, avatar appearances, and object movement) from one place to the other. What we don't have is support for building and other things like that, which was a nice part of the DSG work; it's more that the replicated scene is kind of like consuming a movie rather than actually being in it. You can see everything that's moving around, but it's not bi-directional at this point. The justification for scene replication is really scale and performance. And there are also some security scenarios that we wanted (the military example, exactly) where you've got one place where things are really happening, and everybody else is simply able to look at it, or look at a subset of what's happening in a particular region. That came out of some conversations with Doug before. All right, so I think that's it. The Dispatcher in OpenSim is currently implemented as a region module, or actually a collection of region modules. It's really easy to extend, because you define a new message as a structured object and then you just have to implement the handlers for that message. It's secured, though not necessarily in a way that you would want turned loose on a socially focused virtual world, because it's not per person, and it's not per object, but it is per action. And back to the questions earlier about performance: the only thing in OpenSim that can update the scene faster right now is the physics engine.
And I think we're close to that. Even with the BSON encoding and the performance that we get from it, we run out of Sumo cycles before we run out of OpenSim cycles when we're driving it today with the cars. We do multiple concurrent streams with multiple threads in the OpenSim simulator. There's no intelligence whatsoever in the simulator, so there are no local scripts executing. That's where we get the performance: OpenSim does nothing but serve as a multiplexer for passing updates out to the endpoints. All of the computation is moved off onto the back-end server, and the only thing that's really running is the 3D scene manipulation. That's where the performance comes from: you're not sharing cycles the way the physics engine does with the simulator itself. So that's the real win that we get out of this on the performance side. And the other nice thing is that it's really nice to be able to use Emacs and Perl and Python to write scripts, rather than trying to figure out how to get LSL to do it, or trying to dig through the appropriate SceneObjectGroup and SceneObjectPart APIs to figure out where the right calls are. As for the wrappers (yeah, Krista): because it's just encoded JSON and BSON, adding new client languages is trivial. All right, that's all I had for now. Thanks very much. Questions? As far as the current version of OpenSim: this goes back to my comment about Justin changing the names on the SceneObjectGroup APIs. It really works with just the dev master at this point. It's been updated for, I guess, the code changes from about two months ago, Justin, something like that. Okay, I'll hang around here for a little bit, but I'll hand it back over to James and say thank you very much. It's another great conference, and I hope we continue to do this.
Thank you, Mic, for a terrific presentation. As a reminder to our audience, you can see what's coming up on the conference schedule at conference.opensimulator.org. In this room, the next session will be Scaling OpenSimulator Inventory Using NoSQL with Tranquility Dexter at 10 a.m. Thank you again to our speaker and the audience. We'll be back shortly with the next session.