I'm Bob Walker, the project manager for the MagLev project at GemStone. With me on stage today is Alan Otis, principal engineer, and we're going to go in reverse order from the folks who were just before us: I'm going to talk a little bit about status, then turn it over to Alan for a much more technical dialogue. I've been thinking recently about the evolution of object-oriented languages over the last couple of decades, and it started dawning on me that there's really been no analogous evolution in storage mechanisms. Twenty or thirty years ago you'd be writing your code in procedural languages, reading data from tape, or a VSAM file, or a relational database, into a flat, explicitly defined record layout where you had to define every single type and every single length, count all the bytes, and it all had to fit in that place. Well, quite a lot of time has passed, and we are still doing that with our objects. The tabular schema of the relational database meshed quite nicely with the idea of explicitly defined areas of memory, a pretty easy mapping, but as object-oriented software has evolved, it just doesn't quite fit anymore. You need to go get the state from the relational database and assemble your objects out of it, and if something's changed and needs to be written back, you've got to deconstruct it and push it back to the database. And what has that led to?
Well, that's led to mapping frameworks, and mapping frameworks have been around probably as long as object-oriented programming has. There are plenty of them, including some very popular ones today that do the job fairly effectively, from what I understand, although mapping frameworks don't come without issues; there's a whole spectrum of them. There's code complexity somewhere, there's additional CPU overhead as a result of the mapping that has to be done, there are the constraints of having to force-fit your object graph into a fundamentally tabular structure, and there are issues around translating relational associations into object aggregations. There's a whole litany of these things. No matter how deeply the actual mapping mechanisms are hidden or pushed down in a mapping framework, there's still a fundamental mismatch between the object model and the relational schema. It's still there, and it impacts the way you think about designing your classes and your applications; there's no way around it. So why do we continue down this road? Well, for one, there's a lot of relational data out there, and the need for mapping to and from it isn't going away anytime soon. Somebody once told me, "data sticks where it's thrown," and it's the truth. But what if you could use a pure persistence-by-reachability model instead, and do away with any need at all to flatten out your Ruby instances and stick them away in a database? Just dispense with the mapping layer completely. You can, if you use MagLev. MagLev gives Ruby the ability to transparently cache and store potentially huge, complex graphs of objects without the constraints of O/R mapping. In MagLev your classes define your model, pure and simple; the model is the schema of the application. There's no need to create mappings of any kind, no need to define cardinalities or associations; you don't have to think about that, ever, unless of course you want to. What this gives you is less complexity, less code you have to maintain, and the freedom to think in a purely object-modeling frame of mind without having to pay any attention whatsoever to whether this is really going to map well into your relational database. For relatively small object models, no problem; but when you get into a hugely complex graph of objects, that's where the mapping into a relational database really starts to become painful, when you start having to custom-write outer joins and that sort of thing to reconstitute the object graph. With MagLev you can continue to use your object-relational mapping frameworks to access legacy data; it's just not required. Of course we understand that in a heterogeneous world of data there are going to be a lot of places you might get your data from: you might do a wget, you might look it up from an old VSAM file, who knows. Part of our vision, of course, is that anything written in Ruby will run on MagLev, and that includes your mapping frameworks. If they include C extensions it might take a little bit longer, but we're working on that, and you'll be able to access your relational databases from a MagLev application just as you would in any Ruby application. So that's my spiel on O/R mapping in MagLev. I want to move into a status of where we are today. We've made a lot of progress since we last spoke to the community at large. The persistence of Ruby objects is working very, very well; it should, since the underlying technology has had it working well for quite some time. We've been mostly focused over the last three months on getting the kernel and core libraries implemented, and implemented correctly, because we really feel that if we have that done right, the standard library will start to fall into place. Now I'm sure there will be issues along the way.
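The persistence-by-reachability model described above can be sketched in a few lines of plain Ruby. To be clear, this is not MagLev's API: the `Repository` class and its `commit` method below are hypothetical names invented only to illustrate the idea that committing a root persists everything transitively reachable from it.

```ruby
# A toy, pure-Ruby simulation of persistence by reachability. NOT MagLev's
# real API: Repository and #commit are hypothetical names for illustration.
class Repository
  def initialize
    @persisted = {}   # object_id => object, standing in for the on-disk store
  end

  # Walk everything transitively reachable from root and "persist" it.
  def commit(root)
    stack = [root]
    until stack.empty?
      obj = stack.pop
      next if @persisted.key?(obj.object_id)
      @persisted[obj.object_id] = obj
      # Follow instance variables, plus elements of common collections.
      obj.instance_variables.each { |iv| stack << obj.instance_variable_get(iv) }
      obj.each { |e| stack << e } if obj.is_a?(Array)
      obj.each { |k, v| stack << k << v } if obj.is_a?(Hash)
    end
  end

  def persisted?(obj)
    @persisted.key?(obj.object_id)
  end
end

class Account
  def initialize(owner)
    @owner = owner    # reachable through the Account, so it persists too
  end
end

repo = Repository.new
root = []                            # plays the role of a persistent root
account = Account.new("alice")
root << account                      # now reachable from the root
repo.commit(root)
repo.persisted?(account)             # => true
repo.persisted?(Account.new("bob"))  # => false: never attached to a root
```

The point of the sketch is only the shape of the programming model: you attach plain objects to a root collection and commit; there is no mapping step, and reachability alone decides what is stored.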
But nonetheless, that's the approach we're taking. We've also been doing a lot of work, Alan in particular, on making sure the virtual machine and the bytecodes are correct, and that we're parsing and compiling to bytecodes that execute according to the criteria of the Ruby specifications. Microbenchmark execution time remains stable; we certainly haven't lost any ground on the microbenchmarks, and in some cases we're a little bit faster than we were, say, four to six months ago. These gains are due to optimizations made along the way, the low-hanging fruit. We haven't sat down and done a thorough performance-optimization pass; it's just not the right time in the development cycle for that, and there are still a number of unknowns in front of us, but we will do it at some point. We are starting on the standard library, and as I said earlier, if we've got the core library implementation done right, a lot of the standard library should just run, theoretically. We'll find out. How are we going to know if it's done right? Well, RubySpec. The RubySpec tests are going to tell us whether we're behaving the way we should, whether we're erroring out, or whether we're failing some of the specs, and we've been running those essentially continuously. Back in August I started tracking our progress on this, and the one and only slide I have is this one. Remember, these are the core and language suites; we haven't started on the library suite yet. Back in August, this is where we were: passing 1809 of the expectations. It was an achievement at the time; it was great. A number of them were erroring out because we're still working on throwing the right errors in the right places; some errors bubble up out of the underlying technology and come out as something different than what the spec would expect. But the specs have been tremendously useful in guiding us to where we need to do things. In September you can see we took a pretty good jump in terms of what we're passing, and in October yet another good jump. There are something like 15,800-and-change expectations currently in the core and language suites, and it's not that we're failing to run the rest; it's that what we're running against is a more outdated version than what's current in the RubySpec project. One of the key things we'll be working on as soon as we get back next week is taking care of that, so that we've got the current version of MSpec and the current versions of the RubySpecs running. Next slide. So we're not there yet, of course, but I think we've made tremendous progress on the kernel and core implementation, and we're continuing in that direction. We're totally committed to adhering to the specs, and we hope to start feeding into that process as our core and kernel work matures. Over the next few months we'll be completing the core libraries and filling out the standard libraries. We've got a design effort underway to determine what the usage model for persistent domains should look like to a Ruby programmer, and we're soliciting opinions from a lot of different people; I'd be really interested in hearing your thoughts on what you would expect from Ruby object persistence. Basically, the way it works right now is that you can bind any kind of object into, say, a class variable, a collection of some kind, stick all your instances in there, and anything they can reach will be persisted in the repository, by reachability. You go in, dereference one of those guys, and commit; if something is no longer reachable, it's garbage. It's really pretty simple. We also need to start adding in support for deployment, migrations, and tools for use of the product, so we still have a ways to go. Then the news today that I'd like to
deliver is that we are very close to having an early alpha release, and we're going to give it to a fairly small group of people who are interested and have some insight into what we're doing. We'd like to give it out to everybody, but our bandwidth is limited; if we did that, we'd be so overwhelmed responding to everybody that it would be hard to get our regular work done. However, as we go forward we'll widen the scope of access: we'll open up the alpha release to a much broader audience, then a beta release to an even larger audience, and somewhere in that timeframe we'll have all of the Ruby sources available for checkout via a Git repository. We'll need to work out how we feed that back into our own processes, but once it's out there you'll be able to see the Ruby code that we've written, the Ruby code that we've borrowed, and the Smalltalk code, if you care to dig into that; it'll all be there. I think I've already gone over time, so let me close out by saying there are still a couple of other things. We're going to start soon on establishing a channel for communication with the community; we've been so focused on getting this thing ramped up that we haven't been able to pay a lot of attention to that. And we have a lot of work to do around things like RubyGems and Rails. That's still a ways off; fundamentally we're focusing on the core language and getting the language and the libraries implemented in such a way that you can use Ruby object persistence outside of the Rails environment. Rails will take a little bit longer. Anyway, with that, I think I've gone over my time, so I'd like to introduce Alan, our principal engineer. Thank you all very much.

Is this working okay? I hope the microphone is. So I'm going to talk a little bit today
about MagLev's execution technology. As you may know, we started with GemStone's mature Smalltalk product, which includes both a Smalltalk VM and object persistence, and we've taken the strategy of compiling Ruby to Smalltalk bytecodes, adding new bytecodes as needed. We're reusing quite a bit of the Smalltalk class library, but in a controlled way, so that unless it's explicitly coded in a kernel method that you're transitioning from Ruby to Smalltalk, you won't inherit all the Smalltalk methods automatically. So what does object persistence look like? We have a MagLev VM here, and you can have many VMs on a given repository; we consider the disk objects to be the repository, and there's a shared memory cache so that multiple VMs can share disk I/Os. What does that mean to the programmer? The very simplest model you might have for a persistence use case would be a top-level instance of Hash which is persistent, which is actually the way the Smalltalk persistence looks: there's a very top-level hash with a reserved object identifier to always keep it alive, and it's committed when the database is initially built. You can add objects to that hash, and when you say commit, the objects reachable from your added entries become persistent. If you then access some element of that hash, the disk pages get brought into the shared memory cache, if they weren't already in, and then the objects are copied out of the disk pages, selectively, into object memory. So this is a copy-on-read design: a persistent object will be in memory, where it can have a memory address and look very similar to a temporary object. Then, if you create some new objects and make some of them reachable from your persistent objects, the objects you modify over in the persistent area will be put in a dirty list for you automatically by the VM, and those dirty objects will be transitively closed to make the new objects also go to disk. That's essentially how the object persistence works. The rest of the talk is mostly going to be about the execution technology. Fundamentally, we're getting our speed from the mature Smalltalk VM, and what makes a Smalltalk VM fast is that it's executing very simple bytecode operations, simple enough that you can make them fast. The return-from-method bytecode, for example, assuming you're running with our native code generator, is four machine instructions. A message send, if you're coming through a part of a method you've executed once before, so the cache for that send has been loaded, that is, a message send with a first-level cache hit, is 20 machine instructions. That cost is after you've already pushed the receiver and arguments; it's a stack-based machine, so you push the receiver and arguments, and then the actual send bytecode is what costs you the 20 machine instructions that get you to the first instruction of the method you're invoking. We're also trying to keep the memory footprint small: we have a good compacting garbage collector, we're trying to minimize the number and size of objects, and we make the bytecodes match the language. Now, how does Ruby compare to Smalltalk for someone who's a VM implementer? Fortunately, there are quite a few similarities: there's no static typing in either language, there's dynamic message lookup, you can add and remove methods on a class dynamically, strings and arrays are variable-size objects (the GemStone dialect of Smalltalk has strings and arrays directly variable-size), they're both essentially single-inheritance as opposed to C++, and they have exception handling, blocks, and fairly similar base classes. Now, what are some of the differences? Probably the biggest difference is that Ruby has a variable number of method
arguments, whereas in Smalltalk each method declares a fixed number of arguments. Ruby has keywords like break, next, redo, and retry, and Smalltalk doesn't believe in the notion of a goto. Smalltalk does support a return from inside of a block that returns from the home context, so that's similar. Ruby has the notion of the super keyword, but it's somewhat restricted, whereas in Smalltalk super is basically the same as self except for where the method lookup starts; that was kind of a surprise to a Smalltalk programmer like myself. Ruby has modules, while Smalltalk is more of a straight inheritance hierarchy, and Ruby has what we call dynamic instance variables, whereas in Smalltalk the instance variables are fixed at the time the class is defined; we'll talk a bit about how we deal with that. Smalltalk has Behavior as a common superclass of both Class and Metaclass, and all three of these are first-class and accessible to the programmer, while Ruby only has Class, so we've had to do some work in that area. Now, what do I mean by Smalltalk's super being different? Here's an example: a class Env, where we're using an instance of this Env class to implement the global object, uppercase ENV. We decided to just make it a subclass of Hash, since it's supposed to have a hash-like protocol. So I did that, and then went off to implement it: the instance of Env is going to have as its elements a cache of all of the elements of the program's environment. If you send at: to it, I want to first look in the cached values, and if the key isn't there, do a getenv call to the operating system to see if maybe it is there, and then refresh the cache. To accomplish that (this is Smalltalk code here), I do a super at:, which says look in this instance of Env using the square-bracket accessor, essentially; if it's not there, then do the operating system call, and this is a Smalltalk primitive method here. If I got something non-nil, return it; otherwise I'm going to update the receiver of this at:, and what I want to do is send at:put: to super. But you can't really do that in Ruby, so I had to write this method in Smalltalk. What about our compilation technology? Currently, for the alpha release and prior to this point, we've been using an MRI 1.8.6 VM running as a parse server process, using the ParseTree gem. Over an HTTP connection we request the file to be compiled and get back a string containing all the s-expressions, which then comes into the MagLev VM; we'll talk about what happens next. In the MagLev VM we send an HTTP request to the MRI VM specifying a path to a Ruby file, or possibly a string for an evaluation. We get back the s-expressions; those go through an s-expression parser stage, written in Smalltalk, which produces an AST in Smalltalk object memory. We then go through some more Smalltalk code which produces an IR tree, and this IR tree is the same tree that is produced by our Smalltalk parser. (The Smalltalk parser is actually written in C, which is how we bootstrapped the whole system.) Then we have a bytecode generator, written in C, that generates instances of this GsNMethod class, which is our Smalltalk compiled-method class, and on demand we can turn those GsNMethods into machine code with a very simple native code generator. So what does the bytecode generator stage do? It's doing some very simple bytecode optimizations based on the stack-machine concept: we try to take push/pop or pop/push sequences and convert them to stores and loads to reduce memory references, we clean up redundant branches and delete unreachable code, we take method and block temporaries from wherever they're declared and move them to the innermost block possible to reduce up-level accesses, and we delete some unreferenced temporaries. So all the decisions about where a method
temporary is going to live are made here, and we'll get to these variable contexts in a minute. We have two execution modes in our VM. There's an interpreted mode, which executes bytecodes using a hand-coded assembly-language interpreter; we actually write our interpreters in M4 for partial portability, so it's not too hard to move them between CPUs, and as it turns out you only use a very small percentage of the real assembly-language instruction set for a stack machine, maybe 10 or 15 percent. The interpreter supports both breakpointing and single-stepping, for the purposes of debuggers. Then there's our native code system. An execution is either all in interpreted mode or all in native code; we don't do mixed-mode execution. The native code is generated on demand, the first time a method is invoked. What do we do with the native code? We translate each bytecode to machine code; it's a very simple code generator, no inlining, with a one-register top-of-stack cache state machine. Complicated bytecodes jump to shared code sequences we call stubs, which are emitted when the native code generator is initialized, or they may call into C primitives within the VM. The native code gets us a two-to-three-times improvement over the interpreter. That's not a tremendous amount, and like I said, we're not doing any fancy inlining, but the main message here, for Ruby people building new Ruby VMs, is that it's not strictly necessary to have native code to get pretty decent performance, if you get the bytecodes right. So what does a message send look like? As I mentioned, Smalltalk has fixed arguments for each message send, and we want to keep it that way for performance, so except for a call or yield sent to a block, all of our Ruby message sends actually have a fixed number of arguments. For example, in Smalltalk, if you're sending at:put: to an object, then during the method lookup you're looking for the symbol at:put:, and because of the way Smalltalk selectors (method names) work, there's no ambiguity about how many arguments there are. So the method-lookup infrastructure in the VM can just do an identity compare on symbols to figure out if it's got the right key in a method dictionary. That also makes the caches for the method lookup very easy to implement, because in a lot of the caches you only need one word for the key, which is the symbol. So how do we do Ruby's variable number of arguments? We have what are called bridge methods. A while back we did an analysis of the Smalltalk Seaside web framework and discovered that 98% of the methods in that framework have three or fewer arguments, and 95% have two or fewer. So we're compiling all Ruby message sends to conform to a maximum message signature of three fixed arguments, an array, and an optional block, which gets you 16 possible variations. Yes, that may cause a lot of extra methods to be generated, but first of all, most of them are very small, and in our system the unused methods can live on disk, where they won't occupy memory in production. So what does a Ruby message send look like? Here we have a class defining a store method, and that store method will have as its full signature store:*. We create an instance of that class, and then we want to call store with one argument; that call site will have the signature store:, and at runtime it will resolve to the store: bridge method, which pushes the first argument on the stack, pushes a nil to match the required second argument, and then invokes the actual store method. Here's a three-argument invocation: the bridge method there will throw away the third argument and then invoke the actual method. And if we create
an array and then invoke with that, the bridge method for store* will take the first element out of that array and pass it to the declared argument b, and so forth. The design here is that if the basic message sends are fast enough, the extra layer of the bridge method will be reasonable and not cause too much performance degradation. What about method contexts? You heard from the Rubinius team how they're doing method contexts, and our VM is a little bit different in that we don't actually have method context objects. Basically, the machine views each method as having a stack frame, and the stack memory comes from an mmapped memory region with a guard page at either end, protected for no access with mmap. By doing that we can implement stack-overflow detection with a SIGSEGV handler and not have to do any stack-overflow checking in the bytecodes. We do have things called variable contexts, which hold, for example, the temporaries in a method that might be referenced from within a block; the variable context will also hold copies of method arguments that might be referenced from within a block. The variable contexts are normal objects, living in normal object memory with normal garbage collection, and they're created and accessed by special bytecodes, so the decision about where a method temp is going to live is made by the bytecode generator. This design supports Seaside continuations today, so we fully expect it'll support Ruby continuations also. How about our object memory? It's also allocated with mmap; we're not using the C heap. On Linux and Solaris there's a flag to mmap called MAP_NORESERVE, which means you can allocate, say, 100 megabytes of address space and only the parts you actually use will be allocated out of real memory. Mac OS X doesn't yet support this flag; we wish Apple would support it, it would make things more efficient. We have a generation scavenger with a card-marking-style remembered set, and the write barrier for that kind of remembered set is about seven machine instructions. The write barrier works like this: if you're storing into object A a reference to B, and B is not an immediate like a Fixnum but a real object, you have to make sure the garbage collector knows that A might reference a newer object. It turns out this write barrier is very important for performance: once you start to get the bytecodes running fast, it can become a bottleneck if it's not fast enough. Every so often, if the generation scavenger hasn't made enough progress, we'll run a full compacting mark/sweep, and that will use mmap again to shrink the virtual address space. Within the overall memory, our current design has a maximum temporary object memory of about one gigabyte per VM. It's nowhere near as sophisticated a garbage collector as the Java collectors, but for a several-hundred-megabyte object memory it's very competitive. In the object memory we have a three-word object header; this is a 64-bit VM, it doesn't run 32-bit. In the object header we have a class pointer, one word's worth of size and format information, and a pointer to a persistent-object-table entry, which will be null in a temporary object. And we directly support variable-size objects. In Smalltalk there's this thing called become:, which lets you swap the identity of two objects, and because we have to support become: from our legacy, we need this notion of forwarders; it turns out that if we have these forwarders, we can easily implement variable-size objects. So we've had variable-size strings and arrays for a long time in
Smalltalk, and we've carried that forward. So if you have a string and it has to grow, the original string gets turned into a forwarder; the first pointer slot in the body then points to the larger implementation, and that gets collapsed back to a single larger object by the garbage collector later on. Special objects: Smalltalk has this notion of some classes defining special objects. For example, Smalltalk's notion of a Fixnum is called SmallInteger, and this is one of the things that gets performance in a Smalltalk VM. A Fixnum is just one word within a containing object or within a stack frame, and there's no load on the garbage collector when you create one. The way that's done is by tagging the object identifiers: the bottom three bits in a 64-bit object identifier are a tag specifying what kind of an object pointer it is. You can have a memory pointer to a non-special object, like a pointer to a string. The tag value one says what you have is an object identifier of a disk object; that disk object might or might not be in memory, but if you go to fetch that instance variable, the object manager will probe the in-memory table of persistent objects, and if the object's in memory, it'll give the address back; otherwise it'll fault the object in for you on demand. That's how some of the object faulting works. Or it could be a Fixnum, or something we call a SmallDouble, or true, false, or nil, or an instance of the Smalltalk class called Character; these are distinguished further by higher bits within the object identifier. We talked about variable-size objects; this is, I believe, a fair benefit, because it reduces the memory footprint of strings. If you don't have variable-size objects at the lowest level, then a string or an array is each going to have to have two object headers, probably, which in our case would be an extra 24 bytes for that second header. It also makes it easy for us to write C primitives for heavily used string and array methods: if you're coding inside the subset of C that we use for writing primitives, you can easily grow an object, or just store into an object and have it grow automatically for you. Now, how do we do the dynamic instance variables? Here's a class. First of all, when we're compiling a class, everything between the first occurrence of the class and its closing end, all of those instance variable references, will be assigned fixed offsets, which means each of those instance variables costs one word in the body of an instance. So we've compiled the class so far, we've created our first instance of it, and we've invoked this setter method. Then we extend the class with some more methods, one of which references an instance variable we hadn't seen before. That instance variable store will be a dynamic store: basically, we grow that variable-size object, and for the dynamic ivar we put in a key-value pair after the last fixed ivar. Assuming you don't have very many dynamic ivars in an instance, we'll usually just do a sequential search for the appropriate key, the symbol which is the name of that ivar, on a lookup; on a store, if we don't find the key, we create a new pair. If we had an object with a very large number of dynamic ivars, we could sort that array of pairs so we could at least do a binary search to find an ivar. Now, what is a SmallDouble? It's a subclass of Float, and this is the way it's going to look in Ruby; if you send name to an instance of SmallDouble, we may just return Float to you, to try to hide the existence of that class. But basically, instances are special objects, and
they're a c uh 8 byte float minus 3 bits of exponent so if you have a numeric operation that would a return of float but the exponent was is within uh 10 to the plus or minus 38 roughly we're going to return a small double and save the overhead of creating a full object so all the numeric floating point primitives will deal can handle interchangeably small doubles and floats and we're just building off the already existing uh small talk numeric classes here um so reusing small talk code um we've renamed some classes in ruby for example and really other than minor changes to the methods we don't have to do very much for those other classes we've subclassed off existing small talk classes which lets us reuse existing code so for example hash we've made a subclass of an already existing small talk uh key value dictionary class and we only had to add about you know 250 lines of small talk and 250 lines of ruby and i think we have most of hash implemented with that um now we added something to our small talk vm called message environments so if you think of a class as having a method dictionary we change that method dictionary to be an array of method dictionaries and the the environment environment number is an index into that array so if you're doing a message send from small talk which is environment zero you're not going to see any ruby methods by mistake similarly if we're running in ruby we're only going to see what was stored into those environment one method dictionaries so we're trying to present a pure ruby view to the ruby programmer and not have small talk stuff leak into the ruby environment unexpectedly so if you ask a ruby class what methods it implements you'll only see the ruby methods now if you think about hash one of the things it has to do is um well i'm jumping ahead so in order to call from ruby into the small talk implementation we have this primitive keyword uh that we use during our bootstrap process so in hash we say primitive and this means 
Now, in order to call from Ruby into the Smalltalk implementation, we have this primitive keyword that we use during our bootstrap process. So in Hash we say primitive, and this means: don't generate bridge methods for it (there's another variant that does generate bridge methods). We say that the square-bracket store accessor, []=, maps to the Smalltalk method at:put:, and since they both take two arguments, we don't need any bridge method. This creates an entry in the environment-one method dictionary whose key is []= (actually []= with two colons appended) and whose value is the Smalltalk compiled method; this particular entry is like a gateway from environment one to environment zero.

Now, once we're in Smalltalk, we're not going to inherit any Ruby methods, but sometimes we need to. For example, in RubyHash, which is the Smalltalk name for the class Hash, the hash-function method wants to take a key, compute the hash value for that key, and then do a modulo-table-size to figure out where to probe the hash table for that instance. So I need to send the hash method to the object but have it invoke the Ruby implementation of hash, whatever that is. We added a Smalltalk syntax extension, @ruby:, which lets us say that this particular send of hash goes to environment one; that's how, from within Smalltalk, we call back to Ruby. Right now we're not expecting customers or application developers to be using this much; it's intended for our internal use, though a sophisticated customer could make use of it if they wanted to.

And that's the end of the talk. Questions?

Q: Is there any sort of application story for that? I would assume there must be, right? You mean the central Stone server? Right, there must be an application for distribution. Is the distribution for the in-memory cache a cache-like model, where the object has exactly one home?

A: Okay, so let's go back to this slide.
Each shared memory cache is a shared memory segment on a machine, spanning possibly multiple cores, or even multiple processors on an SMP machine, and any VM on that machine can take advantage of that cache. We can also have another machine, typically on a LAN, with its own shared memory cache, and that cache piggybacks on top of the cache on the primary server machine; so we can have multiple layers of caches, to support systems bigger than a single machine. An object read fault goes from the disk into the closest cache, then potentially across a LAN connection to another cache; and each VM has its own private copy of object memory.

Q: Is there a similar distribution story for the disk?

A: Well, we do have some distribution products in our Smalltalk product, but I wouldn't say we have a good transparent replication design yet for the Ruby product. That's probably still an exercise for future development; I'd say it's something on the roadmap.

Bob: More questions? I guess he answered all of them. Wilson, you've got to have a question, I know there's one in there. I was hoping he'd ask you what your future parser plans are.

A: Ask away. Obviously we don't want to run an MRI parser server forever, although the performance of that is not as bad as you might think. One possibility is to try running Ryan Davis's ruby_parser, written in Ruby, once we can get our Ruby compatibility up high enough; we hope that might be one possible approach. Given that his parser was kind of just released, we haven't had a lot of time to really look at it yet, but one option is to run that parser and then adjust the part of our code that consumes the s-expressions. Another possibility is to write a parser in Smalltalk, which would be a lot more work.
You would probably definitely be able to see that. ("Oh, steal it!") Yeah, well, I don't know; you'd have to ask Monty about that. If the parser were written in Smalltalk, you'd have to be able to compile that Smalltalk to whatever bytecodes the VM ran in order to use it.

Q: You said you didn't have replication, but what's the story?

A: Let's see. First of all, we have transaction logs, so you can have a second system running in what we call warm-standby mode, continually reading transaction logs off the primary system. We also have customers using operating-system-level disk replication, RAID of some sort or another, to replicate the files on disk that we're managing; basically RAID 0+1, so if one of the disks fails, you yank it out and it recovers. And if you take a power hit, the disk objects are all transactional with the transaction logs: if the power fails, the system will automatically replay everything up to the last committed transaction when it recovers.

As for the transaction model, it's what we call an isolated-view model. There's a begin-transaction method, and that gets you a stable view of all the objects in the system. You can do explicit object locking if you want to place a write lock or a read lock on an object, but by default we do what we call optimistic concurrency control: when you get to the point of trying to commit, we analyze the changes since you began, and if there are no conflicts, we let you commit and then update your view to current. Otherwise you get an exception back, and you have the opportunity to retry. We also have Smalltalk code and classes called reduced-conflict collection and dictionary classes, which can, for example, selectively abort an addition to a dictionary and replay it using the current view, and we plan to expose some set of those reduced-conflict classes to Ruby.
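The optimistic commit protocol just described can be sketched in a few lines of Ruby. This is a toy model under assumed names (ObjectStore, begin_transaction, per-object version counters), not GemStone's implementation; real conflict detection works on object write sets inside the VM, and a refused commit surfaces as an exception rather than a return value.

```ruby
# Minimal sketch of optimistic concurrency control: each transaction gets a
# stable snapshot; at commit time we check whether anything it wrote was
# changed by another committed transaction in the meantime, and either
# apply the writes or refuse so the caller can retry.
class ObjectStore
  def initialize(data)
    @data = data
    @versions = Hash.new(0)   # per-object commit counters
  end

  def begin_transaction
    { snapshot: @data.dup, seen: @versions.dup, writes: {} }
  end

  def commit(txn)
    conflict = txn[:writes].keys.any? { |k| @versions[k] != txn[:seen][k] }
    return false if conflict                # caller gets the chance to retry
    txn[:writes].each do |k, v|
      @data[k] = v
      @versions[k] += 1
    end
    true
  end
end

store = ObjectStore.new(balance: 100)
t1 = store.begin_transaction
t2 = store.begin_transaction
t1[:writes][:balance] = t1[:snapshot][:balance] + 10
t2[:writes][:balance] = t2[:snapshot][:balance] - 25
store.commit(t1)   # succeeds
store.commit(t2)   # refused: t1 committed :balance after t2's snapshot
```

A reduced-conflict collection, in these terms, is one that reacts to the refusal by re-reading the current view and replaying its operation instead of giving up.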
We want that really early on, because obviously you need some kind of reduced-conflict map to have multi-user access to it. So if you don't use explicit locking, then yes, the reduced-conflict classes are how you deal with conflicts; and if it's not feasible to replay, then you'd have to abort and just reapply the operation from the beginning.

Q: I'm curious what the proposed mechanisms are going to be for getting data out of the shared persistent storage and into, say, my enterprise data. How do you bridge that? Will that be similar to the mechanism used by GemStone/S?

A: Well, we do have object-to-relational mapping support in the Smalltalk product. Bob might want to talk about that.

Bob: I can talk to that a little bit. We will be supporting ActiveRecord, so if you're talking about needing to access relational data from a Ruby environment, you can certainly use ActiveRecord to do that. There are a number of scenarios. You might want to pull your objects in from your relational database and forever after keep them in MagLev; and then there's the round trip, the other direction, where you really do want to push that stuff out as an extract into a large data warehouse or something like that. I think either one of those is feasible. The one thing that surprised me a little, looking at this some time ago, is that there really isn't an XA protocol in Ruby; when you get into two-phase-commit land, there's nothing in the language that I know of, to date, that would support that. But to answer your question: to me it's an absolute requirement that ActiveRecord is supported, or other data-mapping technology, DataMapper and so on. I would think that some of the folks who write that code would be interested in talking to us, and we are interested in talking to them as well, about how it fits into MagLev and making sure that they run. It'll take a little bit of time, but absolutely: like I said earlier, there's a lot of relational data out there, and we understand the need to access it.

Okay, well, thank you very much for coming.