I'm presenting ZTSDB, a time series database management system. The high-level goals are to handle many incoming time series, potentially from different sources; to get real-time access in R, where the statistical analysis can be handled best; and to delegate the heavy lifting to the database.

The main concepts of ZTSDB: first and foremost, it is a NoSQL database. Instead of SQL, we provide an R-based programming language that is used for queries, for programmatic access, for backup, copy, transform, and distribution, and for the automation of all of these. We also provide the usual R structures and, in addition, an n-dimensional time series type and nanosecond-resolution temporal types.

It offers immediate and symbiotic access via R: it can access live data with little lag, and it allows transformations in real time, so, for example, the minute mean, the minute median, or a custom function can be applied to incoming data. It has time-specific operations, so it is possible to subset on a complex set of time intervals, or to combine time series via alignment. And finally, it has seamless connectivity between database instances, which means that any database instance can execute arbitrary code on another database instance.

Connecting to a ZTSDB instance is very simple: one loads the interface package and then creates a connection by providing the IP address and the remote port. Each such connection creates a new remote context, so it is a multi-user database. The same command can be used between ZTSDB instances.

The R interface implementation leverages R's lazy evaluation to make it possible to write those queries unquoted inside an R session. It overloads the query operator, the question-mark operator, and the arguments to that operator are the actual queries. They are not evaluated by R; instead, they are parsed locally, and the whole abstract syntax tree is sent for evaluation to a remote ZTSDB instance.
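As a minimal sketch of such a session (the package name `rztsdb` and the `connection()` constructor are assumptions for illustration; only the overloaded `?` query operator is described above):

```r
## load the ZTSDB interface package (package name assumed here)
library(rztsdb)

## create a connection to a remote ZTSDB instance by IP address and port;
## each such connection opens a new remote context on the multi-user database
c1 <- connection("192.168.0.1", 18000)

## the overloaded '?' operator does not evaluate its argument in R;
## the expression is parsed locally and its abstract syntax tree is
## sent to the remote instance for evaluation
c1 ? (1 + 1)        # evaluated remotely; the result 2 is sent back

## a whole assignment can also be evaluated remotely...
c1 ? (x <- 1 + 1)
## ...and the result of that assignment retrieved afterwards
c1 ? x
```

The same connection command works between two ZTSDB instances, which is what enables one instance to execute arbitrary code on another.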
In the first example, the expression 1 + 1 is parsed and sent for evaluation, and we get the result 2 back. In the second case, it is the whole assignment that is evaluated remotely, and it is then possible to retrieve the result of that assignment.

As further examples of queries, assume that a list of time series exists on a remote instance: we can pick a specific time series within this list and display its last six rows. We can also bring the whole time series back into R, assign it to a local variable, and then manipulate it locally in the R session.

ZTSDB has seamless data exchange: an escape operator forces local evaluation of part of a query. In the first example, the second Sys.time call is escaped and evaluated locally, and the result of this evaluation is then bundled with the query and sent as part of the whole expression to be evaluated remotely. As further examples, we can send a local matrix to be appended to a remote matrix, or append a remote matrix to a local matrix.

Persistence is implemented with Linux memory-mapped files, and the matrix, array, and time series types can all be declared persistent. It is fairly fast: it takes about eight seconds to retrieve one billion 64-bit floating-point numbers from disk into an R session on a standard laptop.

Distribution offers a lot of flexibility; it can also be hierarchical, because push and pull are possible between any pair of database instances. Here is an example where a set of database instances work as data collectors and transformers, potentially filtering out part of the data or computing aggregations, which are then consolidated on two database instances that can in turn be queried by an arbitrary number of R sessions.
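The query and data-exchange patterns above can be sketched as follows (the names `tss`, `eurusd`, the matrices, and the `esc()` spelling of the escape operator are illustrative assumptions, not the actual ZTSDB syntax):

```r
## pick one time series from a remote list and display its last six rows
c1 ? tail(tss$eurusd, 6)

## bring the whole time series back into R and assign it locally,
## then manipulate it in the local R session
z <- c1 ? tss$eurusd
colMeans(z)

## the escape operator (written 'esc()' here for illustration) forces
## local evaluation of part of the query; both time bounds are computed
## locally and bundled with the query before it is sent
c1 ? tss$eurusd[esc(Sys.time() - 3600) : esc(Sys.time())]

## append a local matrix to a remote matrix: the escaped local value
## travels with the query and the rbind happens remotely
c1 ? (remote_m <- rbind(remote_m, esc(local_m)))

## append a remote matrix to a local matrix: the query pulls the
## remote value back and the rbind happens locally
local_m <- rbind(local_m, c1 ? remote_m)
```

The direction of each transfer is decided simply by where the evaluation happens: escaped sub-expressions move data outward with the query, while the query result moves data back.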
The status of the project is alpha; it is not production-ready, and any testing, feedback, and criticism is very much appreciated. Finally, further information can be found by following these links.