Artivity. Artivity is a collaboration between the University of the Arts London and Semiodesk, and we've been lucky to receive funding from Jisc. The problem that we're trying to address is this. Since the development of conceptual art and performance art, what we have discovered is that the process that leads up to an artwork is as important as the artwork itself. In a gallery context, of course, you do not see the process, the method or the research that happened before the artwork; you only see the artwork. So typically, and excuse my textbook example, when you look at a Jackson Pollock painting, if you don't know who he was, when it was painted and what was happening at that time, it is impenetrable; you don't understand anything. A more recent example is this artwork by Gino Ballantyne, which again is very difficult to understand if you don't know his work.

So the solution. What we do is we try to capture the process. Historically people have done that by visiting the artist in the studio, taking interviews, observing them and then writing up a text and communicating what they think was happening. Sometimes they had a camera with them: they set up the camera, they press record, they ask the artist to do something and they capture it. Okay, these are good things to do, but actually they focus on the making event. They do not really capture the process, the method. Artists have methods. They do things, they solve problems. So methods like that for recording process have been criticized. Now, in a digital art landscape, things are slightly different, because the production of the artwork, the research that led to that production and all the communication and everything, they all happen on the same computer, in the same box, so we can capture the technique, the methods and the output itself. And because it's all on the computer we can program it and we can automate it, which is important. Artists don't want to spend time documenting their work. They want to make the work; they want somebody else to do that stuff.

The idea for Artivity started from Nepomuk. I don't know if you are familiar with this project, but it's the idea of the semantic desktop, whereby all the interactions of the user with the desktop, with different applications and data, can be modeled in one framework and recorded. That led to the Zeitgeist project in GNOME, which some of you may have followed. But basically the technology behind it is RDF, the Resource Description Framework. It's a generic format to store any kind of information. You have a subject, you have a property of the subject, you have a value of the property, and with these three things you can capture everything that is going on on your desktop. Since Nepomuk, other projects have proposed new models and new ways of capturing this information; we are using one of them, the W3C provenance ontology. I'm not going to go into detail, but we can discuss that later.

We gave a laptop with Artivity running to Gino Ballantyne, one of our artists who is testing it. He dropped it back two days later. I was expecting two or three files there. We got 40. The numbering up here doesn't follow any logical sequence; file names and dates are all over the place. We didn't really know how he ended up building that artwork that I showed you. We ran a query to tell us that. We've got Artivity data in the background; we can run a query and get that sequence.
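As an aside on what a query like that can look like: below is a minimal sketch, in Python with rdflib and the W3C PROV-O vocabulary, of how editing events can be stored as subject-property-value triples and then ordered again with a SPARQL query. The file URIs, timestamps and activity identifiers are invented for illustration; this is not Artivity's actual schema.

```python
# Minimal sketch: model editing sessions as PROV-O activities and order them by time.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("prov", PROV)

# Each editing session is a prov:Activity that used a file and has a start time.
# The paths and timestamps below are made up.
events = [
    ("file:///home/gino/drawing-01.svg",  "2015-05-01T10:02:00"),
    ("file:///home/gino/columns.svg",     "2015-05-02T15:30:00"),
    ("file:///home/gino/final-score.svg", "2015-05-03T09:12:00"),
]
for i, (path, started) in enumerate(events):
    activity = URIRef(f"urn:artivity:activity:{i}")
    g.add((activity, RDF.type, PROV.Activity))
    g.add((activity, PROV.used, URIRef(path)))
    g.add((activity, PROV.startedAtTime, Literal(started, datatype=XSD.dateTime)))

# "Run a query and get that sequence": order the files by when they were edited.
ORDER_QUERY = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?file ?started WHERE {
    ?activity a prov:Activity ;
              prov:used ?file ;
              prov:startedAtTime ?started .
}
ORDER BY ?started
"""
for row in g.query(ORDER_QUERY):
    print(row.file, row.started)
```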
This is what happened. He started off doing linear drawings. Then they became a little bit more aggressive, and then a little bit more stylized maybe. He started doing a lot of copying and pasting to a larger scale. He created these columns of linear drawings. Then he maybe started mirroring other abstract shapes, and this one as well. He brought all of these things together into this file and produced the artwork. Artivity can tell us about the process, the technique, how the file came together. That's one thing. It still doesn't tell us what it is.

Another query. Why don't we look at the browsing history and the files that Gino was looking at while producing that artwork? This is what he was looking at. He was looking at manuscripts. Manuscripts are typically arranged in columns, like that one, for example, or there. Perhaps what he was trying to do with this copying and pasting to create the columns was to produce a manuscript. If that is a manuscript, it means that the linear drawings are symbols. Not letters, but symbols. Another piece of information: he downloaded and used this file as a background to his artwork. This file is an empty musical score. It's what musicians use to write notes on. It's everywhere in the background. We've got the linear drawings as symbols in front of the musical score. Perhaps these things are actually music. We've got musical notation. Musicians write music all the time. Why did he have to do a new one? This is the next clue that we get: the fact that he was based in the Chelsea College of Arts studios alongside 20 students preparing for their show, and these studios, when 20 students are in there, are very noisy. This is not music. This is noise. Perhaps what Gino was doing is creating a musical score of the noise, of the surge of creativity in the college at that time. This is what Artivity does. It gives us an interpretation of what has gone on in the artist's mind.

Potentially we can answer questions about the development of an artist's artwork, or of a group of artists, or even of a whole domain. Imagine having data over 10 years, lots of data. Who cares? Like I said, artists care, because they don't have to do the documentation. The computer does it for them, so that's important. Art historians care. Imagine that in 10 or 15 years' time they will be writing about the art history, the digital art history, of today. What will they be doing? Will they be looking for archives? That's what art historians do. Where are the archives? On the computer, but we're not actually recording this stuff. Of course, we could consider all sorts of other things about technique and so on. We've got various user groups, but for this community it's perhaps important to say that Artivity can be used to track the way that artists use your software. What features are more popular? When is it that people press undo? Why do they press undo? What are they trying to do at that point that fails? Perhaps we can use Artivity to understand how creative software is being used; there is a small sketch of such a query below.

All these things would just have been ideas in my mind if it wasn't for Sebastian and Moritz getting involved in the project and transforming it into something much bigger and much more important. I'm really grateful to them for that. I will let Sebastian now talk to you about the more technical stuff behind Artivity.
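A rough illustration of that software-usage idea: a minimal sketch, again in Python with rdflib, that counts how often each kind of editing event (for example, undo) occurs. The event classes used here, such as UndoEvent, are invented for this example and are not Artivity's actual ontology.

```python
# Minimal sketch: count recorded event types to see which features are used most.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ART = Namespace("http://example.org/artivity/")  # made-up vocabulary for the sketch

g = Graph()
# Pretend these events were recorded transparently by an editor plugin.
recorded = ["DrawEvent", "DrawEvent", "UndoEvent", "DrawEvent", "UndoEvent"]
for i, kind in enumerate(recorded):
    event = URIRef(f"urn:example:event:{i}")
    g.add((event, RDF.type, ART[kind]))

# "What features are more popular? When is it that people press undo?"
# A first step is simply counting how often each event type occurs.
COUNT_QUERY = """
SELECT ?type (COUNT(?event) AS ?n) WHERE {
    ?event a ?type .
} GROUP BY ?type
"""
for row in g.query(COUNT_QUERY):
    print(row.type, row.n)
```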
Hello, thank you for the nice presentation. It's Moritz and Sebastian. We're from Semiodesk in Augsburg and we've been participating in the Artivity project to actually implement the software. It was based on Zeitgeist in the beginning, and we moved on to base it on RDF and Nepomuk so that it becomes more flexible than Zeitgeist; we figured out Zeitgeist is pretty limited. So, what I'd like to talk about is just to give you a quick overview of how the application actually works, what there is and what it can do, and a little bit about the software architecture. And I'd like to talk about what we're going to be releasing in the next couple of weeks, in the last phase of the Artivity project that is being funded by Jisc.

In the current release, which is primarily developed on Linux and based on the GTK toolkit, Artivity is nothing more than a recently-used list. Since it records transparently in the background, you're presented with the files that you've been using, and you can access the details of a file by just clicking on it. It will show you the regions of the file that you have modified, it shows you per editing session how many interactions there have been, and it does, of course, show you all the details and the data that has been recorded. So it's a very simple user interface, because most of the artists are not very familiar with defining complex queries in SPARQL. That's where the real power is. But it's simple enough to give an impression of how stuff works. We also added the possibility to export data into RDF format, of course, and CSV, so it can be imported into statistical applications and reused in office software and wherever.

And of course, as Thanasis has already said, it's not only recording the artistic data from the programs but also the browsing history, because that is the context that belongs to the picture. So we have browser plugins that currently are very simple. They allow you to disable or enable the capturing, and once they capture, the browsing history is recorded. Then, later on, you can start asking queries: which pages were visited during some editing session, and five, ten minutes later, and so on. So that's what Artivity is currently presenting to the user. It's very simple.

Our platform support is, as I said, rather limited to Linux currently. We have integrated it into Inkscape, we have a plugin for Krita, and we have plugins for most of the web browsers that are common. I have to note that Inkscape is the application that we started out with in the beginning, because it's a vector application and it was simply the simplest one to get into. The problem is that it doesn't have extensions. Soon it will.

The architecture of the current release is very simple. We have, at the bottom, an RDF database, that's OpenLink Virtuoso. It's the same one that was used for KDE, for Nepomuk-KDE, which is pretty capable. We slimmed it down a little bit so that it doesn't have to run as a fully-fledged server. Then we have a very simple HTTP REST API, simply so that the browsers are able to talk to Artivity, because the browsers' native plugin API has been disabled and currently browser plugins are being written in HTML only. There is the native GTK GUI, the Artivity Explorer, that can directly talk to the OpenLink Virtuoso database.
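To make the REST API part concrete: a minimal sketch, in Python, of the kind of request a browser plugin could send to a locally running Artivity service to record a visited page. The port number, endpoint path and JSON fields are assumptions made up for this illustration, not the project's documented API.

```python
# Minimal sketch: a browser plugin reporting one browsing event to a local REST API.
import datetime
import requests  # pip install requests

# One browsing event, as a plugin might report it while the artist works.
event = {
    "type": "web-browsing",
    "url": "https://example.org/medieval-manuscripts",
    "title": "Medieval manuscripts",
    "time": datetime.datetime.now().isoformat(),
}

# The service is assumed to listen on localhost only, so the recorded data
# stays on the artist's machine unless they choose to publish it.
response = requests.post(
    "http://localhost:8272/artivity/api/1.0/activities",  # hypothetical endpoint
    json=event,
    timeout=5,
)
response.raise_for_status()
```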
In the upcoming release, we are doing some fundamental changes and we'd like to move the project huge steps forward. One of the biggest features is multi-platform support. We figured out that many artists use macOS. If you want some real data, you need to go where the artists are. We redesigned Artivity to work across all platforms. That's why we switched from a native GUI to a web GUI which directly talks to the REST API via JSON. That's a very nice step, because it turns out there are pretty good statistical and image-processing JavaScript libraries available, which just makes development of the GUI much quicker than using GTK, and we have many more possibilities there. On the other hand, nothing else has really changed, because it was pretty good, so that's the new architecture.

For this phase, one of the biggest new features, which we have not yet started but which will be very good, I think, is the video recording. We will allow recording a single frame per editing step into a video, so that we can also capture the image data and have a visual recap of the whole process. Right now Artivity is pretty abstract and all about data, but once you have that data along with the images and can see how the artist actually worked, it becomes much more powerful, because then you can do bitmap analysis and color analysis and lots of nice things. The other thing is that we are working on publishing methods, to be able to publish the results to repositories such as EPrints, which are common in academic use, and to allow the artist to add comments and notes, because they might be helpful in understanding what's going on.

Zero minutes, okay, I'm finished. So, on new platform support: we go to the other operating systems, and we also added support for the Adobe software suite, but that doesn't belong here. You can find this on Bitbucket and we are preparing a new website that will be launched pretty soon. If you have any questions, just ask me or Thanasis. Thank you very much.
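As a final aside on the frame-per-editing-step recording mentioned above: a minimal sketch of that idea in Python with Pillow. The snapshot source and the output layout are hypothetical; the planned Artivity implementation may work quite differently.

```python
# Minimal sketch: store one canvas snapshot per editing step, to be assembled
# into a video later.
from pathlib import Path
from PIL import Image  # pip install pillow

FRAME_DIR = Path("artivity-frames")
FRAME_DIR.mkdir(exist_ok=True)

def save_frame(step: int, canvas: Image.Image) -> Path:
    """Store one snapshot per editing step; file names sort in editing order."""
    path = FRAME_DIR / f"frame-{step:06d}.png"
    canvas.save(path)
    return path

# Stand-in for real canvas snapshots that an editor plugin would hand over
# after each recorded edit; here they are just blank images.
for step in range(3):
    canvas = Image.new("RGB", (640, 480), "white")
    save_frame(step, canvas)
```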