Hello everybody, I'm very happy to see you all here and to have the IIIF Showcase starting today. What I will do is talk to you a bit about the current IIIF implementations within the large-scale infrastructures we serve at the Göttingen State and University Library to support research, and especially the digital humanities. I'm leading the research infrastructures programme in the research and development department of the library here, so I will show you a bit of what we are doing.

For a start, to indicate the importance of IIIF in this interdisciplinary field, we can look at the most important international conference, organized by the Alliance of Digital Humanities Organizations, and search just for the term IIIF in the conference books of abstracts. If we look over the past few years, starting with the 2013 conference, we can see an increasing number of hits: from 0, 1, 0 and 5 up to 22, then 28, and at last year's conference in Mexico 46 mentions of IIIF in the book of abstracts.

Here at the Göttingen State and University Library we started working with IIIF in 2013, first in a project implementing Shared Canvas, and in 2014 we launched our first IIIF image server and a corresponding Presentation API for the TextGrid infrastructure, which I will talk about a bit later. Since then we have been steadily increasing the use of the IIIF framework in our projects. Just to present the latest one, a project focusing on really outstanding material: it is called Maps of God, it was recently launched, and it is going to use and extend our IIIF implementation for preparing annotations on large scrolls. The goal of the project is to encode and annotate the kabbalistic tree in its variety of manifestations, and together with partners from Israel we are going to prepare the platform for the topographical encoding and the semantic annotation of large parchment scrolls and also smaller forms.
To set up a database like this we will of course use a combination of state-of-the-art technologies, and IIIF really plays a major role here. We can build upon the infrastructure and experience that we have developed over the last years, which I will present to you a little bit.

We provide digital infrastructures, especially but not only for the digital humanities, here at the SUB, and within two national joint ventures that we started early on and continue: DARIAH-DE and TextGrid.

First, DARIAH-DE. It is Germany's contribution to the Europe-wide collaboration between universities, memory institutions, computing centres and so on for a digital research infrastructure for the arts and humanities. DARIAH-DE operates a repository as a digital long-term archive for research data from the humanities and cultural sciences, and this is a central component of a whole research data federation architecture which aggregates various services and applications. For image objects it is based on IIIF: it uses the IIIF APIs for the publication of image objects. The Presentation API is fed by OAI-ORE RDF metadata and collection metadata entered via the web application. On the websites describing the published data we serve image previews via the Image API, as in the following example, and it is really any kind of image from research data that we feed here. This example is a visualization of the network of publications in the field of computer vision and pattern recognition, stored in the DARIAH-DE repository; it could be any image. The DARIAH-DE repository is a more generic version of the TextGrid repository, which we started first and which we recommend for textual data like digital scholarly editions.
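The image previews mentioned above are ordinary IIIF Image API requests, which follow the fixed URL pattern {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}. As a minimal sketch of how such a preview URL is assembled (the base URL and identifier below are hypothetical, not the actual DARIAH-DE endpoints):

```python
from urllib.parse import quote

def iiif_image_url(base, identifier, region="full", size="full",
                   rotation=0, quality="default", fmt="jpg"):
    """Build a IIIF Image API 2.x request URL.

    Pattern: {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
    """
    # Identifiers may contain reserved characters and must be URL-encoded.
    ident = quote(identifier, safe="")
    return f"{base}/{ident}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A thumbnail preview: full region, scaled to 300 px wide ("300," means
# width 300, height scaled proportionally in Image API 2.x).
url = iiif_image_url("https://example.org/iiif", "textgrid:12345.0",
                     size="300,")
print(url)
# https://example.org/iiif/textgrid%3A12345.0/full/300,/0/default.jpg
```

The same builder covers full-resolution delivery by leaving the defaults in place; a viewer like Mirador issues exactly these kinds of requests against the image server.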
TextGrid offers a long-term archive to explore textual resources and their facsimiles with the help of IIIF: the facsimiles are served via the digital image server, and the Presentation API is prepared by converting METS/MODS or TEI files and uses a specific metadata schema based on FRBR. Currently we are hosting 459 manifests for the Presentation API, and our image server provides access to 183,000 resources. As we are usually not the content provider ourselves, it is of course up to the community to increase these numbers. Currently there are seven projects taking advantage of these functionalities; among the domains covered here are history, codicology and German literature. An example can be found in a pretty new publication where TEI data is available: the TEI file is equipped with links to the facsimiles, and for these files the HTML representation in the repository offers a link to the manifest and to a view in Mirador, too.

The TextGrid repository is supplemented by a client software, the TextGrid Laboratory, and together they form a virtual research environment which provides tools for preparing digital scholarly editions. TextGrid is made for the text-based humanities and stands for interoperability by design, which is obviously why the IIIF implementation for the repository started as early as 2013. Addressing the whole research data life cycle, many other tools are offered for preparing the data, and I have just picked out the IIIF-related tools from that tool chain. We support several ways to ingest data into the TextGrid ecosystem, all well documented, and for some formats we replace file references by URIs during the ingest, so as to make them referenceable over the web. The image server is available to the client as well, to prepare and combine the resources, and we have a text-image link editor which is ready to connect textual data in XML files with sections of images via shapes layered upon the image.
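The manifests produced by that conversion step are IIIF Presentation API documents: one manifest per work, with a sequence of canvases, each painted by an image that points back to the Image API service. A minimal sketch of such a converter's output, in Presentation API 2.x terms (all URLs, labels and the hard-coded canvas dimensions are illustrative; a real converter would take the dimensions from each image's info.json):

```python
import json

def make_manifest(base, ident, label, image_service_ids):
    """Assemble a minimal IIIF Presentation API 2.x manifest with
    one canvas per image service."""
    canvases = []
    for i, svc in enumerate(image_service_ids):
        canvas_id = f"{base}/{ident}/canvas/{i}"
        canvases.append({
            "@id": canvas_id,
            "@type": "sc:Canvas",
            "label": f"p. {i + 1}",
            "width": 1000, "height": 1400,  # placeholder dimensions
            "images": [{
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "on": canvas_id,
                "resource": {
                    "@id": f"{svc}/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                    "service": {
                        "@context": "http://iiif.io/api/image/2/context.json",
                        "@id": svc,
                        "profile": "http://iiif.io/api/image/2/level1.json",
                    },
                },
            }],
        })
    return {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@id": f"{base}/{ident}/manifest",
        "@type": "sc:Manifest",
        "label": label,
        "sequences": [{"@type": "sc:Sequence", "canvases": canvases}],
    }

manifest = make_manifest("https://example.org/presentation", "edition-1",
                         "Sample edition",
                         ["https://example.org/iiif/page-1"])
print(json.dumps(manifest, indent=2))
```

A manifest of this shape is what the repository's HTML view links to, and what Mirador loads to render the facsimiles.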
With a graphical user interface you can then select shapes on an image and connect them to an XML resource. We have provided this tool since 2011 already and use TEI files to store the information, and the obvious next step is moving this tool forward to Open Annotation and IIIF, which is now to come: colleagues of mine will present a prototype in their talk here at the conference on Thursday, combining TEI and IIIF in a virtual research environment. So we are really building on IIIF as the core of all our research infrastructures and all our projects, and I can tell you, you can take advantage of all that has been mentioned before. It's great work and great fun with IIIF in the research and digital humanities community. Thanks for listening.
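To give a rough idea of what moving the text-image links to Open Annotation could look like: an annotation can target a TEI passage on one side and a rectangular canvas region on the other. The sketch below is only illustrative, not the prototype being presented; all URIs are hypothetical, and the XPath selector used for the TEI side is the one defined in the W3C Web Annotation model, the successor of Open Annotation.

```python
import json

def text_image_link(anno_id, tei_uri, tei_xpath, canvas_uri, x, y, w, h):
    """Sketch of an annotation linking a TEI element to a rectangular
    region of a IIIF canvas, roughly as a text-image link editor
    might emit it."""
    return {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@id": anno_id,
        "@type": "oa:Annotation",
        "motivation": "sc:painting",
        # The TEI passage being linked, addressed by an XPath selector.
        "resource": {
            "@id": tei_uri,
            "@type": "dctypes:Text",
            "format": "application/tei+xml",
            "selector": {"@type": "oa:XPathSelector", "value": tei_xpath},
        },
        # The image region, addressed by a media fragment on the canvas.
        "on": f"{canvas_uri}#xywh={x},{y},{w},{h}",
    }

link = text_image_link(
    "https://example.org/anno/1",
    "https://example.org/tei/edition-1.xml", "//tei:lb[@n='3']",
    "https://example.org/presentation/edition-1/canvas/0",
    120, 340, 800, 60)
print(json.dumps(link, indent=2))
```

Non-rectangular shapes, as drawn in the editor's shape layer, would use an SVG selector on the canvas side instead of the simple xywh fragment shown here.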