openEO is the main thing I'm working on; STAC was basically a side project that I started because I needed it for openEO. openEO is currently a project funded by the European Commission, and the idea behind it is to create an interoperable geoprocessing API for cloud services.

So why do we do that? At the moment, if you want to geoprocess data, you typically work in R, Python, JavaScript, or whatever language you use, and then you need to connect to one of the cloud services. They all have their own APIs and specifications for how to process data: whether it's, for example, data-cube-based or tile-based, whether results can be downloaded as GeoTIFF, and so on. They all handle billing and data storage differently. So for each of these services you need a different client, and you need to learn how to process data there. Once you start working with one of these services, you're locked in, because you've invested the time to learn it, and of course you want to keep working with it; switching to something different is always painful.

What we think is better is this: you have one client, in R, Python, or JavaScript, which is what we support at the moment, and a streamlined API in between that translates everything to a single interface: one way to process, in our case a data cube, one way to build workflows, one way to download results, and so on. Most of you probably know GDAL, which translates between GIS programs and data formats; openEO is basically some kind of GDAL for the cloud, I guess. That helps you do reproducible research, because you can take the application or algorithm you wrote from one provider to another.
So if you run things on Google Earth Engine first and want to check whether what they computed for you is really true, you can take the code you wrote in R, change the URL and some other minor things like pre-processing, and turn it into code that runs, for example, on VITO's PROBA-V MEP or any other cloud processing provider you are aware of. In that sense it's portable, to some extent. And that's how we think it should be in the future: you have very simple access to data in the end and don't need to write proprietary workflows for every cloud provider.

As I said, it's a language for geospatial processing. On one side we have the API, which is the translation layer between the clients and the servers, and on the other side a set of predefined processes, which tries to make processing interoperable. If you compute things in Python with xarray and in R with stars or another package, the processes may differ slightly in how they compute things, so we define them at a higher level, so that you can use the same processes regardless of which computation software runs in the background.

In contrast to STAC, openEO is focused on processing, while STAC is focused on search and discovery. It's open source: all the software we develop here is open source, and the specification as well. We focus on data cubes, which is a bit of a change from the traditional GIS workflow, where you download individual tiles and process based on them; here everything is wrapped into a data cube that you process on.
And we support UDFs, user-defined functions, which is very interesting because it allows you to send your own R or Python code to the server. The predefined processes we have at the moment are fairly narrow, in the sense that you can't use custom libraries that compute some very advanced algorithm we don't support yet. If you need specific libraries, for example to be faster for some computations, you can run them as a UDF: you write script code in Python or R, send it to the server, and it's executed in the cloud for you.

So what is openEO not? Well, it's not another cloud provider; we just specify the API and the translation layer. It's not another geoprocessing software, so we're not writing a new ArcGIS or something like that; it's really just a translation. And it's not the previous traditional GIS workflow, where you download the data as tiles and then process them locally. It's all cloud-based: your algorithm goes to the data, which is stored in large amounts in the cloud, and you get the result back, not the other way around. Of course, we are defining a new standard here, and in that sense we could run into the issue that afterwards there are 15 competing standards, but I hope not.

So the API, the translation layer in between, offers the following functionality. First, it gives you the basic information, so it provides discovery: how the API works, what it supports, the EO data you can use in this workflow, which is exposed via STAC collections and the STAC API, and the processes, which is basically a list of the processes supported by the backend. It also supports authentication with OpenID Connect.
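The discovery step above can be sketched in code. A backend answers `GET /` with a capabilities document that lists which endpoints it supports; the concrete values below (versions, titles, the exact endpoint list) are made up for illustration, only the overall shape follows the openEO API:

```python
import json

# Abridged, hand-written example of a capabilities document, the
# response an openEO backend gives to `GET /`. All values are invented.
capabilities = json.loads("""
{
  "api_version": "1.0.0",
  "backend_version": "0.9.1",
  "title": "Example openEO backend",
  "endpoints": [
    {"path": "/collections", "methods": ["GET"]},
    {"path": "/processes", "methods": ["GET"]},
    {"path": "/jobs", "methods": ["GET", "POST"]}
  ]
}
""")

def supports(caps, path, method="GET"):
    """Check whether the backend advertises a given endpoint."""
    return any(
        e["path"] == path and method in e["methods"]
        for e in caps["endpoints"]
    )

print(supports(capabilities, "/processes"))  # True
print(supports(capabilities, "/services"))   # False: not advertised
```

A client would do this check before, say, offering batch jobs in its UI, since not every backend implements every part of the API.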
Then you have workflow management, where you can store your own user-defined processes. If you, for example, build a new algorithm from the predefined processes we have, you can store it as a user-defined process and use it just like the predefined ones from the backend. It's really integrated, so you can pass your algorithms around, run them on other backends, or share them with other users for reuse.

Then there's file management, where you can upload assets, for example a GeoJSON file you need to pass in, or download files; all of that is handled via a central file management API.

Of course, then there is the processing itself. You can process synchronously: you send the request to the server and immediately, or hopefully within a matter of seconds, get a response back with the result; that of course only works for limited extents and data volumes. For bigger things you can use batch jobs, where you also send the request to the server, wait for however long the processing takes, and then download the results, again as a STAC catalog with the appropriate files in it.

The third thing is web services. There is an API to host, for example, WMS or WCS through openEO, or other services you want to expose. So we don't redefine things for viewing and so on, but rely on the standards that are already there, defined mostly by OGC. But you can also expose non-standardized things like the XYZ tiles used by OpenStreetMap, for example.

Processes I already mentioned: there is a set of predefined processes, at the moment around 130, I think, for band math, for loading data into data cubes, working on data cubes, renaming dimensions, adding new values, and so on.
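A stored user-defined process is, under the hood, just such a dependency graph in JSON. As a minimal hand-written sketch (the collection id and extents are made up; the node structure with `process_id`, `arguments`, `from_node` references and a single `result` node follows the openEO process graph format):

```python
# A minimal openEO-style process graph: load a collection, save it as
# GeoTIFF. Nodes reference each other via "from_node"; exactly one node
# is flagged "result": the graph's output.
process_graph = {
    "load1": {
        "process_id": "load_collection",
        "arguments": {
            "id": "SENTINEL2_L2A",  # hypothetical collection id
            "spatial_extent": {"west": 7.0, "south": 51.5,
                               "east": 7.2, "north": 51.7},
            "temporal_extent": ["2020-01-01", "2020-01-31"],
        },
    },
    "save1": {
        "process_id": "save_result",
        "arguments": {
            "data": {"from_node": "load1"},  # depends on the load node
            "format": "GTiff",
        },
        "result": True,  # marks this node as the final output
    },
}

# A backend resolves the graph by finding the result node and walking
# its "from_node" references backwards.
result_nodes = [k for k, v in process_graph.items() if v.get("result")]
print(result_nodes)  # ['save1']
```

Because the graph is plain JSON, it can be stored on a backend as a user-defined process, sent to a different backend, or handed to another user, which is exactly what makes the workflows portable.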
You can visit processes.openeo.org to see the list. Based on the predefined processes, you can then define your own user-defined processes, which internally are just dependency graphs with instructions for how to work on the data. And then there are UDFs again, the thing I talked about before, where you write your R or Python scripts and send them to the server as part of the other processes. So you can say: I use a predefined process to load data, that data gets passed to the UDF process, then you can further compute on it with other predefined processes, and then you are ready to download the data.

We have several clients implemented at the moment. We are tackling JavaScript, Python, and R, which should cover most of the geospatial community, I guess; maybe there's Julia in the future as well, but we'll see. We also have a browser-based application for users who are not so much into programming; it works pretty much like the model builder in ArcGIS or QGIS. Then there is a QGIS plugin, so you can start jobs from QGIS, download the results, and show them in QGIS directly. And there is a mobile app you can use as well.

This is a screenshot from the web editor, for example. You see the workflows at the top in the middle, the management panels at the bottom, and a list of processes and collections you can drag and drop into the model builder. On the right there is a map you can use to view the data; I think there is some NO2 visualization on the map at the moment.

This is how, for example, an EVI computation would look; this one is R, yes, it is R. It's pretty easy: you connect to the web service with a URL, and of course you will be prompted for username and password. Then you basically create a data cube.
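The EVI computation mentioned here is ordinary band math applied per pixel. As a plain-Python sketch of the formula (these are the standard EVI coefficients; the sample reflectance values are made up):

```python
# EVI (Enhanced Vegetation Index) band math, per pixel:
#   EVI = 2.5 * (NIR - RED) / (NIR + 6*RED - 7.5*BLUE + 1)
def evi(nir, red, blue):
    """Compute EVI from NIR, red and blue reflectances."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Example reflectance values, invented for illustration:
print(round(evi(nir=0.5, red=0.2, blue=0.1), 3))  # 0.385
```

In an openEO workflow this exact arithmetic is expressed with the predefined band-math processes and executed server-side over the whole cube instead of one pixel at a time.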
You load data, in this example Sentinel-2 again. You can specify the spatial extent, the temporal extent, and the bands to be loaded, and that gets loaded into a data cube. Then, in this case, you reduce the band dimension and do some band math on the bands, here the EVI computation. Then you reduce the temporal dimension to get a minimum composite, and save the result as GeoTIFF.

The same thing you can do with Python; it looks very similar. You can use functions as usual in Python; the operators here are overloaded, and behind the scenes they are translated into our internal representation and sent to the server.

We already have several server implementations that you can reuse or extend if you want. There's a GeoPySpark/GeoTrellis implementation. There is a Google Earth Engine implementation, so you can run our scripts on Google Earth Engine as well, for free. There's the GRASS GIS Actinia implementation; you can go to Markus's talk at 2 p.m. to hear more about that. There's the JRC Earth Observation Data and Processing Platform from the European Commission, an OpenStack implementation, access to Sentinel Hub, and a server implementation for WCPS, which in the end is rasdaman.

There's a bit of an ecosystem we also developed, for example the openEO Hub, where you get an overview of which servers you can process on. You can, for example, pass in the algorithm you implemented, and it tells you which servers can run it; it gives you information about which data is available, what it costs, and so on. You can also share your user-defined processes, your UDFs, and so on there. And then there is a validator, of course, to check whether API implementations are valid; it checks the structure of the API, whether the responses are valid.
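The operator overloading mentioned above can be sketched with a toy class (this is not the actual openEO client code; the `Expr` class is invented for illustration, though process names like `array_element`, `subtract`, `add` and `divide` follow the openEO predefined processes). The point is that arithmetic on band expressions computes nothing locally; it just records process-graph nodes that are later serialized and sent to the server:

```python
class Expr:
    """Records a process-graph node instead of computing a value."""

    def __init__(self, process_id, **arguments):
        self.process_id = process_id
        self.arguments = arguments

    def _lift(self, other):
        # Plain numbers become constant nodes.
        return other if isinstance(other, Expr) else Expr("constant", x=other)

    def __add__(self, other):
        return Expr("add", x=self, y=self._lift(other))

    def __sub__(self, other):
        return Expr("subtract", x=self, y=self._lift(other))

    def __truediv__(self, other):
        return Expr("divide", x=self, y=self._lift(other))

nir = Expr("array_element", label="B08")  # band labels are examples
red = Expr("array_element", label="B04")

ndvi = (nir - red) / (nir + red)  # builds a graph; no math happens here
print(ndvi.process_id)                 # divide
print(ndvi.arguments["x"].process_id)  # subtract
```

This is why the R and Python snippets look like ordinary local code: the client captures the expression tree and ships it to the backend, where the actual computation runs.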
And it also checks whether the processed results are valid, so there is a way to check whether there are differences between the backends that come from the processing. When you visit processes.openeo.org, you see a rendered list of processes, generated with our documentation generator for processes. For the data discovery part you can also reuse the STAC and OGC API - Features ecosystem, because the API is fully compliant with those standards. And if you expose a WMS, you can of course just use the WMS clients you are aware of.

The state of openEO at the moment is that all these partners are working on it, and maybe you will too in the future. We have just released version 1.0 release candidate 1, so we are pretty much going into stable mode now, after experimenting for two years with what works best and what doesn't. The project ends in the third quarter of the year, so then we can expect a stable version that you can really rely on. And that's it for my two talks; now I need some water. Thank you for listening, and I'll take your questions.

We have some time for questions. Anyone? [Question: can STAC and openEO be used together?] Yes. And regarding maintenance: there are a couple of companies basing their current and future work on openEO, EODC and VITO for example. They are already pushing things internally so that their internal and external users are using it, so they need to continue with it, of course, because they have clients relying on it. There are also further projects we want to establish based on openEO, so I hope that will make it future-proof.
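The structural part of the validation described above boils down to checking responses against the shapes the specification requires. As a minimal sketch of the idea (the real validator checks against the full OpenAPI schema; this hand-rolled check and its field list are simplified for illustration):

```python
# Toy structural check for a `GET /processes` response: the openEO API
# requires a "processes" list (each entry with an "id") and a "links"
# array. Returns a list of problems, empty if the shape is fine.
def check_processes_response(body):
    errors = []
    if not isinstance(body.get("processes"), list):
        errors.append("'processes' must be a list")
    else:
        for i, proc in enumerate(body["processes"]):
            if "id" not in proc:
                errors.append(f"process #{i} is missing 'id'")
    if "links" not in body:
        errors.append("'links' is required")
    return errors

ok = {"processes": [{"id": "load_collection"}], "links": []}
bad = {"processes": [{}]}
print(check_processes_response(ok))   # []
print(check_processes_response(bad))  # two problems reported
```

Result validation, the second part, is harder: it runs the same process graph on different backends and compares the outputs, which is how processing differences between implementations are caught.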
Regarding the user base, we have some use cases running at the moment to really check whether everything we built works; that's a broad range of things, snow cover analysis, agriculture, and so on, but there could be more, of course. The thing is, if you start something new, it's hard to find people who want to hop on a thing that is not stable yet, but we're working on that and it evolves over time. We also have some meteorological things planned for the future with ECMWF, so that's the hope for the future.

For openEO, everything is licensed under the Apache 2.0 license, so it's all open source and you can reuse it to whatever extent you want. Feel free to implement something or open pull requests; it's all on GitHub, so that's good.

What was the other thing? Yeah. As far as I know, for most of these implementations there are Docker containers you can run, so that's the start. We're working on making it easier to adopt; at the moment most implementers are still setting up their own infrastructures to get things running, so in the future there should be more things like deployment scripts.

Thank you very much. Thank you.