Good morning, everybody. My name is Stanisław Dąbek, I'm the CTO of CloudFerro. I would like to welcome you to this webinar. Today we will be talking about building efficient cloud platforms for processing big earth observation data. We have with us Michael Schmidt from DLR, who is one of our core customers and users of such platforms. Michael will be talking about the implementation of the CODE-DE platform that we have built together with DLR. I also have my colleague Alexander Tsassas, who was the project manager on this project and on several other big earth data platform projects that we have built. First, a few words about our company. CloudFerro is a cloud service provider specialized in building and operating platforms for earth observation data. So today we will be sharing the best practices and findings that we have developed while building platforms for DLR, which is the CODE-DE platform; for ESA, which is the CREODIAS platform that we operate; and for EUMETSAT, which is the WEkEO DIAS platform. As for the agenda, first I will talk about the different challenges that we faced during the build-up of these platforms; these will be mostly technical challenges. Then Michael will follow with the customer point of view and how it looks from the perspective of a customer for whom we implement and operate such a platform. Then Alexander Tsassas will talk about how it looks from the project manager's point of view, about the implementation model, the architecture of the platform and the key platform elements. We will wrap up with a short summary and a Q&A session. So when we face the task of building a cloud platform for earth observation, we first think about the challenges that need to be addressed. First of all, why do we need an EO data platform? There are several reasons driving this need. Most of all, there is a lot of data. Earth observation is one of the massive big data problems nowadays. We are talking about terabytes of data being downloaded each day and petabytes of data to be managed. Then, on top of this, when you have satellite data, most of the time you have several institutions who need to use and process this data. These are different institutions with different needs, but their common need is to process these different sets of data in an efficient manner. One finding, from both our own studies and external data we came across, is that typical users who process EO data spend 80% of their time downloading and preparing the data for processing, not doing the actual science, not doing the actual exploration of this data. 80% of the effort is spent on downloading and acquiring data and putting it into a shape that is convenient for processing. So one way, and the way, to address these kinds of problems is to have a common big data platform that has a large common repository, puts the data at the users' fingertips and provides processing close to this data. We talked about the why; now about the users of such platforms. There are many different user groups, and each of these user groups has slightly different needs from such a platform. We have satellite operators, we have public administration, we have industry. What they share is that, first of all, they need the data. They need multiple data sets, optical and other types, and most of the time you need to cross data from different sources to do something useful with it.
So different data sets are needed, and current data is needed, so most of the time timeliness of the data is important. The data needs to be searchable: you need to be able to easily select the data you need from this sea of data, and you want this data to be available locally so that you can avoid downloading it. Then you need easy access methods. The heavy users need APIs to access this data, but more casual users need visual tools. The common ground for all of this is that users don't want to wait for downloads; they need the data at their fingertips. Then, once you have the data and access to it, you need infrastructure to process this data, and here the needs of the users are mostly expressed in terms of flexibility. Some users want to run virtual machines, some users even want to run physical machines. Most users nowadays want to have their applications packaged in containers, so they want to run Docker, Kubernetes and other container orchestration engines. The new trend, the new kid on the block, is function as a service: users don't want to take care of infrastructure at all, they just want their algorithm or function applied to a large dataset quickly and in parallel. So function as a service is becoming another popular way of exploiting the data. Then users very often want to use their own tools, their own processing chains, which they have built over a long period. They have invested in them, they want to run them at scale close to the data, and the platform needs to be flexible enough to allow this. Then users need tools and integration: very often the tools need to be integrated so that they are easy to access. And they need multi-tenancy and billing. If you have questions along the way, you can ask them in the chat. We have a colleague who collects the chat questions and we will answer them at the end, during the Q&A part. Okay, so now, what does a typical EO data platform look like? There is data ingestion: data is ingested from satellite data sources, so there is this ingestion and acquisition part. Then there is the storage part, the data repository and catalogs, the part that stores the data. And then there is the processing part, where the data is processed. Now, when we build such a platform, there are several challenges that need to be addressed. I will be talking about just a few of these challenges: huge data sizes, how to acquire and store efficiently data which is huge; then data access, flexible functionality, ease of use and scalability. So, getting to the meat of this presentation: huge data sizes. We have several tens of terabytes of data flowing down every day and a repository at petabyte scale; how do we put together a platform that is performant and cost-effective? First, for data storage we use a distributed, replicated storage cluster. This is pretty standard nowadays, and at this scale it is really the only way to go. Then we use object storage, not file systems, since this is a technology that scales much better; file systems have inherent limitations in terms of scalability. Since the repository is huge and the access pattern is mostly about accessing large data chunks, we need to optimize for bandwidth, not for IOPS. Generally, we don't need extreme performance out of such a large repository, or at least not extreme performance per petabyte.
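To illustrate this object-storage access pattern, here is a minimal sketch of reading just a byte range of a single file directly from an S3-compatible object store with boto3; the endpoint, credentials, bucket and key are hypothetical placeholders, not the platform's actual values.

```python
# Minimal sketch: bandwidth-oriented access to one file of an EO product
# stored on an S3-compatible object store. Endpoint, credentials, bucket
# and key below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example-eo-platform.eu",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Read only the first megabyte of a single granule file instead of
# downloading the whole multi-gigabyte product.
resp = s3.get_object(
    Bucket="eodata",                                   # hypothetical bucket
    Key="Sentinel-2/L1C/2021/06/01/PRODUCT/B04.jp2",   # hypothetical key
    Range="bytes=0-1048575",
)
chunk = resp["Body"].read()
print(len(chunk), "bytes read without fetching the full product")
```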
So we need to optimize for cost-effectiveness, not necessarily for very high IOPS performance. Then, at this scale, we need several features from the repository. We need high availability and we need scrubbing of the data, because the repository is so large that disks or other data carriers often break and the system needs to tolerate these failures: it needs to detect problems and repair them automatically. Then, for cost-effectiveness, we use low-cost standard hardware; nothing very fancy, just the standard equipment you can buy from any supplier. And we base all our development on open source software. This is good for cost-effectiveness, of course, but it also allows us to tweak and adapt the software very flexibly to our needs, which is very important. Then we need provider-grade networking; the best is to have such a system placed in a carrier-neutral data center with very high bandwidth. So this was about huge data sizes. Then data access. For data access, there are several patterns, findings and recommendations on how to store this data to make it easily accessible. First, we store the products unzipped. An EO data product is usually composed of several files, and it is very convenient for users to be able to access every individual file directly, or even parts of that file, without uncompressing the product. Then we need to provide different access mechanisms. Some users want to use object interfaces; others have legacy interfaces like NFS or CIFS and want to access the data through a file system, which is important for legacy users and legacy apps. So it is important to provide different access modes. Then the data products need to be provided in their original form, to ensure traceability. Users need to know what is within a data product and how it was obtained, what the parameterization was; all these details are important for many users. Another criterion is to be able to provide tiled access to the data, so users can access it with WMS and WMTS interfaces. We do this by generating the tiles on the fly from the original data products; of course, we cache recently generated tiles. Then we need to provide a homogeneous catalog service that allows users to find the data easily. We provide interfaces to that catalog service both through an API and through a web interface; a minimal query example is sketched below. Then it is important to provide events that can trigger processing automatically when new data arrives. Now moving on to functionality. Users expect processing capabilities that are generally similar to what they find in the leading public clouds. In order to provide this within a private cloud, the only way to go is generally open source, because it allows you to be flexible, to profit from the developments of a large open source community, and not to make yourself and your users dependent on a closed system that may be difficult to migrate away from. It also allows you to provide the functionality you need at a reasonable cost. Of course, the system needs to be open, flexible and upgradable. And what is really important is to provide the functionalities to users in a service form, so that these functionalities are consumable as a service. Now, ease of use. Different users have very different needs. Some of them are very professional IT or data experts, some are beginners.
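Coming back to the catalog access mentioned above, here is a minimal sketch of querying an OpenSearch-style catalog endpoint over HTTP; the URL and parameter names are assumptions for illustration, not the platform's documented API.

```python
# Minimal sketch: searching an OpenSearch-style EO catalog over HTTP.
# The endpoint URL and parameter names are hypothetical placeholders.
import requests

CATALOG_URL = "https://catalog.example-eo-platform.eu/search.json"  # hypothetical

params = {
    "collection": "Sentinel2",        # assumed collection name
    "startDate": "2021-06-01",
    "completionDate": "2021-06-30",
    "box": "5.9,47.3,15.0,55.1",      # rough bounding box of Germany (lon/lat)
    "cloudCover": "[0,20]",           # assumed filter syntax
    "maxRecords": 10,
}

resp = requests.get(CATALOG_URL, params=params, timeout=30)
resp.raise_for_status()

# Print the title and sensing start date of each returned product.
for feature in resp.json().get("features", []):
    props = feature.get("properties", {})
    print(props.get("title"), props.get("startDate"))
```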
It is important to provide all the functionality in an easy-to-use way, which includes providing access to the functionality through both an API and a graphical interface, and to closely integrate the different components. Many platforms for EO data processing are not well integrated and have different access modes and different systems for storing data and for processing it. We try to integrate everything as closely as possible, with a common login system, a common single sign-on and a common interface. Then ease of use is also about using the standards users already know; we use open standards everywhere it is possible. It is also important to provide documentation and, of course, support for the users. Support is extremely important: many things need to be explained to users, and they often need that support. The last challenge is scalability. Users often have an algorithm, or a need that is addressed by their software, and they need the platform to apply this software at scale. In order to provide this scale, we follow several recommendations. One is to be able to scale the data cluster and the processing separately. Another is to avoid bottlenecks between storage and the data processing part: when you scale the storage, design an architecture that also scales the bandwidth of access to the storage, so there is no bottleneck that limits this bandwidth. Another is to provide standard APIs to automate provisioning of the infrastructure, so that users can use their own tools, such as Terraform, to scale the processing. Orchestrators are another type of framework that allows easy scaling: the platform should be able to run things like Kubernetes, Mesos, Swarm or other orchestrators, which allow the processing to be scaled easily. And, last, it is nice to provide EO processing as a service, a functionality that allows users to process data at scale using either standard processors provided by the platform or custom processors provided by the customers themselves. With this, we are at the end of the challenges part, and I will switch the presentation over to Michael. Michael, please tell us how this looks from the customer's, from the users' perspective. Yes, I'm happy to. Hello, my name is Michael Schmidt, I'm from the German Space Administration at DLR. We can hear you. Thank you. Can you hear me? Michael, we can hear you. You can or you can't? Sorry, I assume everything's okay, so I'll start talking. We have Code.de as a platform, which was very kindly implemented for us. We have a certain need: the C in Code.de stands for Copernicus, so it is an access portal for Copernicus data and information, and you see in the background image the web presence that we have. Next slide, please. Code.de is intended to provide German users, in particular authorities and research institutions, but also companies who work for them, quick access to the Sentinel data and other satellite images. Code.de is part of the national strategy of the German government, so it is a governmentally funded project. Sorry, I just need to check my notes. So, we have specific needs for Code.de, one of which is that we need a national data access point for the Copernicus data as part of the European collaborative ground segment, meaning we want to enable users to get to the Copernicus data a little bit more quickly. Of course, users can go through the ESA hubs, but we wanted to have an additional national access point.
So, downloading is one thing that we wanted to provide, just sheer data distribution, and a searchable catalog, of course. Another thing was that we needed capacities for certain processors as part of a public cloud, let's say, and we wanted to have a processing environment that is private and secured through virtual machines. We also wanted a training and support module, and our timeframe was a bit tight: we needed the project, at least in a first version, up and running after six months, and then the full version after 12 months. And one of our key constraints was that we wanted the system to be user-friendly, easy to use and intuitive, with a modern look and feel. So, we had a Code.de Phase 1 project and we wanted to build on this, and CloudFerro came in for the second phase of our project. Next slide, please. So, there was a tender process involved, and some of our requirements are listed on the left. We wanted to keep the system we knew from before. We wanted GPU access for artificial intelligence applications. We wanted a monitoring system for the data. We wanted to host national mission data from the German Space Agency, as well as the Copernicus data itself, but also access to the Copernicus Contributing Missions. And we wanted what we call convenience products, data that are a little bit easier to digest for users; I'll show you examples in a few slides. The system that we wanted to implement with Code.de is free of charge, so we wanted a quota management system rather than a payment system. INSPIRE conformity for the data is a requirement for us, but also very key is to follow the BSI regulations for cloud security, BSI being the German IT security authority. We need a software management system and quality control, a user management system, and we want performance tests to be run continuously on the system. This is all part of the tender specification shown on the left, on which CloudFerro made an offer. Next slide, please. So, you saw on the landing page, on the front page of the website, that we have designed three parts of the website: one is called data, one is called processing, and the other one is help and support. The website is bilingual, German and English; these are the German examples now. I will guide you through some of these aspects in the following slides. Next slide, please. We are in a big data environment here, especially when dealing with satellite images. You will have heard about the Copernicus program, that there are lots of satellites there, and more coming. Currently, I believe, it is 150 terabytes of data globally, so that is another reason for the need for cloud computing. Next slide, please. And another big data set is the Copernicus services, where we have six different services, which are also quite data-rich and needed to be hosted and made accessible. Next slide, please. And there is another component in terms of data: we have our user data, and the BSI has certain regulations on it. These were described in our tender specification, and CloudFerro came up with a neat solution: CODE-DE is hosted in Frankfurt, in Germany, with partial replication of the data in the Warsaw zone. With that setup, we are in a good position to fulfil all our regulations. Next slide, please.
In terms of data accessibility on the website, this is a similar setup to what you will find on CREODIAS, I believe. We have the data browser, where on the left side are the Sentinel data: Sentinel-1, -2, -3 and -5P. A few other data sets are part of CODE-DE as well, like Landsat data and also CORINE Land Cover data. All these data are part of the WMS service, so they are easy to browse and digest, and you can select different band combinations and display options in a convenient way. That makes this part of the intuitive user experience, for us at least, very good, and we are quite happy with it. Next slide, please. For Germany, in terms of the data, we have defined a little box around Germany where we want to hold all Copernicus data for the entire time period since the collections began. And on top of that, as I said, we wanted some convenience products, so we said explicitly that we want Sentinel-2 data processed with the MAJA processor, which is one of the public processors, and a second collection from another atmospheric processor. Next click, please. There should be an image appearing: this is a monthly composite of MAJA-processed, atmospherically corrected images. These monthly composites are a convenience product that we wanted to have, and they were implemented for us. The next click, the next image, is a monthly composite of backscatter data from Sentinel-1 for the same area. So these are products that we wanted to provide to users for ease of use. Next slide, please. This is the finder for searching data, which you can of course also use in your operational environment through an API interface once you have found the data. Next click, please. We have organized the data in three different catalogs. One is the CODE-DE catalog, which I roughly described before in the browser environment. To have access to global Sentinel images, we have the linkage to the CREODIAS repository as a separate collection. And we have the Copernicus Contributing Missions, which can be downloaded from ESA itself. Next click, please. Within the CODE-DE catalog, for instance, you see the Sentinel imagery as well as the Copernicus DEM. There is a TerraSAR-X catalog, and the catalog holds over 50 more data sets from DLR, with further data available here as well. Next slide, please. In the portfolio on the website we have a data description for each data set; nothing surprising there, so you can look at that in the CODE-DE implementation. Next slide, please. I mentioned that we needed a quota management system. We have identified four different types of users and four different types of quota, which are managed through the help desk and then enable the users to use the virtual machines and the different flavors and operating systems. Next slide, please. Processing: I just mentioned the virtual machines already. Next click. Virtual machines are the private environments for users. Users can upload their own data, develop their processes and routines, and develop their products, which they can then also share with other people through a WMS service. Can you click further? And this is just an ls listing in a Unix terminal. The upper little window is the CODE-DE server, and, through another folder, the lower image shows the data in the CREODIAS repository, which is visible to our CODE-DE users with a single sign-on mechanism. So you can write your own code, you can just adjust your folder paths and you can use the data.
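To make this concrete, here is a minimal sketch, in the spirit of the Jupyter examples mentioned shortly below, of computing NDVI from two bands of a product visible under a mounted repository; the mount point, product path and band file names are hypothetical placeholders, and rasterio and numpy are assumed to be available in the environment.

```python
# Minimal sketch: computing NDVI from a Sentinel-2 product visible under a
# mounted data repository. The mount point and product paths are hypothetical;
# rasterio and numpy are assumed to be installed in the user's environment.
import numpy as np
import rasterio

PRODUCT = "/eodata/Sentinel-2/MSI/L2A/2021/06/01/EXAMPLE_PRODUCT.SAFE"       # hypothetical
red_path = f"{PRODUCT}/GRANULE/EXAMPLE_TILE/IMG_DATA/R10m/B04_10m.jp2"       # hypothetical
nir_path = f"{PRODUCT}/GRANULE/EXAMPLE_TILE/IMG_DATA/R10m/B08_10m.jp2"       # hypothetical

with rasterio.open(red_path) as src:
    red = src.read(1).astype("float32")
with rasterio.open(nir_path) as src:
    nir = src.read(1).astype("float32")

# NDVI = (NIR - RED) / (NIR + RED), guarding against division by zero.
ndvi = np.where((nir + red) > 0, (nir - red) / (nir + red), 0.0)
print("NDVI range:", float(ndvi.min()), "to", float(ndvi.max()))
```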
You may also see the Copernicus services there; the CLMS is hosted locally in Frankfurt, as we expected it to be heavily used, and the others are in Warsaw. Next slide, please. Just to show it, we heard before that Docker is installed; this is the hello world from the Docker world as implemented on CODE-DE and ready for our users to use. Next slide, please. And then we come pretty much to an end. Another installation that we have, another interface for users to access the Copernicus data, is the JupyterHub, where, without any further login other than the general CODE-DE login, users can explore the data. Next slide, please. Here is an example of a somewhat longer script showing how to display an NDVI image for a certain region. And the next slide should be my last one. Here is the website for CODE-DE, and if you have any further questions, you can send me an email or we can discuss them later on. Thank you. Thank you, Michael, for this customer point of view; I think it was extremely interesting. And now I give the microphone to Alexander Tsassas. Okay, so everyone, we have already heard the general, and in parts very specific, challenges and requirements for building a platform for processing and storing earth observation data. We have heard DLR's, or the German authorities', approach to building such platforms, what was important for them and what we have delivered. And now I want to give you a very short insight into how we went from a customer request to the point where we had a running platform in six months, which was a big challenge for the whole team, both at DLR and at CloudFerro. First of all, DLR had a nice and well-designed platform built in phase one, which offered a good web portal, data browser and processing environment to the users. It was operational, with several thousand users registered and using it, but it had its limitations, which we wanted to overcome together when building the phase two platform. So when we approached the project, we set ourselves some key design objectives, apart from the objectives Michael has mentioned as DLR requirements. We looked at what we had and decided that, first, we wanted to maintain the functionality, so that the users are not thrown into a completely new world where nothing they were used to doing is possible on the platform. Second, we wanted to migrate the user base, so the users can continue using the platform even though it is a new one. Next, we wanted to minimize the service interruption, mostly concerning the generation of the convenience products and additional information available on the platform, because the phase one platform was producing this data all the time, and it was required that once we switched over to the new platform, that data would still be available and new data would keep arriving as the production from the satellites comes in. The next objective was to modernize the system to use more recent technologies, because phase one was already a few years old; to extend the availability of analysis-ready data, that is, data which is not raw earth observation imagery but something preprocessed, with features extracted and indices calculated; to integrate it with the DIAS ecosystem, to make the best use of what has already been built by the Commission and ESA; and to make the design as feature-complete and future-proof as possible.
This way the users, the customer DLR and we ourselves can build on top of it when new requirements come. So we planned to work in six basic steps. First, a review of what was there and of the requirements; then mapping what was running on the platform and what was required into the architecture we have designed for earth observation platforms, and analyzing the data offer requirements and the requirements for integration with other sources and systems; then developing what needed to be developed and integrating it, populating the platform with data, migrating the users, launching the platform, and then beginning the real work, which is supporting the users and evolving the system we have built. A review of the documentation alone was not enough. There was, of course, plenty of good documentation on the existing design, but we also needed to understand a bit more deeply how various things work: how some data is generated, how some metadata or information in the system is built, what the processing flow looks like. So we even needed to dig into the code of CODE-DE phase one to understand what is critical for future development. The CODE-DE platform is built on something we have designed as a common architecture for ground segment data processing and storage sites. As Staszek has already said in his part, it starts with the acquisition and storage of data; then there is the processing of the data, on demand and systematic, that is, the creation of additional products from the satellite data; and then unified indexing and access to the data. These are three sections, or three parts, of such platforms which we think fulfill most of the requirements of ground segment data distribution and processing. So we took the CODE-DE phase one design and tried to identify similarities and things that could be easily mapped into our own design. First, we took the whole data acquisition, ingestion, processing and storage part and mapped it into our own ingestion system, making sure that the same metadata and the same information are collected and exposed to other systems. Then we looked at the available interfaces and made sure that the interfaces we deliver, such as WMS and data access, match. The diagram might not be readable, I am aware of that, sorry; in the version we will share you will be able to read all those small letters, so believe me that these are similar things, marked in red, blue and green. The user interface was also a common part; of course, we needed to redevelop or redesign it to answer the requirements. And the part that experienced the biggest change was the processing part, because phase one CODE-DE used a partially closed cluster environment for processing and some public cloud capacity for additional user processing, but it was not flexible enough. So we adapted it to our design, which uses a common cloud infrastructure for both the processing of data and the users' environments. Looking at the data, Michael has already talked about the data offer. We wanted to make sure that everything that can be offloaded to the big storage in Warsaw, the CREODIAS storage, and is not critical for Frankfurt, is still accessible to the CODE-DE users. That gives the users the ability to access worldwide data, but without the need to occupy a lot of storage at the CODE-DE main site in Frankfurt.
This increases the efficiency of data use a lot, because, as you can expect, users of CODE-DE are interested mostly in Germany, but from time to time they need to access global data. Second, we needed to develop some custom components which were specific to CODE-DE, and thanks to our design, which is modular and scalable, we could do that. So we developed custom processors that create the convenience products Michael has talked about (a simplified sketch of such a compositing step follows below), and we also created some additional functionalities, like getting the data from the Copernicus Contributing Missions, and of course we developed or redeveloped the user interfaces so they are adapted to German users and focused on the area of Germany. In the end, we created a set of user interfaces, including the portal, the browser, the finder, the cloud management dashboard and, last, the data cube management interface. Once we had the software and the data processed, or ready to be processed, and, importantly, thanks to using the same architecture, we were able to very quickly process the whole spatial and temporal coverage of Germany using the big capacity of the cloud in Warsaw, even before we had installed the servers that the CODE-DE platform is now using. We have deployed storage which is altogether more than two petabytes of available data space for both the repository and the user storage, and currently more than two terabytes of available RAM for the user cloud processing environments and for the platform's internal processing. Once the system was deployed, the users migrated, which we managed to do overnight: the old platform was disconnected on the 31st of March and the new platform was fully operational from the 1st of April this year. The users started coming, and currently we have 1,300 active users, with almost 4,000 users registered in the system. However, some of the users of the previous platform have not yet decided to use the new one; we hope they will come back. Data ingestion continues, we add new collections, we provide support in German and English, with almost 300 support tickets already served, and every month brings a new feature or functionality to be implemented. What are the lessons learned from what we have been doing? First of all, we have confirmed that the design we prepared and the solutions we built to answer the challenges of earth observation processing platforms have proven themselves both in greenfield deployments, which we have done before, like the previous platforms and currently the LTA platform, and in a brownfield migration, in which we could not design everything ourselves but needed to adapt to the design and functionality that were already there. That means the solution we have built and the tools we have created are flexible enough to accommodate most of the needs of the users with, to be honest, very little modification required, as we were able to do it in six months altogether.
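To give an idea of what such a convenience-product processor does, here is a deliberately simplified sketch of a monthly composite computed as a per-pixel median of co-registered scenes; it is not the actual CODE-DE processor, and it ignores reprojection, cloud masking details and tiling that a real processor must handle.

```python
# Simplified sketch of a monthly composite: take the per-pixel median of
# co-registered scenes from one month. Cloudy pixels are assumed to have
# been masked to NaN beforehand; reprojection and tiling are ignored here.
import numpy as np

def monthly_composite(scenes):
    """scenes: list of 2-D numpy arrays of identical shape, NaN where masked."""
    stack = np.stack(scenes, axis=0)
    # nanmedian ignores masked (NaN) pixels, so clouds in one scene
    # do not contaminate the composite.
    return np.nanmedian(stack, axis=0)

# Toy usage with random data standing in for real, co-registered scenes.
rng = np.random.default_rng(0)
fake_scenes = [rng.random((512, 512)) for _ in range(8)]
composite = monthly_composite(fake_scenes)
print(composite.shape)
```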
The standard tools we have built have served all the needs of earth observation data processing, because what the platform can do is get the data from the source, which can even be very raw data from the satellite, process it in real time into any kind of product, then store it, extract any information or feature and present it to the user, with the ability for the user to process it locally and publish the results further. So it is a whole chain, from the acquisition of the information from orbit up to the use of that information in an easy-to-understand form by the end user. Also, the cloud-based processing is suitable both for systematic data processing, so the same infrastructure and the same computing power can be used to generate products systematically, creating whole data sets and collections like the German mosaics, for example, and at the same time it gives users the ability to deploy their own containers and do whatever they want (a minimal sketch of such a container run follows below). And, last but not least, the open source solutions we have chosen, adapted and at times fixed have proven to be reliable and scalable for large-scale deployments. The CODE-DE cluster is growing; we have already consumed most of it, and we will be planning further upgrades together with DLR. The Warsaw zone, as we call the cluster available in Warsaw, has already passed the threshold of 20 petabytes of data stored in the repository, and I do not even remember how many virtual cores we make available to the users. What is also nice is that the public has recognized the effort we made and the work we have done together with DLR: we were awarded for this work with the Polish-German Economic Award and awards from the Polish Development Agency. So it is not only an internal success; we are happy that we made it, that it works and that the users are happy to come and use it, but also that other experts think we did a good job.
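To make the container-based user processing mentioned above concrete, here is a minimal sketch, using the Docker SDK for Python, of running a processing container with a data repository mounted read-only; the image name, command and mount paths are hypothetical placeholders, not the platform's actual configuration.

```python
# Minimal sketch: running a user's processing container with the data
# repository mounted read-only. Image name, command and mount paths are
# hypothetical; requires the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

output = client.containers.run(
    image="registry.example.eu/my-eo-processor:latest",    # hypothetical image
    command="process --input /eodata --output /results",   # hypothetical CLI
    volumes={
        "/eodata": {"bind": "/eodata", "mode": "ro"},       # hypothetical data mount
        "/home/user/results": {"bind": "/results", "mode": "rw"},
    },
    remove=True,
)
print(output.decode())
```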
So thank you, Alex, for this. We will be wrapping up; we have a lot of interesting questions that were asked in the chat. Before we proceed to the questions, I will say a few words about current and future development, so this is what you may expect from us in the coming months. We are working on a processing-as-a-service, serverless processing functionality for the platform, since we think it is very convenient for users to process in this mode; it will be function-as-a-service oriented and designed for earth observation data processing. Another thing, which is already operational on the platform, is the earth observation smart cache for data: you can, for instance, download and acquire high-resolution data from external platforms and have that data stored in a common cache, which currently holds over a petabyte. This data cache is free for users; by free we mean that users who use virtual machines or other processing on the platform may make use of this cache at no charge. Another thing we are working on is better access to very high resolution data, with new data providers coming in, and another area we are working on is dynamic data cubes. So this is the current and future development, and from this I will move quite smoothly to the Q&A part. We have got many questions; we will try to answer all of them, and if we don't answer them right now, we will answer them by mail or offline with the users who posted them. I will just pick a few questions that came up. One of them is about the modern functionalities I was talking about during the presentation, such as Kubernetes, Swarm and so on. Some of these functionalities are already on the platform. I mean Terraform, which the question was about: you can use Terraform with the OpenStack connector right now on the platform, and we indeed have a few customers who use it extensively, doing dynamic provisioning. Kubernetes and Swarm can be deployed manually right now, but that is inconvenient for users, and a supported version of Kubernetes will be coming by the end of Q1 next year. In order to do this, we need to upgrade the OpenStack version we are running now, so we will be doing precisely that, and we hope to make this available on the platform. Another question that was asked was about the kind of infrastructure that can be used to process these sizes of data. This question was partly answered by the slide that Alex has shown, precisely this one. This is generally how we build our infrastructure: we use large storage nodes with huge 8 or 10 terabyte disks, 30 of these in one node, and for the storage cluster we use Ceph, which is an open source distributed storage system. Within the largest infrastructure we run, which is CREODIAS, we have over 160 storage nodes right now, so it is really a huge infrastructure. Another question that was asked was about the catalog. Yes, you can answer this one. Yes, to be honest, we are using multiple cataloging tools. The principal one is our own custom-built database, which allows us to manage all those acquisition processes and the data management, and for storing and serving the metadata we are using a tool called resto, which provides an OpenSearch-compatible API. There are several others for other purposes, but these are the main ones. Yes, and resto is open source, however heavily modified by us.
Yes, exactly, heavily modified by us. And I think we had a question that we would like to answer, regarding the DEM data in CODE-DE. We have several DEM collections: there is a DEM whose exact resolution I do not remember offhand, and the Copernicus DEM, which comes at 30 and 90 metres depending on the sub-collection. You can go to the portal and look at the data offer; there are detailed descriptions of the data sets there. And there was a question for you, Michael: do any German public administration entities have some kind of special access to CODE-DE? If you would be so kind as to answer this. There is no special access in that sense; the same rules apply to everyone. The German administration can apply for certain quotas and use them, but other than that it is just the normal website and the API interfaces, so no special treatment there. Okay, thank you. I believe I have already answered the question regarding the product catalog. How long did it take to develop CODE-DE phase two? As I said, six months from kickoff to launch. The next one is an interesting question: how can user-generated value-added products be made accessible to others, including visualizations? There are at least several paths. First, the virtual machines and the cloud environment are connected to the internet, so the user is free to set up any kind of data publishing application or utility and share the results with the world or with other users. Second, all the data that is properly structured and can be properly indexed can be published on the CODE-DE platform itself, as part of the data offer. This requires working together, but it is possible, and it is already being arranged with several parties. I don't know how many more questions you want to answer; I think we are running a bit out of time. Okay, so we will collect the remaining questions and answer them for all of you. To wrap up and summarize: we are very glad you came in such numbers to this webinar, and we hope it was interesting for you. If you plan on doing any more research on this, or especially if you plan on setting up an earth observation data processing platform, if you have such a project, we will be happy to discuss it with you and share our insights. You can contact us and we will set up a call to discuss those aspects. So I will finish here and hand over for the end of this presentation. Thank you once again. Okay, so thank you very much for attending the webinar. Thank you very much, especially Michael, for joining us and sharing your thoughts and your presentation with us. I hope having you here gave our participants a more complete view, not only of the technology and the platform, but also of the needs and requirements users or customers may have. Would you like to say something? No, I was happy to be part of it, thank you. Thank you very much. It was a great pleasure to host all of you during the webinar; I hope you liked it. Please contact us by email or visit our website, visit the CODE-DE website, and I'm pretty sure the DLR staff will also be happy to talk to you about your ideas and needs. So please visit CODE-DE, please visit CloudFerro,
and I hope we will be able to help you in the future. Thank you.