Hey everyone. In this presentation, we will discuss migrating Prometheus data between different storage systems. Before we start, a brief introduction about me. I am Harkishen Singh. I am an active open-source contributor to Prometheus upstream. I live in Bhubaneswar, India, and I work as a software engineer at Timescale. I am a maintainer of Promscale, a high-performance Prometheus data storage built on top of Postgres.

That's about me. Let's begin with the topic. Our motivation for developing a migration tool for Prometheus was that, at present, there are 27 officially listed remote storage systems for Prometheus, yet there is no good way to migrate between them. There can be many reasons for migrating Prometheus data: privacy, high cardinality, scalability, and so on. The lack of a proper migration tool leaves users with bad choices: they are forced to throw away the data in the old system, run two systems in parallel (the old and the new), or forgo any change at all. This is an example of vendor lock-in. That's why we created prom-migrator.

prom-migrator supports a wide range of Prometheus-compliant remote storage systems. Here is a compatibility table of different storage systems with prom-migrator. The limitations shown are those of the respective storage systems, not of prom-migrator. Please note that by backfill, we mean pushing data to a storage system that already contains data newer than the data being pushed.

Let's see how it works. Consider a migration scenario: we want to move data from the storage on the left to the one on the right. prom-migrator pulls data in the form of consecutive slabs. Each slab covers a time range, and this time range grows by a minute with each consecutive slab. As you can see, after pulling a slab, prom-migrator pushes it to the storage on the right while, at the same time, pulling the next slab from the storage on the left. This is how data is migrated in prom-migrator.
Data migrations can range from a few megabytes to several petabytes, so prom-migrator knows that migrations can be memory intensive. For this reason, prom-migrator aims for a target memory usage, striking a balance between the speed of the migration and the utilization of memory. It follows an additive increase of the time range when below the target memory region and a multiplicative decrease when memory usage exceeds the target region. When within the target region, the time range remains constant.

Let's understand this better with the graph. We start with slabs of one minute and keep increasing the time range. As we reach the target region, the time range stays constant. If we exceed the target region of memory usage, we do a multiplicative decrease, halving the time range (to five minutes in this example), and then we again aim to be inside the target region.

prom-migrator can gracefully restart after a failure or interruption. The ability to restart is achieved by pushing the max time of the last slab as a separate time series. This max time is fetched when the migrator starts the next time, which may be after a crash, and it is treated as the starting point of the current migration. With this, we achieve a completely stateless working model that tracks progress incrementally and can resume the migration in case of a failure or interruption during the migration process. This model has better control over memory at runtime and can migrate faster by concurrently pulling and pushing data.

For more details, we have links to our demo video, design doc, GitHub repository, and README. If you want to try out the tool, please visit the download page shown here. For more written information, see the prom-migrator README. Thank you very much for your kind attention.
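The additive-increase/multiplicative-decrease scheme described above can be sketched as a single decision function. This is an illustrative assumption, not prom-migrator's actual code: the function name, the per-step increment of one minute, and the specific target region are all made up for the example, while the halving on overshoot matches the behavior described in the talk.

```go
package main

import "fmt"

// nextSlabRange adapts the slab's time range (in minutes) to the current
// memory usage: additive increase below the target region, multiplicative
// decrease above it, constant inside it. Constants are illustrative.
func nextSlabRange(rangeMin, memUsage, targetLow, targetHigh float64) float64 {
	switch {
	case memUsage < targetLow:
		return rangeMin + 1 // additive increase: grow by one minute
	case memUsage > targetHigh:
		return rangeMin / 2 // multiplicative decrease: halve the range
	default:
		return rangeMin // inside the target region: keep it constant
	}
}

func main() {
	r := 1.0
	// Simulated memory readings (MB) against an assumed 400-500 MB target region.
	for _, mem := range []float64{100, 200, 300, 450, 600, 450} {
		r = nextSlabRange(r, mem, 400, 500)
		fmt.Printf("mem=%3.0f MB -> slab range %.1f min\n", mem, r)
	}
}
```

The asymmetry is deliberate: growing slowly probes for spare memory, while halving on overshoot backs off quickly before the process risks running out of memory, the same trade-off AIMD makes in TCP congestion control.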