Hello and welcome, everyone. Transactional remote write, and why you should care. I am Harkishan Singh. I am from Bhubaneswar, which lies on the east coast of India. I contribute to Prometheus, and I work as a software engineer at Timescale, building and maturing Promscale, which is a PromQL-compliant, Postgres-based remote storage for Prometheus. Let's get started.

This presentation will cover the following topics. We'll start with remote storage in Prometheus, then understand the architecture of remote write today, then introduce the upcoming feature, transactional remote write, and finally look at where you can find more information.

Let me begin with what remote storage is in Prometheus. Prometheus allows storing data in a remote database. The remote storage provides advanced functionality like high availability and multi-tenancy, thereby making an environment which is resilient, scales better, and offers very long retention periods.

Let's see the architecture of remote write today. The Prometheus TSDB consists of the write-ahead log, or WAL, which is basically a sequence of Prometheus scrape events stored as records (series and samples), then head chunks and blocks. The TSDB contains a WAL watcher, which watches the most recent segment of the WAL and regularly streams the incoming records to the remote component of Prometheus. The remote component shards the data so that it can be sent in parallel to the remote database.

The limitations of the current approach on the remote storage side are non-atomic commits, inaccurate histogram evaluation, and sending metric samples and metric metadata in different requests.

Let's see non-atomic commits. Prometheus scrapes data from a target or an exporter and then saves it in the form of atomic commits, but the requests sent to the remote database contain data that is not in sync with the atomic commits Prometheus is making for its local database. So, as soon as the remote database gets the data, it commits it.
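To make the sharding concrete, here is a minimal sketch in plain Python. This is not the actual Prometheus code; the metric names and the crc32-based shard function are stand-ins. It shows how the four series of one scrape can be split across independent remote-write requests:

```python
import zlib

NUM_SHARDS = 3

def shard_for(labels: str) -> int:
    # Prometheus assigns a series to a shard by hashing its labels;
    # crc32 stands in for the real hash here.
    return zlib.crc32(labels.encode()) % NUM_SHARDS

# One scrape of a hypothetical histogram metric: four series, one timestamp.
scrape = [
    ("http_req_bucket{le='0.1'}", 5),
    ("http_req_bucket{le='1'}", 9),
    ("http_req_bucket{le='+Inf'}", 12),
    ("http_req_count", 12),
]

shards = {i: [] for i in range(NUM_SHARDS)}
for labels, value in scrape:
    shards[shard_for(labels)].append((labels, value))

# Each shard flushes its queue as an independent remote-write request;
# nothing in the request ties the samples back to one atomic scrape,
# so the remote database may commit the histogram piecemeal.
for i, batch in shards.items():
    print(f"shard {i} request: {[labels for labels, _ in batch]}")
```

Because each shard's request commits on its own, the remote database can briefly expose a half-written histogram, which is exactly the non-atomic-commit limitation described above.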
Understanding inaccurate histogram evaluation. Imagine you have a histogram metric with four series, and Prometheus shards them to send them in parallel to the remote database. Say the first parallel request contains the first and second series, the next contains the third, and the third request contains the fourth series. Now imagine a case where the first two requests have been sent, the third request is yet to be sent, and by then you execute a histogram quantile query both on Prometheus and on the remote database. The accuracy can be affected, because the same query is executed on two different databases and the remote database does not have the third request at this moment.

And now, transactional remote write. The main idea behind transactional remote write is to modify the remote write protocol to allow storage systems to commit scrapes atomically. Understanding a scrape: all data which is pulled at one time from a single target or exporter is a scrape. The challenge was that the current remote write system does not pass down enough information for the remote system to know which data belongs to which atomic commit. The solution was to send all the samples of a particular scrape in a single request, and if a particular scrape is very big, to break it down into different requests and send start and end markers accordingly, so that on the remote storage side the scrape can be rebuilt and then committed atomically.

The current status of the feature is the review phase. Once the concerned maintainers approve, implementation will begin.

Learning more about transactional remote write. If you want to learn more about transactional remote write, get more details, or see the discussions happening for this feature, you can go to this link, or in the Prometheus Dev mailing list you can search for this topic and see the actual discussion. Thank you very much.
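The start/end-marker idea from the talk can be sketched as follows. This is an illustrative model only, assuming in-order delivery; the field names (`scrape_id`, `start`, `end`) and the request shape are stand-ins, not the final protocol:

```python
def split_scrape(scrape_id, samples, max_per_request=2):
    """Split one scrape into requests tagged with start/end markers."""
    requests = []
    for i in range(0, len(samples), max_per_request):
        chunk = samples[i:i + max_per_request]
        requests.append({
            "scrape_id": scrape_id,
            "start": i == 0,                              # first part of the scrape
            "end": i + max_per_request >= len(samples),   # last part of the scrape
            "samples": chunk,
        })
    return requests

class RemoteReceiver:
    """Buffers partial requests and commits a scrape only when complete."""
    def __init__(self):
        self.pending = {}    # scrape_id -> samples buffered so far
        self.committed = []  # atomically committed scrapes

    def receive(self, req):
        buf = self.pending.setdefault(req["scrape_id"], [])
        buf.extend(req["samples"])
        if req["end"]:
            # All parts have arrived: one atomic commit for the whole scrape.
            self.committed.append((req["scrape_id"], list(buf)))
            del self.pending[req["scrape_id"]]

samples = ["bucket_le_0.1", "bucket_le_1", "bucket_le_inf", "count"]
receiver = RemoteReceiver()
for req in split_scrape("scrape-42", samples):
    receiver.receive(req)
print(receiver.committed)
```

Note that the receiver commits nothing until the end marker arrives, so a query against the remote database never sees a half-written histogram, unlike the sharded example earlier.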