Thank you very much for introducing me. My idea, why I came to FOSDEM, is to let you know that Postgres, which is a very good relational database, can also be used as a NoSQL database. I know that not many people understand this. And in this short talk I try to encourage you to use Postgres as a NoSQL database in your projects. You know the elephant, the totem animal of Postgres? So this picture means that Postgres meets JSON, and JSON is the main part of NoSQL. So, about myself: I am a Postgres major contributor. I started with Postgres in 1995, and I am an astronomer in Moscow. I am also the CEO of the Russian Postgres-centric company Postgres Professional. We offer support, we develop Postgres, and so on. My main contributions to Postgres include full-text search, hstore, JSONB, and more. So, the main points of the talk: NoSQL is a major feature of Postgres — it's not a fresh feature, it's a major feature. NoSQL Postgres is fast. NoSQL Postgres has a good roadmap. And finally: all you need is Postgres. So if I run out of time, you already know the content. Postgres is cool — very simple. We have so many forks and extensions, open source and commercial. They are available because Postgres has a very liberal open-source BSD-style license. It has extensibility built in, it's very friendly to forks, and it has very good source code, so people enjoy developing their own features in their forks. And all together they create the ecosystem of Postgres. This ecosystem covers a wide range of applications, from OLTP and MPP up to cloud services, GIS, streaming data, time-series data. We even have support for GPUs. And in a couple of days, in Moscow, we will present a new database based on Postgres.
It's Credereum, a blockchain-enabled Postgres. With this blockchain-enabled Postgres you can verify — you can prove — that your data is not compromised. So Postgres is very attractive for new ideas and new developers, and you are welcome to join our team. Postgres is mature. Actually, my colleague and I developed the hstore data type, which is a binary key-value storage with index support, in 2003. In 2006 it became a part of PostgreSQL. Around the same time JSON appeared as a standard, and a few years later Postgres got a JSON data type — a native data type. It was a textual data type, but Postgres was the first relational database which understood the importance of JSON. And two years later, in 2014, we developed JSONB. Some people say it is binary JSON; I would say it is a better JSON, because it has a lot of functionality: binary structure, indexing, and so on. You see that hstore is the blue line and JSONB is the red one — Google Trends shows a clear picture that JSONB has become very, very popular. So in Postgres we have two JSON data types. Just understand: json is textual storage, as is. Just textual storage — not good for processing; it's good for storing and retrieving. Binary JSON, or better JSON, is binary storage with index support, and this is what I recommend using — unless you need duplicated keys, or you need to preserve the order of object keys, and so on. In practical cases you need jsonb. The SQL standards committee finally recognized the importance of JSON, after eight years. At the end of 2016 they released the new SQL:2016 standard, which now has JSON data handling in SQL. So you see, the SQL standard recognizes JSON. It describes the data model, the SQL/JSON functions, and the path language, which is the most important part. They introduced the path language, which gives you the ability to navigate through the JSON structure and select parts of it. This is a very, very important feature.
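A minimal sketch of the difference between the two types, using stock Postgres behavior (the table and column names below are made up for illustration):

```sql
-- json stores the input text verbatim: duplicate keys and key order survive
SELECT '{"a": 1, "a": 2}'::json;
-- result: {"a": 1, "a": 2}

-- jsonb parses into a binary form: the last duplicate key wins
SELECT '{"a": 1, "a": 2}'::jsonb;
-- result: {"a": 2}

-- jsonb supports GIN indexing for containment (@>) and key-existence queries
CREATE TABLE docs (doc jsonb);               -- hypothetical table
CREATE INDEX ON docs USING gin (doc);
SELECT * FROM docs WHERE doc @> '{"a": 2}';  -- can use the GIN index
```

This is why jsonb is the practical default: the parsed binary form can be processed and indexed without reparsing the text on every access.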
And in Postgres, we started development of this SQL/JSON standard here in Brussels, one year ago. We use no new data types for the documents — we use the data types we already have, json and jsonb. And we introduce one new data type, jsonpath: a data type for the SQL/JSON path language. Plus nine functions for constructing JSON objects and retrieving data, and of course we add some more methods and operators for jsonb and jsonpath. The SQL:2016 path language specifies the parts — you may say the projection — of the JSON data to be retrieved by the path engine. This data goes to the SQL/JSON functions, which process it and return it to the user. So jsonpath is a binary data type for SQL/JSON path expressions, and we use it to query JSON data efficiently. Here is an example of the JSON_QUERY function, which uses a filter expression in the jsonpath language — I don't know if you can see this. So $ denotes the context item; actually, it's the JSON document. Then you can specify any member of the array floor. And this question mark is a filter: we filter all floors which have a level greater than one, then we choose any apartment which satisfies this filter. And you see that we use variables, which can be passed from the function — here passing 40 as the minimum. A very, very useful, very flexible language. It allows you to manipulate JSON data almost without limits; you have a lot of freedom. Here is an example — okay, you don't see the example, but believe me, this is a JSON document which describes a house with two floors. Here, maybe better to see: this is a visual representation of the JSON. You have floors, you have apartments, you have metadata of apartments. For example, if you want to choose apartments of floors from 0 to 1, and apartments from 1 to the last — you see, the green is the result of this jsonpath expression. From the whole structure, from the whole JSON, you easily select some sub-parts, sub-branches.
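In released Postgres (12 and later), the path engine described here is exposed through functions like jsonb_path_query. A sketch of the kind of query from the slide, with an assumed document shape based on the talk's house example (table, column, and field names are assumptions):

```sql
-- Select apartments on floors above level 1 whose area exceeds a passed-in
-- minimum. $ is the context item; ? (...) are filters; $min is a variable.
SELECT jsonb_path_query(
         house,
         '$.floor[*] ? (@.level > 1).apt[*] ? (@.area > $min)',
         '{"min": 40}')
FROM houses;
```

The third argument is a jsonb object of variables, so the same compiled path expression can be reused with different parameters.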
And another example is more difficult, more complex. It includes two filters, and the result is just one piece of metadata, the number of the apartment. This is the example you don't see. And very important is JSON_TABLE. JSON_TABLE is similar to XMLTABLE: it allows you to present JSON data as a relational table, so you can join JSON data with relational data. This is very important, very flexible. SQL/JSON supports indexing — the same indexing as for jsonb. But we also use jsonpath to specify which part of the JSON to index. You know, JSON can be very greedy; it contains a lot of garbage, a lot of data you don't want to index. With jsonpath you can easily specify which part of the JSON data you need to index. So we are working on a patch for opclass parameters, so that you can specify parameters for indexes. In the case of jsonpath it's called a projection: we specify which part of the JSON we want to index. And the result is very clear: 33 megabytes versus 292 megabytes. Before, you had to index the whole JSON; now you can specify which part, and you can even specify several paths to index. We are working on this patch, and hopefully it will go into PG 11, and certainly into PG 12. And SQL/JSON availability: currently it is under review for PG 11. I hope that Andrew Dunstan will finish the review — it's a very big patch. You can already play with SQL/JSON using a web interface we created; you can try different queries and so on. And the technical report is available for free: the SQL standard you have to buy, but the technical report you can download. Compression. You know that jsonb is a fat data type? Keys can be very long — in jsonb, a key can be up to 256 megabytes long. And people actually love that; they use very long, meaningful keys. If you have long keys and some short values, you get a lot of overhead, you know. So dictionary compression helps.
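JSON_TABLE as specified in SQL:2016 eventually landed in Postgres (version 17). A sketch against the same assumed house document, plus the indexing idea expressed with what is available today (all names are assumptions, not from the talk's slides):

```sql
-- Flatten the nested house document into an ordinary rowset; the result
-- can be joined with relational tables like any other table expression.
SELECT jt.level, jt.apt_no, jt.area
FROM houses,
     JSON_TABLE(house, '$.floor[*]'
       COLUMNS (
         level int PATH '$.level',
         NESTED PATH '$.apt[*]' COLUMNS (
           apt_no int PATH '$.no',
           area   int PATH '$.area'
         )
       )) AS jt;

-- The jsonpath-projection index from the talk is a patch; a comparable
-- effect today is an expression index over just the sub-part you query,
-- instead of a GIN index over the whole document.
CREATE INDEX ON houses USING gin ((house -> 'floor'));
```

Indexing only the queried branch is what produces the size difference the talk mentions: the index stores entries for one sub-tree instead of every key and value in the document.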
And for PG 11, a custom compression API is now under review, and on top of it we created a dictionary compression for JSON. Dictionary compression helps, really helps. Here we compare table sizes: the green is WiredTiger, the snappy compression for MongoDB — that is, compressed JSON. Then compressed jsonb, uncompressed jsonb, uncompressed json, and the relational equivalent of the JSON. And we see that compressed jsonb and the relational equivalent look very close, so compression really works. It's not as good as MongoDB's JSON compression, but it's still okay. The red is a compressed file system for Postgres — our proprietary product — and it certainly compresses much better. Okay. NoSQL Postgres is fast. Fast because we ran the YCSB benchmark — a benchmark for NoSQL databases — on a very big machine, and we got the result that in most practical cases Postgres is faster than MongoDB. The performance of Postgres degrades under high-contention writes. This is a known problem, but you hit it only when you have a high number of backends, more than 100. [Answering a question:] No, no — Postgres uses jsonb, our stuff. So for the practical case — for example, if your server has no more than 48 cores — you have no problem at all. The problem appears when you have, as in our case, 72 cores. Okay, time is up. Here is how we solve it: built-in connection pooling, and so on. And this is the roadmap. We started from hstore: a high-performance data type, but low on standards. Then we went to json: low performance, high standard. Then jsonb: high performance, good standard. And now we are going up, up, up: we are following the SQL standard while preserving performance. And the last slide is: all you need is Postgres. Thank you very much.
[Moderator:] Questions go directly to the speaker, offstage. [Speaker:] Yeah, I'm here. Thank you.