Let me introduce myself. My name is Dennis Magda; presently I am a product manager at GridGain, and at the same time I am the PMC chair of the Apache Ignite project at the Apache Software Foundation. So what's the relationship between Apache Ignite and GridGain? Apache Ignite is an open source solution, available to everyone for free. At the same time, we at GridGain provide an enterprise version of the product: GridGain is built on top of Apache Ignite, adding a set of enterprise-level features, professional support, and maintenance releases, everything that is needed for big production use cases. But in general, a lot of people who come to us are fine starting with Apache Ignite.

Okay, so how is this talk relevant to you as Postgres users, DBAs, and developers? This is what we are going to answer throughout this conversation, but let me start with one of the use cases. Postgres is designed to run on a single machine, and presently a lot of companies are trying to solve the so-called horizontal scalability problem. When the amount of data grows exponentially and your single-server Postgres database can no longer handle it, you need to do something about it: you either scale up, purchasing a more expensive machine, or you scale out. Apache Ignite is one of the options you might consider for your production use cases.

Apache Ignite is an in-memory data fabric. In general, it is a distributed key-value storage: a cluster of machines interconnected with each other, where every machine stores a specific subset of the data, in other terms a specific shard or partition of the data. On top of this cluster of machines, we support a variety of different APIs, like computations and basic key-value queries, plus, what should be interesting to you as Postgres users and developers, our Apache Ignite SQL Grid component, which is an ANSI SQL-99 compliant distributed SQL engine.

I want to spend about 20 minutes covering the theoretical part of the SQL Grid, and then I plan to switch to the practical part and show you a simple demo. I have a Postgres database pre-installed on my laptop, and the plan is to connect to this database, import a schema with all the data into the Apache Ignite cluster, and execute SQL queries on top of this cluster of machines, which will be running locally on my laptop as well. Later on, we will review how distributed queries are executed in Apache Ignite and how we support distributed DML operations like updates, inserts, and deletes. It might also be useful for you to know which management and visualization tools are available for Apache Ignite. Finally, we will wrap up our discussion with the demo and with the roadmap and plans for Apache Ignite, and for the SQL Grid component in particular.

So, let's kick off our discussion. This is a brief overview of what the Apache Ignite in-memory data fabric is. As I said, it's a combination of different building blocks. The first one, the foundational one, is advanced clustering: relying on this component, your distributed machines can auto-discover each other across the network and form a single cluster.
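To give a feel for how little ceremony this takes, here is a minimal sketch of starting a node from Java; only the ignite-core dependency is assumed. With the default configuration, nodes started this way discover each other over multicast and form a single cluster automatically.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class StartNode {
    public static void main(String[] args) {
        // Start a server node with the default configuration. Run the same
        // program on several machines in one network segment and the nodes
        // will find each other and form a single cluster.
        Ignite ignite = Ignition.start();

        System.out.println("Nodes in cluster: " + ignite.cluster().nodes().size());
    }
}
```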
Once this happens, you can leverage our data grid component, which sits above. What the data grid component does is basically shard, spread out, all the data uniformly across all the machines that are available. The data grid and the advanced clustering component together also take care of high availability and fault tolerance. If your cluster state changes, for instance if one of the nodes goes down or a new participant node joins the cluster, the data grid, along with the clustering component, will automatically accept the new participant and rebalance the data, so that the nodes seamlessly store an equal amount of data across all the machines you have.

Next, on top of this data grid, you can execute computations. Our compute grid is our own implementation of a distributed MapReduce framework. The service grid is needed if you build a microservices-based architecture and want us to take care of load balancing, lifecycle, high availability, and fault tolerance of your services: we will deploy them across your Apache Ignite cluster, and you just need to provide us with your implementation. Streaming is also something you usually find in in-memory data fabrics; it's the ability to ingest and process data in real time. As for the file system, it is interesting for Hadoop users: our file system is HDFS compliant, meaning that after spending some time configuring our in-memory file system, you can plug it in between your HDFS cluster and your HDFS-based application and get a performance boost, because a portion of your data will be cached in memory.

And the main topic of our talk for today is the SQL Grid, which is essentially a component that gives you the ability to use well-known and familiar SQL queries that are executed across the cluster of all your machines. This component, without any exception, guarantees to return a consistent and valid result set, no matter where the data is and whether the cluster is stable or not. It's something that we do care about.

Take a look at this picture. How does the SQL Grid work? Look at the very bottom: here we have our cluster of machines, a couple of servers. Every server, as I said, is responsible for a subset of the data you have, and at the same time every server maintains indexes for this subset of the data. This is the data layer, your storage; think of it as your Postgres database, the place where all the data is located. As a Postgres user, you usually tie Apache Ignite to the database below this layer, meaning that if you need to persist your data to disk, to the Postgres database, then whenever an update happens in memory it is automatically propagated to Postgres, so that the data stored in memory and the data on disk stay consistent. This is something that we do on our own; you just need to enable special interfaces in our implementation, that's it.
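To make those "special interfaces" concrete: the persistence wiring boils down to a cache store plus two flags on the cache configuration. Here is a minimal sketch, assuming a hypothetical Postgres world database with a city(id, name) table; in practice Ignite also ships a ready-made JDBC POJO store, so you rarely write this by hand.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

// Maps a "city" cache onto a Postgres city(id, name) table.
public class CityStore extends CacheStoreAdapter<Integer, String> {
    private Connection conn() throws SQLException {
        return DriverManager.getConnection("jdbc:postgresql://localhost/world");
    }

    // Read-through: called on a cache miss.
    @Override public String load(Integer key) {
        try (Connection c = conn();
             PreparedStatement st = c.prepareStatement("SELECT name FROM city WHERE id = ?")) {
            st.setInt(1, key);
            ResultSet rs = st.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        }
        catch (SQLException e) { throw new RuntimeException(e); }
    }

    // Write-through: called after the in-memory update is applied.
    @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry) {
        try (Connection c = conn();
             PreparedStatement st = c.prepareStatement("UPDATE city SET name = ? WHERE id = ?")) {
            st.setString(1, entry.getValue());
            st.setInt(2, entry.getKey());
            st.executeUpdate();
        }
        catch (SQLException e) { throw new RuntimeException(e); }
    }

    @Override public void delete(Object key) {
        try (Connection c = conn();
             PreparedStatement st = c.prepareStatement("DELETE FROM city WHERE id = ?")) {
            st.setInt(1, (Integer) key);
            st.executeUpdate();
        }
        catch (SQLException e) { throw new RuntimeException(e); }
    }

    // The two flags that turn the store on.
    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("city");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CityStore.class));
        cfg.setReadThrough(true);   // cache misses fall through to Postgres
        cfg.setWriteThrough(true);  // in-memory updates propagate to Postgres
        return cfg;
    }
}
```

The asynchronous variant mentioned later in this talk, write-behind, is enabled on the same configuration with setWriteBehindEnabled(true) and batches updates to the database instead of applying each one synchronously.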
On top of this, the upper layer is your application. It can be a Java application, a .NET application, Python, PHP, whatever you like to work with. And how do applications interact with the cluster of machines? For languages like Java, .NET, or C++, we developed native libraries, so, talking about SQL queries, you just need to use our JAR file or the .NET DLL and execute SQL queries over the interfaces provided there. But let's say you are a Python, Ruby, or PHP user, or your existing Java application is based on a JDBC driver. Then you can leverage our JDBC driver or our ODBC driver, which is actually exciting, because presently you only need minor modifications to migrate a JDBC- or ODBC-based application from, let's say, Postgres to Ignite, and in the future we plan to eliminate even those modifications.
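As an illustration of how thin that layer is, here is a minimal JDBC sketch. The thin-driver URL format (jdbc:ignite:thin://) is the one recent Ignite versions provide; the city table is an assumption carried over from the demo later in this talk. Everything past the connection string is plain JDBC, which is exactly the point.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcClient {
    public static void main(String[] args) throws Exception {
        // Only the URL differs from a Postgres connection.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT name, population FROM city ORDER BY population DESC LIMIT 10")) {
            while (rs.next())
                System.out.println(rs.getString(1) + ": " + rs.getLong(2));
        }
    }
}
```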
A bit more detail about our SQL Grid. As I mentioned, the engine is ANSI SQL-99 compliant: aggregations, GROUP BY, and the standard functions are supported out of the box without any exception. At the same time, we support joins, including cross joins, and we can join data located across the cluster on different nodes; I will give more details on how we do this in a moment. If we go deeper into how we achieved this, how we became an ANSI SQL-99 compliant system: internally, Apache Ignite is integrated with the H2 database, and we use H2 only for the sake of query execution plan generation and other optimizations related to the execution plan. H2 itself is a single-machine database, meaning you can't make it a distributed system on its own. What we did was take H2 and build a distributed SQL engine relying on H2 and on our own internals. And having this distributed system, we can guarantee that if you execute queries at a time when your cluster is unstable, meaning some of the nodes are leaving the cluster, a query will, without any exception, return a consistent result, at least as long as the remaining nodes still hold the subset of the data needed for successful query execution.

And for sure, it would not be useful or beneficial to use any SQL engine without support for indexes. We allow you to create single-column indexes and group indexes, and there are several ways to do this: from Java or .NET code, or from Spring XML configuration, and soon, in a couple of months, you will be able to do it using DDL, Data Definition Language, statements.

Okay, let's take a look at query execution; specifically, I want to talk about how we join data. In this picture, you can see that our cluster consists of three different nodes. Every node is primary for a subset of the data, and at the same time every node maintains indexes for that subset. Your application, we call it the client here, connects to this cluster, let's say using the Java API or the JDBC or ODBC driver, it doesn't matter, and issues an SQL query. Let it be a simple query that joins the data located in the organizations and employees tables. The most performant way to execute a query with a join and get a consistent result set is to collocate the data. There is a notion of affinity collocation. Let's say that in your organizations table you have a record for Apple Corporation, and with your query you want to get all the employees that work for Apple Corporation, filtering them on some specific fields. In the application layer, you collocate employees with their specific organization, meaning that all employees who work for Apple Corporation will reside, will be stored, on the same node where the Apple Corporation record is. Say the Apple Corporation record is located on node one; leveraging affinity collocation, all the Apple employees will go and be stored there as well.

What do we get from this? Using this ability, we can execute queries with joins in so-called collocated mode. As shown, when your client application issues a query, the query is sent right away, without any modification, to the server nodes. There, the query is executed over the local data set, the data set present on that specific machine, and every node returns a partial result set to the client. On the client side, we reduce these partial result sets and give the final result back to your application. The advantage of collocated execution is that we avoid data movement across nodes at the time the nodes need to join the data, and this is the fastest way to execute SQL queries with joins.

But at the same time, based on our practice and experience and on the feedback of our users, we realized that it's sometimes impossible to collocate all the data in a way that covers every join query your business use case needs. This is why at some point we decided to support non-collocated SQL queries. The only difference from the previous mode is that while a query with a join is being executed on every participating node, at the moment a node needs to join the data, it may go to the other nodes to preload missing records from there, because every node guarantees to provide a consistent result set and needs all the records required for the join. This is what actually happens and what's shown in this picture. This mode is disabled by default, because depending on the amount of data that has to be transferred during a join, it can give you a performance hit; what you receive in return is that you can support 100% of the SQL queries needed to successfully implement your business use case in your application.

A couple of words about indexing. Presently you define indexes using special annotations, as shown on the slide: the QuerySqlField annotation, which is used for Java objects. The same annotation is supported in our .NET library, or you can define your indexes in Spring XML configuration. And the same index definitions will later be supported as part of our DDL commands.
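Putting the collocation story and the annotations together in code: a minimal sketch of the organization/employee layout. The annotations (@AffinityKeyMapped, @QuerySqlField) are real Ignite APIs; the classes themselves are hypothetical.

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

// Composite cache key: orgId drives partitioning, so every employee record
// lands on the same node as the organization whose id it references.
public class EmployeeKey {
    private final int id;

    @AffinityKeyMapped
    private final int orgId;

    public EmployeeKey(int id, int orgId) {
        this.id = id;
        this.orgId = orgId;
    }
}

class Employee {
    // Visible to the SQL engine and backed by a sorted index.
    @QuerySqlField(index = true)
    private String name;

    @QuerySqlField
    private double salary;
}
```

With the data laid out like this, joins between organizations and employees run in the collocated mode described above. For queries that cannot be collocated, the non-collocated mode is switched on per query via SqlFieldsQuery.setDistributedJoins(true).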
The data that is stored in memory can be located in the Java heap region or in an off-heap region that is not visible to the Java garbage collector, and presently we have several implementations for indexes. Depending on where your data is: if the data for a specific cache, for a specific data set like organizations, is located in the heap region, then the indexes are located and maintained in the heap region as well. And the same is true for the opposite: if your data is in the off-heap region, the indexes are kept off-heap too. Going forward, in the next major release of Apache Ignite, we plan to discontinue the on-heap mode itself, because presently most of the users of Apache Ignite operate on data sets measured in hundreds of gigabytes, terabytes, and petabytes, and for data sets like that it's essential to put the data off-heap from the start if you want to avoid long stop-the-world garbage collection pauses.

A couple of words about our distributed DML. Honestly, DML support was released just a couple of months ago. Before that, you could modify the data stored in your cluster only using basic key-value operations. But now you can use well-known and familiar statements like INSERT, UPDATE, DELETE, and MERGE. And I just want to clarify how all these statements are executed internally. If you imagine a system with Apache Ignite and Postgres as the relational database that persists the data, then all the queries we are talking about at the moment are executed over the data set that is in memory; we don't execute SQL queries over the Postgres database. Meaning that when you execute, let's say, an INSERT or UPDATE operation, first the update happens in memory, and then Apache Ignite automatically goes to Postgres and updates the data there, in write-through or write-behind mode. It can do this transactionally, depending on your configuration, whatever you defined.

So you just need to create and deploy the cluster in such a way that you can hold all those 20 terabytes in memory. Yes, the data has to be in memory if you use Postgres as the persistence layer. There are some movements in the direction of having our own distributed disk storage, but that is something for the future. If you use a relational database like Postgres and you want to leverage our SQL queries, you have to preload all the data into memory; we don't go to Postgres when you execute SQL queries. If you issue key-value queries, let's say you want to get the value for key 10, and this key is not located in memory, then we can go to Postgres and preload it from there. But SQL queries, no, we execute them purely over the in-memory data.

Sure, there is always a way, but our general recommendation is that if you have a relational database like Postgres as the persistence storage, you should issue all the updates through the in-memory layer; otherwise, you need to sync up the in-memory data in some other way. And in general, yes, a partition is the minimal unit we operate on, but when you load the data, you don't load it per partition; you load data for an entire cache, let's say for an entire data set like organizations. This is how it works.
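Pulling the DML discussion together in code: a sketch of an UPDATE issued through the Java API. SqlFieldsQuery is the real class; the city cache and its columns are assumptions taken from the demo schema later in the talk.

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class DmlExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Assumes a "city" cache is already configured on the cluster.
            IgniteCache<?, ?> cache = ignite.cache("city");

            // The UPDATE executes over the in-memory data set; with write-through
            // enabled, Ignite then propagates the change to Postgres on its own.
            cache.query(new SqlFieldsQuery(
                "UPDATE city SET population = ? WHERE id = ?").setArgs(100_000, 42)).getAll();

            // SELECTs are answered purely from memory, never routed to Postgres.
            List<List<?>> rows = cache.query(new SqlFieldsQuery(
                "SELECT name, population FROM city WHERE id = ?").setArgs(42)).getAll();

            System.out.println(rows);
        }
    }
}
```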
As for management: today, during the demonstration, I will be using the Apache Ignite Web Console. In a nutshell, it's a configuration wizard that helps you prepare a configuration for your Apache Ignite cluster. When you are done with your configuration, you can connect to your deployed cluster and monitor different metrics, like CPU usage, memory utilization, and so on. And at the same time, you can manage it: start new caches, stop some of the nodes, restart them, and so on.

One of the features of the configuration wizard is the ability to import a schema from your database. This feature requires you to provide a JDBC compliant driver; using this driver, the web console connects to your database, grabs the schema and table definitions from there, and creates matching Apache Ignite caches for you, with all the indexes you had on disk. After that, once you have this configuration, you can trigger the preloading from your database in your application code, if you want to enable write-through and read-through mode between your in-memory layer and your persistence layer.

Next, when you're developing your application based on Apache Ignite and want to test some queries, or when you're in production and want to execute some queries and gather metrics, you can go to the web console's SQL queries tab and issue SELECT and DML queries from there. You can also see execution plans if you're struggling with a sluggish query. Queries monitoring is useful as well if you want the overall picture: what happened in your cluster, what sort of queries were executed in some period of time. If you spot slow queries, you can check the execution plan and adjust them to make them faster. Or, for instance, as an IT administrator, you can go to the currently running queries tab, spot long-running queries that might be utilizing your memory and CPU resources significantly, and stop them from there.

This web console is part of the Apache Ignite project, but at the same time, since Apache Ignite has ODBC and JDBC drivers, you can easily connect to the cluster from other tools, like Apache Zeppelin. For Apache Zeppelin, you just need to use Apache Ignite's JDBC driver, and you can execute the whole variety of ANSI SQL-99 compliant queries. Also, if you purchased a data analysis tool like Tableau, just take our ODBC driver and observe and analyze the data that is stored cluster-wide using Tableau.
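And triggering the preload mentioned a moment ago is essentially one call per cache. A sketch, with the configuration file name and cache names assumed to match what the wizard generated (the generated project wires in Ignite's JDBC POJO store, which knows how to bulk-load from the database):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class Preload {
    public static void main(String[] args) {
        // Start a node with the wizard-generated configuration.
        Ignite ignite = Ignition.start("demo-config.xml");

        // Pull every table from Postgres into its cache through the configured
        // store; afterwards, read-through still covers individual cache misses.
        for (String name : new String[] {"CountryCache", "CityCache", "CountryLanguageCache"})
            ignite.cache(name).loadCache(null);
    }
}
```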
Okay, now we have come to the demo. The demo is simple and straightforward; I'm not going to deploy a large cluster. For the sake of the demo, and to save more time for questions and answers, I'm going to deploy a single-node cluster on my laptop. I have a pre-installed Postgres database; it's a sample database that stores all the countries and cities worldwide, along with the population of every city. What I want to do is prepare a configuration for my Apache Ignite cluster. I don't want to prepare it from scratch; I want to connect to my Postgres database and import the schema from there. So here is my Apache Ignite Web Console instance. Actually, this instance is deployed on GridGain's infrastructure, but nothing prevents you from compiling the Apache Ignite Web Console on your own and deploying it in your own cloud or on-premise; here I'm simply leveraging the deployment that GridGain hosts. And this is our configuration wizard, so let's start with the definition of our cluster.

Here I need to define the name of my cluster, and if you take a look at the other parameters, the one that might be interesting to you is the Discovery SPI. This is a small component used by every node that attempts to join the cluster; it gives every single node the ability to find out the IP addresses of all the nodes that are in the cluster or might join it in the future. By default we use the multicast IP finder, meaning that when a node joins the cluster, it uses the multicast protocol to announce its own IP address and to get the addresses of the other nodes. But for this demo it's more than enough to use the so-called static IP finder, where I just predefine a list of TCP/IP addresses at which my cluster nodes will be deployed.

Next, we connect to my database, because I want to import the schema from there. This is the Postgres driver, and this is the connection string to my local database. Now we go to the next window. I want to import only the public schema. Here we see that I have only three tables, and for every table we will create a respective cache configuration. Every cache will be partitioned, which means that every node will store a subset of the data in memory. You can also define additional parameters here; I will skip them for now. Okay, the model is ready. We also got Java project classes prepared for your application, in case you want to deserialize the data stored in your cluster. Plus, here is our Apache Ignite schema: all the fields that might be used in your SQL queries. This is one of the indexes that was defined in the Postgres database, and we define the same index for the Apache Ignite in-memory data fabric; the same was done for the other caches, for the other tables. This is our schema.

Also, as I said, for every table we defined a cache configuration. The cache is partitioned; if you want to make it transactional, you can quickly do that by switching this parameter. If you want to introduce some redundancy level: by default, only the primary copy is stored in your cluster, but if, in addition to the primary copy, you want a node that stores a backup copy of each partition, you need to increase this parameter. And here you define the memory mode. If it's a partitioned cache, it means the cache is split into, let's say, 1,000 shards, or partitions. The first node might be primary for partition 10 but a backup node for partition 12; another node will be a backup for partition 10 but primary for partition 12. This is how it works; it's not a full replica in the partitioned mode. If you want full replication of the whole data set across all the cluster machines, you need to use the replicated mode.

The final parameter I want to set here is the history size, which I'll demonstrate later. The history size means that queries executed on a node are remembered for the sake of monitoring; I just want to keep the hundred latest queries. And I want the same for the country cache: the latest hundred queries. Okay, let's keep these two parameters and go directly to the summary tab. So here it is.
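For reference, expressed through Ignite's Java API rather than the Spring XML the wizard emits, the configuration we just clicked together looks roughly like this; the class names are real Ignite APIs, while the cache name and the address list are the demo's assumptions.

```java
import java.util.Arrays;

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class DemoConfig {
    public static IgniteConfiguration create() {
        // Static IP finder: nodes look for peers at fixed addresses, no multicast.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi discovery = new TcpDiscoverySpi();
        discovery.setIpFinder(ipFinder);

        // Partitioned, transactional cache with one backup copy per partition.
        // CacheMode.REPLICATED would instead keep a full copy on every node.
        CacheConfiguration<Integer, Object> city = new CacheConfiguration<>("CityCache");
        city.setCacheMode(CacheMode.PARTITIONED);
        city.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        city.setBackups(1);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discovery);
        cfg.setCacheConfiguration(city);
        return cfg;
    }
}
```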
So the web console prepared the Apache Ignite configuration for us, some of the Java classes, a Dockerfile, a pom.xml, so let's go ahead and download this project. You can also download a project with some template classes. Here it is. Actually, I downloaded this project before, and it's absolutely the same project. Here we have the configuration that was prepared by the Apache Ignite Web Console. The only thing I changed is that I defined the connection parameters so that Apache Ignite can connect to my Postgres database in order to preload the data from disk and then enable write-through behavior, meaning that when the data is updated in memory, the update goes to the persistence layer.

So let's start a single Apache Ignite node. This is how it looks; I just need to pass this configuration as a parameter. And while the node is being started, let's go to the monitoring tab of our web console. I want to see... yes, here is my node, the only node I have in the cluster. Here you can see CPU and heap usage and the caches that are deployed; the caches are empty. So let's preload: the web console generated a special template class for us that preloads the data from the Postgres database. Here we connect to the cluster and trigger the preloading for all the caches we have. The database is tiny, but it should be enough to demonstrate the basic capabilities of our SQL engine. It looks like we're done preloading; we just had less than a megabyte of data, which is absolutely not a typical use case, just a demonstration.

Now, having the data in memory, we can go to the queries tab. Here I have some predefined queries. For instance, here I'm trying to find the most populated countries in the world. If we execute this query, we find that China is the most populated and the United States takes third place. You can also see the execution plan, for instance if you need to debug your query. There are different chart outputs supported for some of the queries, but since this one is just an aggregation, it looks like we can't use the other output formats.

Okay, let's go below. Here I have a query with a join: I want to get the top three most inhabited cities, not in the whole world, but in particular countries, namely the United States, Russia, and China. And we find that the most inhabited city is Shanghai, Moscow goes next, and then we have New York.

Next, let's demonstrate the write-through capabilities. Here I have some city in memory, somewhere, I hope, in New Zealand. Let's double-check that I have the same city in the Postgres database. Here it is; the population is the same. Now I want to update the population to some other value, and I want to make sure the data is updated both in memory and on disk. Here it is, the update completed. Let's check the in-memory part: yes, it was modified, and the same should have happened in Postgres. Okay, Apache Ignite automatically went to Postgres and performed the update there. And now the same with a simple and straightforward DELETE query: let's delete this record from memory. Okay, we no longer have this record in memory, and it has been removed from Postgres as well. The same.
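The same join can, of course, be issued from application code rather than the web console. A sketch, assuming the city and country caches imported from the sample database; the schema-qualified name is how Ignite SQL addresses a table living in another cache, and the exact cache names depend on the import step.

```java
import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class TopCities {
    // The three most populated cities in the given countries, joining the
    // city and country data across the cluster.
    static List<List<?>> topCities(IgniteCache<?, ?> cityCache) {
        return cityCache.query(new SqlFieldsQuery(
            "SELECT city.name, country.name, city.population " +
            "FROM city JOIN \"CountryCache\".country ON city.countrycode = country.code " +
            "WHERE country.name IN ('United States', 'Russian Federation', 'China') " +
            "ORDER BY city.population DESC LIMIT 3")).getAll();
    }
}
```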
And the final thing: while we were executing these queries, query statistics were being aggregated for us. If we go to this page, you can see the number of times every query was executed, the minimum execution time, the average execution time, and so on.

As for the timings, yes, I think it's because the table is tiny. What do you mean? Well, it actually depends, because this is the total time it takes for the web console to connect to a special agent that sits in between and send the query to it; the agent then forwards the query to my cluster, and the cluster goes on from there. But in general, no: if you go to our benchmarks and take a look at them, you will find that you won't take a performance hit if you switch to Apache Ignite. In general, as a Postgres user, you might want to use this scenario to accelerate your read operations, so that all the reads go to your Apache Ignite cluster; when you perform writes, the writes should take approximately the same time, since the write is applied in memory and then on disk. If we were slower than Postgres, for sure no one would use this in production. I think it's just a matter of my laptop and the number of communication hops between my web console and the cluster: this application is deployed on a server I have to connect to over Wi-Fi. This is what happens.

Yes, here we had three tables, and as you saw, it took a matter of seconds to turn the tables into caches. What you as an Apache Ignite user usually do is capacity planning for your cluster: you calculate how much room you need in memory, given the data set you have on disk, and then you know what you can afford in terms of hardware. For instance, we have one customer with a cluster of 1,000 machines, where every machine can store up to a terabyte of data; with machines like that, for some data sets it might be enough to have only one or two nodes in the Apache Ignite cluster. If you can't afford buying such expensive machines, you can buy machines that have, let's say, 64 gigabytes of memory, and then you just need, say, 10 or 20 machines in your cluster, depending on your capacity planning, and you scale out. This is how it happens; this is the bread and butter of the Apache Ignite in-memory data fabric. If you realize that you no longer have room on some of your cluster machines, you just take one more piece of hardware, add it to the cluster, and the data is rebalanced across the machines.

For this to work, you need to have as much memory as your database size. Say it again, please. For this to work, you need to have as much memory as your database size, if you want the entire database in the cache. Yes, correct. Then why do we even need this? In Postgres we could just as well set the shared memory buffers to the size of the database, and most of the database would already be in memory. The answer is load balancing.
It's fine if you use some sharding technology created for Postgres, but if you still rely on operating system caches on top of a single Postgres instance, all of this lives on a single machine, and when you have, let's say, millions or tens of millions of operations in some period of time, that single machine can become a bottleneck. This is actually what happened with many financial institutions and banks: people used to run, let's say, Oracle Database, and they had to buy expensive mainframe machines in order to scale up, to keep up with the growing workloads. But there is always an alternative: if you don't have that kind of money and you want to boost performance on affordable commodity hardware, then in-memory data grids and in-memory databases (VoltDB and similar systems) are an excellent option, because they let you scale out on commodity hardware and they take care of load balancing your queries.

Transactions? We are an ACID-compliant system: we are fully transactional, we support distributed transactions, pessimistic and optimistic, with different isolation modes. Two-phase? Yes, we implement the two-phase commit protocol.

We support all the types supported by the H2 database; as I said, we use H2 as the SQL engine, so everything H2 supports is what is supported right now. Can you query the data with more than SQL? We also support full-text search queries and geospatial queries; you just need to place the data into your cluster. Basically, we store data in our own binary format, a special serialization protocol we implemented, and on top of this format we can execute SQL queries, full-text search queries, whatever you like. Which SQL dialect? ANSI SQL-99. We don't support Postgres, MySQL, or Oracle specific syntax, only what is defined in the specification. H2 is an ANSI SQL-99 compliant database, and that makes us an ANSI SQL-99 compliant distributed engine as well; this is what we did. I do agree that it depends on the use case, for sure, but for a lot of our users and customers, at least for OLTP workloads, this was not an issue. No one significantly complained at the time they were migrating, let's say, from an Oracle-like solution to an Apache Ignite solution. If something is missing, we can usually find a workaround; usually the customers are able to find a standard alternative to a function they were accustomed to using in, let's say, Postgres or Oracle.

We don't support stored procedures, but we do allow you to define your own functions: if you want some function to be executed on the server side, you can define it and call it from your SQL query. And as for procedures themselves, the equivalent in the in-memory data grid world is, in my understanding, a MapReduce-like framework: you create a special computation and send it to your cluster. This is what usually can be done.
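As an illustration of such a server-side function: a minimal sketch using Ignite's real @QuerySqlFunction annotation and its registration hook; the function itself is a made-up example.

```java
import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class SqlFunctions {
    // Once registered, callable straight from SQL, for example:
    //   SELECT name FROM city WHERE sqr(population) > 1000000000000
    @QuerySqlFunction
    public static long sqr(long x) {
        return x * x;
    }

    // Functions are registered per cache configuration.
    public static CacheConfiguration<Integer, Object> withFunctions() {
        CacheConfiguration<Integer, Object> cfg = new CacheConfiguration<>("CityCache");
        cfg.setSqlFunctionClasses(SqlFunctions.class);
        return cfg;
    }
}
```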
Sometimes you need to, sometimes you don't; it depends on what you are doing. This is good for a subset. It depends on the industry, right? Yes, for financial, maybe it's okay, but for scientific... No, no, this is what I was saying: presently, Apache Ignite is mostly used for OLTP workloads, and there is a minor share of customers in the OLAP realm. As for OLTP workloads, most of our customers and users are financial institutions, like Sberbank, the biggest bank in Russia, as well as Barclays, ING, and others; all of them use us for SQL execution and so on. Plus we have customers from telecom and IT, all kinds of industries. But for sure, we are not competing with Postgres; we just try to make things faster for some of the use cases Apache Ignite might be useful for. This is what I'm saying. Yeah, there are always different options, you're correct; it's up to you, whatever you like to use more and whatever you need at this particular moment. So you can look not only at Apache Ignite.

Actually, I'm done. Let me just quickly show you the final slide, thirty seconds. Here is our roadmap related to SQL. Presently, you can define indexes in a static manner only, meaning that you cannot alter them while your cluster is up and running; this will be fixed and improved in a month. Also, we will support distributed DDL soon, so you will be able to create caches and create, alter, and drop indexes using standard DDL commands. And we spend most of our effort on improving performance, first for OLTP workloads and partially for specific OLAP workloads. Thanks for your time. If you have any questions, we can talk about them.

By default we don't, but we do have several ways you can do it on your own in your application code, if you have your own compressor.