Please put your hands together for our next speaker. Can you hear me? Maybe with the mic? Can you hear me? Yeah. Cool. So, welcome. It's my first time in Singapore, and I love this city. Today I'd like to present the multi-model concept, and we'll start from polyglot persistence, because multi-model is really an answer to polyglot persistence. How many of you are already familiar with graph databases? How many of you are already using a NoSQL database? Okay, cool, so we can start. I'm sorry to start with really bad news: according to the analysts, most big data projects will fail this year. More than half of the projects. Why? Because of this. Today, if you want to build a super-scalable, fast application, people tell you that you need polyglot persistence, right? So this is your persistence layer. Take a booking.com-like application as an example. If you need to store a product, maybe a document database is a good solution, because with one call you get all the data about the product, right? So maybe MongoDB would be a good fit. Then maybe you want to do recommendations: if you liked this hotel, maybe you could like this other hotel, right? This is the classic recommendation system, and a graph database is usually used for recommendations, so let's say Neo4j, because it's the most popular graph database. Then you want to store transactions, when people pay, and you care about ACID consistency, so you want a relational database, because those are real transactions; let's say Oracle, or any relational database with transactions. Then you also want a search engine, right? I want to look for all the hotels in Singapore, so you need something like Elasticsearch, a search engine across your data. And then, if you want to scale up, you could use something like Redis as a cache, or just to store sessions. So you end up having a lot of databases in your application. And the bad news is that there are no standards, not even within the same category: for example, between document databases like MongoDB and CouchDB there is no standard. So it's up to you to move data across all these databases, writing ETL jobs, or maybe worse, having the same application write the data into all the databases. Everything is up to you. And this can be really expensive, because maybe you are an expert on MongoDB but you don't know anything about Neo4j, so you need more people with different skills, right? And this is super expensive. Even for some enterprises it can be too expensive. So the main problem is that integrating a lot of databases together is really, really expensive. So, back to big data. Big data is a lot of data, right? That's fine, I can have a lot of data. But data without relationships is just a lot of data. It's the relationships, actually, that add incredible value to the data. Now, if you look at the NoSQL space, there are something like 300 NoSQL databases, but we can say there are four main categories: key-value, document, column, and graph databases. Graph databases are different from the other categories. Why? Who knows why? Why are graph databases different from key-value, document, and column stores? Any guess? No guess? Okay.
Because they take care of relationships. In the other databases there is no trace of relationships at all. If you want a relationship in MongoDB, for example, you have to store the ID of the other document and you have to do the join yourself, in your application. So what the graph database is actually offering is doing the relationship work for you. And if you look at history, why do we have so many denormalized data warehouses? Because the join cost was so expensive. For this reason we flatten entire structures into one table with, like, 100 columns, so that the analytics, the data warehouse, is super fast, because you don't do the join at all. So why do all the NoSQL categories apart from graph databases try to avoid relationships? Because of the join. The join is evil. The join is what slows down any relational database. Why? How many of you are familiar with relational databases? Okay, most of you. So, the join works this way: of course you create an index, right? Any time you connect two pieces of information, you resolve the relationship by looking up the ID through the index. Okay, but with the index it should be fast, right? So why are joins so slow, especially when you have a lot of records? This is why: most indexes are based on a balanced tree. If I want to look up my name, there is the classic B-tree, right? I start at the root of the tree, of the index. I'm looking for "L": "L" is on the left, the next level is on the right, right again, right again, and I find my name. So any time you look for a key, a name, in the index, you are doing this walk. This is easy here because there are just five levels, but can you imagine if you have a million, or a hundred million records? This tree can be very deep, right? And if you join three or four tables, you can end up doing millions of these lookup operations. That's why the join is slow: the join is a log(N) operation. And this is why, with relational databases, and I'm sure everybody here has seen this, the bigger the database, the lower the performance. The reason is that the index keeps growing, so you get a deeper and deeper index. This is the main limit of relational databases, and of any database that resolves relationships through an index. Cool. I think this is the most important thing to understand about relationships. So, how can you manage relationships without all these problems, without putting all this pressure on the developer? Before we look at how a graph database manages relationships, let's take a crash course on graph databases. A graph database is just a set of vertices, or nodes, which is where you store the data, and edges, or arcs, which is how you connect them, right? This is the simple graph of Luca and Singapore: we have two vertices, two nodes, and one relationship. There are a lot of graph types, but I'm focusing on the property graph, because most graph databases implement the property graph model. In this model an edge has a direction, right? Luca visited Singapore; "Singapore visited Luca" makes no sense, of course. But the cool thing about edges is that I can traverse them in either direction. So I can start from either end.
If I want to see all the places where Luca was in 2016, I can start from Luca and follow all the outgoing connections, right? If I want to know all the people that visited Singapore this year, I can start from Singapore and follow all the incoming connections. So edges can be traversed in both directions, but when you create an edge you give it a direction, which carries the meaning of the edge. Of course you also need properties, so any vertex can have properties. Graph databases usually work schema-less here: you just put properties on the vertex, like a document. But you can also store properties on the edge: you can say Luca visited Singapore in 2016, for example, okay? So you can have properties on both vertices and edges. An edge connects exactly two vertices, so how do you do one-to-many? You just use multiple edges, and every single edge can have different properties. Okay, so with a graph database you can still query it like you would a relational database: for example, you can do SELECT starting from Customer and you get all the results, and it just works. But with a graph database you can also take a different approach. How does it work? I can have all my clients connected to a root vertex called Customer. The blue vertices here are the metagraph, additional information that I store in the database just to reach the real information quickly; the green ones are my actual data. I create a root vertex Customer and connect all the customers to it. So if I want all my customers, I start from the Customer root node and traverse all the outgoing connections, right? I can also create a SpecialCustomer root vertex where I connect all the special customers, so if I want the special customers I don't do a SELECT with a WHERE condition, I just follow the relationship from that root vertex to all the connected customers. And the same for the products. Since everything is connected in the graph, I can move anywhere, in any direction. If I want to know, hey, who bought the White Stone, I can start from the product, follow the incoming connection to get the order, and follow the incoming connection again to get the customer. So incoming, incoming, and I find the person who bought the White Stone. Outgoing, incoming, I can move in any direction in the graph. But again, how can the database manage relationships without using a join? The trick is index-free adjacency: to traverse a relationship, a graph database doesn't use the index at all. How does it work? It's very easy: we use pointers, persistent pointers. They are like memory pointers, but persistent. Any time you create a vertex or an edge, OrientDB, like any graph database, assigns it an identity, a record ID: the first part is the cluster, the "table" where the record is stored, and the second part is the position inside that cluster. So it's an absolute position of that information in the database. When a customer is connected to its orders, the record actually contains a list of these pointers. So when I traverse a relationship in a graph database, it's always a constant-time operation, because I don't go to the index to look up the physical position of the record: I already know the position, because it's stored as a pointer in the relationship. This means we jump from a log(N) operation to a constant one: O(1).
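To make that concrete, here is a minimal sketch in OrientDB SQL, assuming hypothetical classes Customer, SpecialCustomers (a metagraph root), Product, and hypothetical edge classes HasProduct and PlacedOrder; the names are illustrative, not from the demo:

    -- relational-style access still works: scan a whole class
    SELECT FROM Customer

    -- graph-style access: start from a root vertex of the metagraph and
    -- follow its outgoing edges; no index is touched (index-free adjacency)
    SELECT expand(out()) FROM SpecialCustomers

    -- who bought a given product? walk the incoming edges twice:
    -- product <- order <- customer
    SELECT expand(in('HasProduct').in('PlacedOrder'))
    FROM Product WHERE name = 'White Stone'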
In practice this means that, no matter whether you have a thousand or a million vertices, the performance of traversing a relationship stays constant. It's always the same. And that's basically it: that's the whole crash course on graphs. Vertices and edges, there is nothing more. Graphs are so simple, and that's exactly why they are so powerful. Let's go back to polyglot persistence. Polyglot persistence is just a term, two words actually, to say that you are using multiple databases in the same application. If we look at history, the relational model goes back to 1970 and dominated for a long time; in between there were other approaches, like object databases, which are now used only in niche markets. Then, starting from 2009 with the NoSQL movement, it was no longer a given that the database had to be relational: I could use a document database, or a graph database. And around 2015 we started to see the term multi-model database everywhere; somebody calls it NoSQL 2.0, a kind of evolution of the NoSQL databases. So what is multi-model? Multi-model is one database able to handle different models at the same time: graphs, documents, key-value, spatial, full-text, object-oriented, reactive. These are just examples, you can add more models. With a multi-model database the same engine manages all these models at once, so maybe I have documents, but I can also connect my documents like a graph, in the same database. When we started with multi-model, nobody was listening, so we were pretty much alone, and at the beginning it was pretty hard, because nobody knew they needed a multi-model database; for a long time we positioned ourselves mostly as a graph database. Today multi-model is very popular, but the milestone, when multi-model became, if not mainstream, at least recognized as a category, was in 2012, during a keynote at a NoSQL conference, when the word multi-model was introduced to the database world for the first time. And today we can say it's very popular: even Cassandra and MongoDB describe themselves as multi-model now. So the leading NoSQL databases are already multi-model, or claim to be, right? Now, there are two kinds of multi-model databases. There are native multi-model databases, where all the models live in the engine, and the engine itself handles all of them at the same time. And there are multi-model layers built on top of an existing engine: maybe you have a key-value store or a relational engine, and you add, say, a graph database layer on top. The problem with layers is that it looks like a multi-model database, but you pay for a lot of transformations, because every time you go through a layer you transform the graph into tables, for example, and vice versa. So layers are very expensive. Today you can find products that say they are multi-model, and it's true, but they are just layers on top of an existing engine. OrientDB was the first multi-model database, and it is a native multi-model database. It's open source, Apache 2 licensed, so you can use it for any purpose, for free. There is also an Enterprise Edition, which is commercial, but we are not getting into that now.
So, a multi-model database of course has the features of many databases at the same time, right? Plus some features that are not present in the individual databases, for example the reactive model, which I'll explain in a minute. The point is that with just one database you get all these features, without needing polyglot persistence in your application. Okay, so what about the design? Let's look at my application from before. This is super simple, of course; a real one is more complex. I have a user and a product, and the product is my hotel. The user creates an order: I want to book this hotel, right? Maybe after I've booked the hotel and it was great, I write a review about the product. And while I'm booking the hotel, the site shows, like booking.com does, that 20 other people are booking the same hotel at the same time, right? So we want to monitor the sessions in real time. If you use polyglot persistence, like in the second slide of this talk, you have to spread this across different databases: the user, for example, has to live in the cache and in the recommendation system, because the user is referenced by the reviews and also by the orders, right? So you have a lot of duplicated data, and it's up to you to keep all of it synchronized, because there is no standard that keeps this data synchronized for you. Every time you change the data, you have to update the ETL, or update that information in the other databases. It's very, very complex. With multi-model your model stays pure, it stays plain: one multi-model database is able to handle a pretty complex domain with just one model. So, there are a lot of models in multi-model, right? We have graphs, documents, and I'm not going to talk about the document database features that everybody already knows, but I want to put your attention on the reactive model, because this is something very, very new. Anybody familiar with the Reactive Manifesto, or ReactiveX and RxJS, for example? Okay. So let's talk about the idea of being reactive. Any time you use a database in production, maybe a relational database or anything else, if you want to get updates you have to poll the database: hey, do you have updates for me? No. Do you have updates for me? No. And you don't want to kill the server, right? So maybe you are not doing this every five seconds. So you waste a lot of resources on the server, because maybe you are polling for data that won't be in the database for a long time. Think about a chat application: you have all the clients, and every time there is a message you want the update immediately, right? Can you imagine if you were writing Facebook, and you are logged in waiting for a message or a new friendship request, and every client polled the database every second? With that many clients you would kill any server. So this is the situation: I poll for updates for a long time, and when the data actually arrives, maybe, if I'm lucky, I get it in near real time, or maybe I have to wait five seconds for the next check. So I waste a lot of resources and I still have latency. Instead of polling, I can query the database with a publish-subscribe mechanism: I send the same query, the same SQL, just prefixed with LIVE (see the sketch below).
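A minimal sketch of the live query idea in OrientDB SQL; the class ChatMessage and the field toUser are hypothetical names used only for illustration:

    -- instead of polling with:
    SELECT FROM ChatMessage WHERE toUser = 'luca'

    -- subscribe once with the LIVE prefix; the server then pushes every
    -- new or changed record matching the condition to the client:
    LIVE SELECT FROM ChatMessage WHERE toUser = 'luca'

    -- the subscription returns a token that can later be cancelled with:
    -- LIVE UNSUBSCRIBE <token>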
With a live query, the client just listens on one subscription, and every time there is a record that matches the WHERE condition, the data is sent to the client immediately; it's a push from the server to the client, right? So I run my query once at the beginning, I subscribe, and as soon as an update arrives, from an external source for example, I get it immediately. This is real time without wasting any resources, because the client is just waiting. It's a way to make applications real time these days. Okay, now, about the multi-model database: if you plot data complexity on one axis and relationship complexity on the other, graph databases handle the complexity of relationships very well, but they don't handle complex data. For example Neo4j, which is the leader among graph databases, doesn't support rich types: if you want to store a salary you have to use a double, losing precision, and you cannot store embedded documents, lists, or arrays inside the record. With the document model, instead, you can have embedded objects, lists, arrays, maps, but you don't have real management of relationships. With a multi-model database you get the best of both worlds. You could even argue that relational databases succeeded for such a long time exactly because they were a reasonable compromise between the two. Cool, so let's see... yes, a question: can you subscribe to more than one query at the same time? Yes, you can subscribe to multiple queries at the same time. A scenario? For example, in Facebook I want to look for new messages, new friendship requests, and maybe the news feed, or whatever Facebook calls it. So I would subscribe to three different queries: one for messages, one for friendship requests, and one to update the stream. You can subscribe to any query you are interested in, and the client just waits for any of them. Another question? We can have a longer Q&A at the end of the presentation. Is the data structure the same for all the different models? Yes; it's a good question. If you use the graph interface, for example, you can still use the document interface; all the interfaces work on the same records. We have one underlying structure that is flexible, and it's the same for all the models. Which slide? The one at the beginning? This one? Okay: these are the reasons why polyglot persistence is very expensive. Maybe if you are Facebook or LinkedIn you can afford it; a smaller company with a smaller budget usually cannot. Cool, so let's go to the demo. If you go to the website, orientdb.com, you can download the distribution; it's just a zip file. You unpack it and you get this kind of directory; this is the OrientDB directory. If you go into bin, you find server.sh, or server.bat if you are on Windows. If you double-click on server.sh, OrientDB starts. You can see that the first time it asks you for the root password; we already set it before the presentation. You can see that OrientDB listens on two different ports: 2424 is the binary protocol, usually used by the drivers, and 2480 is HTTP. Since it is an HTTP server, I can use OrientDB with just curl or a browser (a quick sketch of that below), and since I can use a browser, I can also connect to OrientDB with Studio, the web tool. Let's see.
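For example, a rough sketch of hitting the REST API with curl; the database name and credentials are just placeholders, and the exact endpoint paths may differ slightly between versions:

    # run a SQL query over HTTP (port 2480); the query text is URL-encoded,
    # and the trailing number is an optional limit
    curl -u admin:admin \
      "http://localhost:2480/query/GratefulDeadConcerts/sql/select%20from%20V/10"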
This is Studio, the tool that we provide in the distribution; it's the way you can interact with the data. I have a few databases stored here, and out of the box there is a demo graph, the concerts graph. But you can download other example databases by clicking on the cloud icon: there is Game of Thrones, a beer database, a whiskey database, so you can play with the data. Any questions so far? Yes? So, we are looking only at the Community Edition here. I select the Game of Thrones database, I enter the password, and this is Studio connected. Here I can type any query. OrientDB works with SQL, and this is good news, because everybody knows SQL. So I can SELECT FROM V. V: we don't have tables, we have classes, because OrientDB is object-oriented in some ways. V is the base class for vertices and E is the base class for edges. If you want to model your customers as vertices, you create a class that extends V. So, SELECT * FROM V: this is the result, these are the records, this is the class. Since I queried the base class, I get all the vertices. We show a table representation by default because everybody is familiar with tables, right? Everybody comes from the relational world, from Excel, and a class, after all, is basically a table. If you want to see a subset as a graph, you select it from the menu on the top right and click on Graph. With a double click on a node you load all its connections. This dataset is Game of Thrones; I don't know if you are a fan of the TV show, but there are all the characters and the battles between the houses, so I can see all the relationships. If I click on a vertex, I see all its properties on the left: the name, the title, and so on. I can also change the icon and the color, and every time I change this representation, other users see the same icon, so I can actually share the same visualization with other users. This is a tool that we provide in the bundle, and by looking at the graph it's very easy, even for people who don't know anything about IT, to understand the data, because they can see the data and the relationships between the data. I can even change the data this way. You see, with just a double click I'm expanding edges and connections, and I can see the other connections. Okay. I can zoom in and zoom out; up to here it's close to what Neo4j provides, but we do something more: it's not just a visualization tool, you can interact with the graph. For example, I want to create a new connection from this guy. I click on the edge tool and I can create a link: you see, I'm creating an edge right now, towards this location. Every time I create a new edge it asks for the class, the class of the relationship, and these are all the existing relationship classes, like "founder of". Since graph databases can work schema-less, I can add properties to my edge, so I can add a field, say a year, and pick the type, right? Okay. This property belongs just to this edge, not to all the edges of that class. So I can work schema-less, with every single instance having different properties, or I can define a schema: I have the choice. So I can really interact with the graph, change relationships, for example delete this edge. This representation is live and I can work on the graph directly. Okay, and we have a schema: in this case there are a lot of classes already defined, like Animal, Man, and so on, a lot of them.
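A minimal sketch of doing the same things from SQL rather than Studio; the class and property names here are illustrative, not the ones in the Game of Thrones dataset:

    -- classes play the role of tables; vertex classes extend V, edge classes extend E
    CREATE CLASS Character EXTENDS V
    CREATE CLASS Location  EXTENDS V
    CREATE CLASS Visited   EXTENDS E

    CREATE VERTEX Character SET name = 'Luca'
    CREATE VERTEX Location  SET name = 'Singapore'

    -- an edge with its own property, even without declaring it in the schema
    CREATE EDGE Visited
      FROM (SELECT FROM Character WHERE name = 'Luca')
      TO   (SELECT FROM Location  WHERE name = 'Singapore')
      SET year = 2016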
I can add classes, and you can see that I can create indexes. But you remember that graph databases don't need indexes, right? Well, we don't use indexes to traverse relationships, because we have the direct pointers. But if you are looking up something like "give me the client with the name Luca", you need an index to jump to the record with the name Luca. So usually a graph database uses the index only the first time, to find the root vertex; from there you traverse the graph in any direction. A good way to picture a graph database is a huge web, and you are the spider: you are not querying the database anymore, you are the spider moving across this web to do your research and get your data, right? Okay. On security, we have role-based security: users and roles, and you can decide that one role can read or write a class, or delete from it, okay? We also have record-level security. In the same class, for example, say you have a class Document in a CMS-like application, and two different users write into the same class: you can enable record-level security so that every user only gets read access to their own documents. This is perfect if you have a multi-tenant application on the same database. We have SQL, but you can extend our SQL in an easy way. For example, I'd like to create a hello-world function in JavaScript; we support any language on top of the JVM, so it could be Groovy, Scala, or any other JVM language. I need a parameter, say "name", and I want to return a greeting with that parameter. I save my function, and I can even test it here: if I run it with a name, you can see the result. "Hello Luca", right? So my function works. The cool thing is that I can call this function directly from SQL. So if I run the same SELECT as before, but instead of the raw name I select hello(name), it calls the function hello for every record and feeds the name into the function, right? So I can extend the SQL language, and it's just JavaScript, there is no proprietary language (a minimal sketch of this is below). And every function is automatically exposed on the HTTP REST server. So if I call the function over port 2480, this one, let me zoom in, it's invoked via HTTP. Since the server doesn't know whether your function changes the database, you have to declare it idempotent, saying "I am not changing the database", so it complies with the HTTP semantics. Let me go back, flag it as idempotent, save, try again. Okay: "Hello world". So all the functions are automatically on the HTTP server. This is perfect if you want to create microservices backed by OrientDB, and if you want to scale out with more servers, you just add servers: both the data and the functions are replicated. Inside a function I can even execute queries, I can do a lot of things. For example, I need... I don't remember the exact syntax, so maybe this is not working. Okay, the syntax is wrong, it's not working, let me skip it; anyway, there is a main object that gives you the current database instance, and from there you can execute queries, insert data, whatever you want. Okay. With the Enterprise Edition you also get the profiler, auditing, incremental backup, so there are additional features on top of the Community Edition. But what you have seen is the free version. If you want to play more with the Game of Thrones database, we published it, and you can import it just using the browser.
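To make the stored-function idea from the demo concrete, here is a rough sketch in OrientDB SQL; the function name and the class it is applied to are illustrative, and the exact CREATE FUNCTION options can vary by version:

    -- a JavaScript function stored in the database, callable from SQL and over REST
    CREATE FUNCTION hello "return 'Hello ' + name;"
      PARAMETERS [name] IDEMPOTENT true LANGUAGE javascript

    -- call it from SQL for every record of a (hypothetical) class
    SELECT hello(name) FROM Character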
So the multi-doer databases are so flexible that they can be used they can replace a lot of databases. Right? We have many focus on security too. Just a quick look. So Barclays is also the canvas plugin. We have like a review of the HTTP server. We have strong encryption everywhere. We have soft password. So it's very strong in security. Even though most of us are not security they don't really care about security. What about this security? We have a multi-master or active replication model. This is something new for graph databases. For example, Neo4j has just a master's label. So with a master's label usually if you're one of the servers no matter how many servers you have the bottleneck is with the master. So all the write has to go to the master. The multi-master database you can pick any node to read and write. So we normally have an automatic discovery system. So if the multicast is enabled usually in the cloud you have to define the IP. So we have the plugin that we get the IP from the configuration. So let's take an example. You're probably just one server but you need more power. So after a while it's something that is passive because I had more users than one week ago. So you start a new server when the user is started it's actually looking for the caster. It's a caster. If the caster is in the user console matches it joins the caster. Automatically the database is deployed to the new server. As soon as the database is deployed and it's coming online all the clients, the existing client connected to the previous node they are notified with a push message about the existence of the user. And the same with the server is available. So all the clients always know about the caster configuration. These are out to the client that one node is applicable. They can actually repeat the transaction and again another node. This is trustworthy. We get the failure and we automatically try the same operation with the user. So we have no failure in the application. Everything happens under the code. We have transaction. So we have a transaction on the base. It's not very common. It's not simple. Actually the base consistency works for a lot of use cases but sometimes with the transaction. I've seen a lot of cases where there is a transaction and of course it doesn't work very well and this is a lot by far. So we have a very strong consistent database but you can like the consistency to be a better consistency if you want. I have a question with regards to the you think that this is an eventually consistent setup but how do you guarantee the testing transactions? By default we have a strong consistency setup. So by default we have a portal mechanism. So the portal by default is the majority. So for example with three servers at a time we apply a new record it has to wait that two servers actually reply okay. As soon as the majority in case of three servers is two we provide the okay to the client. So in this way the database is always consistent and if you background any servers on the with the delay and we already provide the okay to the client and the other server doesn't agree with the portal is the server that is the coordinator just reply force the other server to act the same way because the same value is owned by the majority. So the other base is always consistent. You can realize the reputation probability to act to be a better consistency. How does this work across the server? Good question. What about across the server? 
With the Enterprise Edition we provide a plugin where you can define the quorum as local to the data center, so you don't pay the cost of waiting for servers in another data center; it's a local-quorum feature. But that is Enterprise only. If you use the Community Edition, our suggestion for multi-data-center setups is to avoid paying the replication cost synchronously and use asynchronous replication: you relax consistency, it's still consistent eventually, but at least you stay fast, because you don't wait for the server in the other data center to respond. Okay. And all of this is basically zero configuration: there is no complexity, you just add servers and they automatically do the job for you. You can tune the quorum, the consistency settings and other parameters, but by default it just works in a strong consistency model. There is another interesting way to deploy it, which is embedded mode. If you are lucky enough to use Java for your application, or any language on top of the JVM, you can embed the OrientDB server in the same JVM as the client. Server and client share the same JVM, which means you avoid the TCP/IP overhead completely, and it's blazing fast. And you can still scale out the application: you can have a lot of these boxes where the application and the server live in the same JVM. This is the best configuration for performance, of course. Any question? Cool. What about integration with different sources? We have a JDBC driver, so anything that works on top of JDBC can interact with OrientDB. Sorry for my voice, it's almost gone. We also provide an ETL tool. It's a very simple ETL; you can of course use whatever ETL tool you want through the JDBC driver, but we have our own ETL that you drive with a JSON configuration like the one on the right; it's just a matter of writing the configuration. ETL means extraction, transformation and loading: you extract data from a source, transform it, and load it, and it's kept very simple on purpose. In this example I'm extracting from MySQL: you can see there is a JDBC driver and the connection parameters, and the query is basically a SELECT * FROM client. I'm transforming whatever comes into vertices of the class Client, and I'm loading them into OrientDB. That's it: with this simple configuration I'm loading an entire table from MySQL into OrientDB (a rough sketch of such a configuration is below). We also created Teleporter. Teleporter is an automatic tool for when you want to synchronize an entire relational database into OrientDB: it's basically one click and it just works. The tool is smart enough to understand, for example, when a set of tables is really modelling inheritance: in a relational database you often simulate inheritance, because the database doesn't understand it, but OrientDB does have inheritance between classes, so Teleporter can detect that pattern and restore it as real inheritance. Continuous synchronization with Teleporter is Enterprise only, but if you just want to import a database once, I suggest you download the Enterprise Edition trial, it's 45 days, import your database, and then keep working with the Community Edition. The Enterprise version is useful when you want to keep the database synchronized every day, or every minute: you don't just import the data once, you keep it in sync. In that scenario you might keep, for example, Oracle as the master database, because maybe you have a lot of legacy applications on top of it and you cannot just replace Oracle with OrientDB.
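A rough sketch of what the ETL configuration described above looks like; the connection details are placeholders, and the exact keys may differ slightly from the current ETL documentation:

    {
      "extractor": {
        "jdbc": {
          "driver": "com.mysql.jdbc.Driver",
          "url": "jdbc:mysql://localhost/crm",
          "userName": "root",
          "userPassword": "secret",
          "query": "SELECT * FROM client"
        }
      },
      "transformers": [
        { "vertex": { "class": "Client" } }
      ],
      "loader": {
        "orientdb": {
          "dbURL": "plocal:/tmp/databases/crm",
          "dbType": "graph"
        }
      }
    }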
So you can still keep all the legacy applications on top of Oracle, synchronize the database into OrientDB, and build the new applications, maybe analytics, maybe a new product, on top of OrientDB. Maybe after a while you decide, okay, you know what, we can port this legacy application to OrientDB too. So if you need continuous synchronization you need the Enterprise Edition, but for a one-time migration you can use the trial and then keep working with OrientDB. Besides the ETL, we have the JDBC driver, so you can connect any tool that speaks JDBC, like Pentaho. We also integrate with Spark if you want to do computation. And we have an importer from Neo4j: if you are using Neo4j, there is a tool that imports the database into OrientDB. Okay, I guess a few of you know db-engines.com; it's a website with a ranking of databases. It's not a ranking of the best databases, just of the most popular ones, right? We appear in several positions there; since there is a graph category, you can see we are the second graph database on the market today after Neo4j, the sixth document database, we appear among the key-value stores, and in the overall ranking we are 42nd. You can even see Microsoft Access in there, because it still has a lot of web pages mentioning it, so it counts as still popular. Anyway, this is our ranking, and the position keeps improving, so OrientDB is becoming really popular. We've also been covered in the technology press, by InfoWorld and others. If you are interested in getting started, we have free training: it's a video course, it's free, go to orientdb.com and start it. This course has already been taken by almost 18,000 people to date; maybe we reached 18,000 while this slide was being made. So if you want to learn OrientDB, go to the getting-started section and take the video course: it's a couple of hours, maybe. Of course it's a basic course, but you get an overview of all the features of OrientDB and you can start working with it. For specific bindings, like particular language drivers, we don't have a video course, but if your company is interested we can provide training, or you can join one of the classes we run every month. As a company we provide 24/7 support, so if you want to go to production with support, we can help. OrientDB is a relatively new technology, even though it led the multi-model wave and now most of the popular databases announce themselves as multi-model. Who is using OrientDB in production today? Just a few names; maybe those of you from Singapore will be familiar with some of these companies. These are all companies that are clients of OrientDB. Since we are very business friendly and you can download it freely, without any registration, sometimes we discover companies that use OrientDB, like, every day. For example, we recently discovered that a statistics tool vendor is using OrientDB embedded, so all their statistical analysis runs on OrientDB under the hood. We can only guess at the real numbers, because many of them are not officially our customers, and with others we have NDAs, so we cannot do much better than this list. We are also growing our partnership program, because it's a new technology, and one of the reasons I'm here is that I'm looking for partners: if you are interested in OrientDB and your company could be a partner, maybe you are a system integrator, you can join the partner program.
In Singapore we have some users; the biggest one is probably NCS, who are using OrientDB for a big use case and are becoming partners, but there is room also for small companies to work with us. The partnership itself is free, so every time we get a client in Singapore and you are certified, we can route the work to you and split the revenue with the partner. So if you are interested, come and get my business card and we can talk about it. Cool. About the cloud: Amazon is actually the most used cloud platform for us, and we have a special relationship with them because we are building elastic scaling for OrientDB. It's based on the Enterprise Edition, and the interesting part is that you can configure, from the Amazon dashboard, rules like: when the OrientDB cluster CPU goes above 70%, create a new server, and when it drops below 40%, destroy a server. So you decide on the Amazon side the rules to scale OrientDB up and down. This should be released maybe one month from now, maybe less, and maybe we will do the same for Microsoft Azure, which is our second platform. When we created OrientDB, we didn't want to create yet another database to throw into the NoSQL arena of more than 300 products; we wanted to create something special that could actually replace a relational database, or several different databases at the same time. So it's not a database only for analytics; you can use it for analytics, that's fine, but OrientDB was born to be an operational database. Okay, questions. Only three T-shirts, sorry, so the best questions get a T-shirt; I have them all in one size, just L, sorry, XL. Questions? Do we support JSON? Yes, we speak JSON natively, so as soon as you can produce JSON you can import it. For graphs we also support formats like GraphSON, and we can import and export them. The next one is a very funny question, about versioning: every record carries a version, because we use multi-version concurrency control, but we don't store the history, so it's up to you to build a versioning model. It's actually quite common with graph databases to build a versioning system in the model itself; there are well-known patterns for that. Which languages do the drivers support? Good question. We have a native driver in Java, of course, but we also have a .NET driver, Node.js, PHP, Python, C# and others, so the most common languages are covered. Some drivers are supported by the company, some are provided by the community, and little by little we are adopting these drivers officially; maybe the next one to become official will be the Python driver. Anyway, you can use OrientDB from many languages; the full list is on the website. Oh, I forgot about the T-shirts. Next question, about foreign keys: for relationships we use pointers, and for data you can enforce something like referential integrity on values, for example a unique index to be sure a value is present only once in the database. But for relationships we don't have foreign keys, we just have pointers. Next question: do you have performance benchmarks, something like TPC, transaction processing benchmarks? There is published research you can find online comparing a graph traversal workload against MySQL: even at a shallow depth of traversal it was already more than 1,000 times faster. Not double: 1,000 times. And the more complex the query, the more dramatic the difference becomes; joining two tables might be comparable, maybe a bit faster, but at deeper traversals the difference is astronomical.
At six or seven levels of traversal depth, for example, the MySQL query took more than 24 hours, so the graph traversal is enormously faster. You can find that research on the internet, look for graph traversal versus MySQL benchmarks, or write to me and I'll send you the link. Good question: how can you scale writes with multi-master? Scaling reads is very easy, because with full replication every server has a copy of the database, so you can query any of the servers, and with N servers you get roughly N times the read throughput. The hard part is scaling the writes. To scale the writes you use sharding, and you combine sharding with replication to get the best performance. For example, if every write has to reach the majority of all the servers, you pay a lot of latency, because majority means waiting for 51% of the servers to respond. But with sharding you can decide to split the data into, say, 100 partitions, and each partition is stored on only 3 servers; then the majority is 2 out of 3, always 2, no matter how many servers you have in total. So by playing with sharding and replication you can scale the writes as well. At the moment it's up to you to decide, when you write the data, which partition it goes to; with 3.1 we plan to provide automatic sharding. About microservices: usually each service has its own black box, its own database, but you can also run several microservices on top of the same OrientDB database, three or four microservices all talking to the same database, and this is much simpler, and faster, than using different databases. A question: what about the golden record concept, like in master data management? When you integrate data from multiple sources, the same entity can appear multiple times, so how do you identify which one is the correct data, how do you form the golden record, for example by defining which column has to be taken from which source? Is that a standard functionality? Okay, now I understand; sorry, at first I thought the question was about the license. No, we don't have that as a built-in feature. We do have a couple of use cases where users merge data between databases, and typically you need rules to decide which field is taken from which source; that is something you build on top. But since you can work schema-less, you can load all the data as it comes and then define the merge procedure afterwards, and later put a schema on top of it. We have a lot of cases where people just ingest data and store it as-is, and after a while they want a schema: with OrientDB you can work schema-less, schema-full, or in hybrid mode. With hybrid mode you can define, say, name, surname and email as fixed properties and leave everything else flexible (see the sketch below).
A golden-record feature as such is something users build on top of that; we don't provide it as a specific feature, but yes, you can build it on top of the hybrid mode. Another question: what is multi-model most suitable for? Most NoSQL databases are nice for a specific use case, for example transactional ones, so what is OrientDB most popular or most suitable for? Multi-model can be used in many different use cases. We have people running OrientDB on Raspberry Pi devices, and installations with many servers; the biggest installations run on a lot of servers, but a classic installation is three or five OrientDB servers. The best use cases are all the use cases of graphs: fraud detection, recommendation systems, social networks. Whenever you have a lot of relationships, graph traversal wins hands down over any relational approach. So, in essence, if I'm thinking of a real-time fraud monitoring system with a huge volume of real-time data, would that be a good fit? Absolutely. For this case, have a look at Nuix: it's an Australian company, but they have a presence here; you can find their video about how they built fraud detection and forensic analysis using OrientDB. It's a complex tool for investigation and forensic analysis, and OrientDB is underneath it. So look at the case studies we have published. Just now you showed Elasticsearch; does this replace Elasticsearch? Partly. For full-text and spatial queries we use Lucene as the engine, the same engine that Elasticsearch and Solr are built on (a small sketch of a full-text index appears after this answer); for everything else we have our own algorithms. But Elasticsearch is much more focused on search, so if what you need is purely a search engine, I suggest you use Elasticsearch. If instead you need graph traversals, I suggest OrientDB, which covers the graph part and also gives you full-text search, and it costs much less. Usually, when you have a lot of complexity, a complex domain with graph elements, documents and full-text search all together, that's where multi-model shines. You mentioned earlier that when you add nodes you get replicas; but if you purposely place the data in different partitions, are you able to traverse across partitions? That's a good question. With sharding, the graph is partitioned across servers, so a traversal may have to hop between servers many times, right? And that can cost a lot. In this case the best approach, and it's actually how distributed graph engines do it, is to batch the traversal. When you want to traverse, say, thousands of edges, you don't hit the remote server once per edge; you group the edges to traverse, and when you reach, say, 100 or 1,000 of them, you create one message and send it to the other server, which then continues with those 1,000 paths.
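Going back to the full-text point from a moment ago, a small sketch of a Lucene-backed index in OrientDB SQL; the class and property names are illustrative, and the query operator shown is the 2.x style:

    -- a full-text index on a hypothetical Hotel.name property, using the Lucene engine
    CREATE CLASS Hotel EXTENDS V
    CREATE PROPERTY Hotel.name STRING
    CREATE INDEX Hotel.name ON Hotel (name) FULLTEXT ENGINE LUCENE

    -- full-text query through that index
    SELECT FROM Hotel WHERE name LUCENE 'singapore*'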
Coming back to the cross-server traversal: batching is just a way to minimize the round trips between servers, and the traversal result is the same. This approach comes from Google; it's called Pregel. Next question: say you have, like, 20 Spark-style workers writing back to OrientDB; is there a transaction log that becomes a bottleneck, how fast would the writing be? Because, in my experience with Neo4j, when multiple workers write to the same Neo4j instance, it locks on every transaction. Yes: with Neo4j you have master-slave replication, so all the writes go through the master and the master is the bottleneck, no matter how many servers you have. With OrientDB every server is a master; if you partition the writes in the right way, every node can write and replicate without conflicting with the other servers. Do you have a mode where you can turn off the log, some kind of batch mode, so that you can write quickly? Yes, we have SQL batch: you can do BEGIN, a lot of statements, and COMMIT, and this is much more efficient (a small sketch below). How does OrientDB behave in offline mode? Some installations cannot connect to the outside world, or you don't want to connect to the public cloud, so you want OrientDB to be self-contained, completely offline from the internet. There is a feature request about proper disconnected operation; if you go on GitHub you can find it, it's quite popular, and we don't officially support that feature yet. But if you detach one server, if you isolate it, you can still work with the other servers, and as soon as the server rejoins the network it receives the delta of what happened in the meantime. So in practice you can detach and re-attach servers and keep working with the database; the servers re-align when they see each other again. But if you want a consistent database, say you have five servers, right, and one server is isolated: a client connected only to that isolated server can never write, because it can never reach the quorum, which is three out of five, so it can only read the database. That way consistency is not affected. If you relax the quorum, writing on the isolated server actually works, and in case of conflicts we have conflict resolution with default strategies, and you can define your own. So it's not officially supported as a feature, but in practice it works. This space looks to me like a new space, I'm not familiar with it: who are your main competitors, and what does it take for you to play in this market? Yeah. On the graph side, of course, it's Neo4j, the most popular graph database. Neo4j is dual licensed, GPL and commercial, so a lot of people think Neo4j is free, but if you use it for a commercial product it's not, and it's very expensive. Really expensive. Our Community Edition is free for any use, including commercial, so first of all we have a different license. Second, even the commercial edition costs much less. And then, with OrientDB you have different features, like multi-master replication and richer types. So this is from the graph perspective.
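As an aside, a minimal sketch of the SQL batch mentioned a moment ago; the class names are illustrative:

    BEGIN
    LET p = CREATE VERTEX Person SET name = 'Luca'
    LET c = CREATE VERTEX City   SET name = 'Singapore'
    CREATE EDGE Visited FROM $p TO $c SET year = 2016
    COMMIT RETRY 10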
On the document database side, we have largely the same features, plus things like real relationships between documents, which for a document database is a new capability. And in general, maybe our main competitor is actually the relational database, because a large part of our users come from relational databases. By the way, someone asked about the project and the company. I started OrientDB as an open source project, and now there is a company behind it; the company is based in the UK. There is a team of people that works for the company, and we have more than 100 contributors to the project in the open source community. We are very proud of that: if you go to GitHub you can see the contributors ordered by commits, and it's very easy to contribute. You just fork the project, follow the guidelines, create your fix or feature and send a pull request. We are very open to your contributions. Next question: you mentioned that for each record there is a pointer, right? Yes. Say that Luca goes to Singapore: that edge itself already has two pointers, correct? So when you scale to billions of records, or even more, the pointer values keep increasing; is that sequential? Yes, it's sequentially increasing, and we don't recycle positions. But it's not a real physical position, it's a logical position: we have an internal mapping to the physical position, so when you create and delete records, unlike Neo4j, we can defragment the database and remap the internal pointers. It's a physical pointer with a level of indirection, if you want to look at it that way. So we can scale up: if you keep creating and deleting data you don't end up with a database full of holes, if that was your question. We scale pretty well with billions of records, and inserts are very fast; loading a million vertices takes very little time. Another question: can this be used to create a lazy computation graph? When you create graphs, the nodes and edges are kind of static; could you use this infrastructure for lazy computation, where a certain computation is expensive, you only want to compute it once, and only when the inputs have changed? Yeah, good question. It's really easy, because of the nature of the graph: you can create edges between vertices at runtime, maybe create additional edges while you are computing the result. For example, this is used when you want to materialize a path from A to B that normally takes several steps: you create a direct connection between those vertices so next time you don't recompute it. So you can do that in real time, absolutely. And it's easy: you can do it with a plain CREATE EDGE command, you can do it with a small JavaScript function, or you can use the API. We also have triggers: the best way to do it is what we call hooks.
So, you can hook into the database and get an event, for example when a vertex is updated, and then update derived data in a simple way. A classic example is a time tree as part of the metagraph: a year vertex with 12 months, each month with 30 or 31 days, each day with 24 hours, so you can walk down the graph of time. In the hook, as soon as you update the vertex, we catch the event and you update the tree. So you can do that, and it's much easier than in a relational database. I'm not familiar with graph databases, but any graph can have cycles; how do you handle that? Good question. When you traverse, you could in principle go around in circles, but we track the path, so we never visit the same element twice on the same path: each element is traversed just once, and when we detect that we have already been through a loop, we just move on to the next path. So cycles are handled for you. I'm quite familiar with relational databases; I just want to know whether I can see an execution plan, like I would in an RDBMS. Can you repeat the question? Whether you can see the execution plan, like in an RDBMS? Yes, absolutely. You can run EXPLAIN SELECT whatever, and look at the plan it returns; it's pretty much the same idea. In 2.2 it's not very rich, but in 3.0 we have rewritten the entire SQL executor, so you can see the full optimizer plan, step by step, and it's much closer to what relational databases give you. Thank you very much. If you have any other questions, please leave them in the comments. Thank you very much.