So, we start with the safe harbor statement, which says that Oracle can change any dates and things such as the delivery of products and features, and that it should not be relied on in contracts, sales, and so on. With that out of the way: in this session I will talk about the development of JSON support in the recent MySQL versions, and we'll cover the new functions added in MySQL's latest version: aggregate functions and a new table function.

So, we added dedicated JSON support, not just storing JSON as text, in 5.7. Two storage options were added there: a text form and a binary form. For text there is a set of functions, about 20 functions, for dealing with JSON data. The text is stored as before, but now, having JSON functions that can operate on it, we are able to deal with it effectively. The JSON functions can query data and validate JSON data, so you can use the server itself to handle JSON data in the form of text.

To achieve better performance and to allow future improvements in handling JSON data, we added the binary form, which is effectively a kind of pre-parsed text data: it has lengths available, keys sorted for faster lookups, offsets and so on, to let the server query the data faster. Besides that, the difference is that each time the server needs text data, it has to validate it first, and that takes time. Binary data is parsed and pre-validated on insert, so this phase is skipped during reading and processing; it's somewhat faster just because of that.

So, in 8.0 we added JSON aggregate functions, which allow you to aggregate data into arrays and into objects. They behave similarly to GROUP_CONCAT, and they're quite obvious to use: JSON_ARRAYAGG takes a single value and creates an array out of those values within a single group, while JSON_OBJECTAGG takes a key and a value and creates an object out of the group. It's easier to show by example.

So, let's consider an entity-attribute-value scenario. Here we created three tables: products, the attributes a product can have, and the values of those attributes, and filled them with data. There are three products: a lamp, a laptop and a grill. Those objects can have attributes such as color, style, bulb type, usage, CPU and so on, and values like black, green, plastic, classic and so forth.

First we create a view that returns the aggregated values of the attributes. It's a join between values and attributes, grouped by product ID. In this query we use JSON_ARRAYAGG to create an array of values, so each attribute gets a JSON array that holds all the values of that attribute. It returns something like this: the product ID, the attribute ID, the name of the attribute, and the array. The first one has two values and the others only one value each, but that's just a matter of the actual data.

The next query actually creates the JSON object describing the product itself. It's wrapped in JSON_PRETTY; that's also a new function, added in the new version of the server, which makes the JSON data created by those functions human readable: it inserts new lines and spaces, indents nested objects and so on, so the result is understandable, not just a single line that you have to somehow struggle to parse. JSON_OBJECT creates the object itself, which is returned by the query, and it holds the product ID, the name, manufacturer, price and specification.
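To make this concrete, here is a minimal sketch of the two queries described above; the table and column names are my assumptions for illustration, not necessarily the ones on the slides:

    -- Hypothetical EAV schema: product, attribute, attr_value tables.
    -- Step 1: aggregate all values of each attribute into a JSON array.
    CREATE VIEW attr_vals AS
    SELECT a.product_id, a.id AS attribute_id, a.name,
           JSON_ARRAYAGG(v.value) AS vals        -- e.g. ["black", "green"]
    FROM attribute a
    JOIN attr_value v ON v.attribute_id = a.id
    GROUP BY a.product_id, a.id, a.name;

    -- Step 2: fold each product's attributes into one object with
    -- JSON_OBJECTAGG, wrap the document with JSON_OBJECT, pretty-print it.
    SELECT p.id,
           JSON_PRETTY(JSON_OBJECT(
             'name',          p.name,
             'manufacturer',  p.manufacturer,
             'price',         p.price,
             'specification', JSON_OBJECTAGG(av.name, av.vals))) AS doc
    FROM product p
    JOIN attr_vals av ON av.product_id = p.id
    GROUP BY p.id, p.name, p.manufacturer, p.price;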
And the specification object is aggregated out of the attributes belonging to the product. So, we group by product ID, and all the attributes are gathered into a single object, which is inserted as the specification of the product. This is the result: we see that all those attribute values returned by the previous query are inserted here, in the specification, and it actually looks like JSON data. So, we initially had three relational tables, and this is the way to build a JSON representation out of them: the aggregates do the gathering, and JSON_OBJECT just creates the plain object. This is how they are meant to be used. Any questions so far?

Question: what is the JSON_OBJECTAGG output?

JSON_OBJECTAGG, here you see it's used for the specification field. Yes, and it creates this part, this object. So, these are the five rows returned inside the group by product ID, and over that group it accumulates the key name and the value into a single object. Yes, and it creates the object.

So, another function, which serves the opposite purpose, converting JSON data into relational data, is the JSON_TABLE function. It's a new class of function, a table function, which isn't called in a WHERE clause, in the select list or in some other condition; rather, it's called from the FROM clause. And it returns a table, which can be used as a regular table in any scenario where a regular table can be used: in a view, in a derived table, in a SELECT, in a subquery and so on, and in stored procedures; there is no specific limitation there.

As a source it can use any kind of JSON data. It can use static data: for example, you have a string, you provide it, and JSON_TABLE will convert it into a table. Or it can use a field from another table. With this function we introduce the lateral join, which allows the function to refer to tables that are specified in the FROM clause prior to the JSON_TABLE. For example, you could write FROM with one single table joined to JSON_TABLE, and then JSON_TABLE can take a field from the first table as its argument, to read the actual JSON data from it. That field can be either text or binary, but the same limitation applies to text data: it will be parsed and validated first, so it will be somewhat slower than binary data.

It supports extraction of nested data, so it's not just taking the key values of the top-level object: if the document has nested data, nested arrays, nested objects with deep nesting, it supports up to 16 levels of nesting, so it can handle quite complicated nesting of the data. For each column you specify a JSON path expression that says where its data is taken from.
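As a minimal sketch of the calling convention (the documents and names here are assumptions for illustration): JSON_TABLE lives in the FROM clause and can take either a literal or a column of a table listed before it:

    -- Static JSON text as the source:
    SELECT jt.a
    FROM JSON_TABLE('[{"a": 1}, {"a": 2}]',
                    '$[*]' COLUMNS (a INT PATH '$.a')) AS jt;

    -- A column of a preceding table as the source, i.e. the lateral-style
    -- reference mentioned above; product_docs.doc is a JSON column:
    SELECT p.id, jt.price
    FROM product_docs AS p,
         JSON_TABLE(p.doc,
                    '$' COLUMNS (price DECIMAL(8,2) PATH '$.price')) AS jt;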
We use the same JSON path expressions that are used by all the other functions, so in this sense it's unified with the other JSON handling functions that we introduced in 5.7; there is nothing new there, it should already be familiar. And this function can produce both SQL scalar and JSON data, in the sense that the result columns can be either of JSON type or of any scalar SQL type, so you can process the output further either way: you can extract one piece of JSON data from another and continue processing with the JSON functions, or, vice versa, you can convert JSON data to relational SQL types and process the extracted field data as relational data.

Question: on nested data, when we convert nested JSON data, suppose I have just one level of nesting, how do I visualize it as a relational output? I will explain with an example a little further on.

So, the syntax is somewhat simple, but since it's recursive, it can end up as quite a tricky expression. It takes a single expression as the source, which can be a text literal, a variable, a field referring to another table, or whatever else returns JSON data. You could, for example, take one field, construct something new from it using the JSON_SET, JSON_REPLACE or JSON_INSERT functions, and provide that output to JSON_TABLE. So the source can be anything that, basically, returns JSON. The next part is a string path expression and the COLUMNS clause. The COLUMNS clause defines what the result table will consist of; basically, it's a list of columns. The string path expression describes which data will be used as the source for the COLUMNS clause: you don't necessarily take the whole document and hand it to the COLUMNS clause to be parsed and assigned to columns; rather, you can extract only a part of it and process that.

There can be four types of columns in total. The first is FOR ORDINALITY, and it's used to count, to enumerate, the rows produced by its COLUMNS clause; each row gets its own number, which is increased as rows are produced. The next one is the regular extracting column, which is used for extracting the data. It has the name of the column, the type, which will be the result type of the column, a path expression, and two clauses, ON ERROR and ON EMPTY, to handle data that doesn't fit into the schema you are trying to enforce on the JSON.

There is one important distinction. When you use an SQL type as the type of a column, it's considered to be a scalar. So if it's a string, it will be treated as a scalar string, and you can't insert an array into it: if the path expression points to an array, it can't be saved into an integer, a string or whatever, because an SQL type is considered to be a scalar and can hold only that type. If you want to store an object or an array, you have to use the JSON type here; it can handle basically any kind of data the path can point to.

The ON ERROR and ON EMPTY clauses are used to define what to do when the data doesn't fit the schema. ON ERROR is triggered when you try to save, for example, an object into an integer, or when there is an integer column and you try to save a string that can't be converted into an integer; all these kinds of things give us errors, and in that case the ON ERROR clause is triggered.
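Here is a sketch of one COLUMNS clause showing the column kinds together, on a toy document of my own (the handler behaviors are explained next):

    SELECT jt.*
    FROM JSON_TABLE(
      '{"title": "Lamp", "color": ["black", "green"]}',
      '$' COLUMNS (
        rn     FOR ORDINALITY,               -- row counter for this clause
        title  VARCHAR(30)  PATH '$.title',  -- scalar SQL type
        colors JSON         PATH '$.color',  -- an array needs the JSON type
        has_c  INT EXISTS   PATH '$.color',  -- 1 or 0: does the path exist?
        price  DECIMAL(8,2) PATH '$.price'
               DEFAULT '0' ON EMPTY          -- substitute when path is absent
               ERROR ON ERROR                -- raise instead of returning NULL
      )) AS jt;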
For both clauses you can return NULL, so NULL appears in that column, which is the default behavior; or you can throw an error; or you can use a DEFAULT value, a particular value that you would like to use in that case. ON EMPTY is similar, but the difference is that it defines what to do when there is no value pointed to by the path expression at all.

The next type of column is EXISTS; it basically returns one or zero depending on whether any data exists under the path at all. And the last one, the trickiest one, is NESTED PATH, which takes a string path and its own COLUMNS clause; it allows you to unnest, for example, nested arrays into rows: each element of the array is processed as a separate data source and can produce its own set of columns.

It's easier to illustrate with an example. We take the JSON product as input; it's the output of the query with the JSON aggregate functions, so the document looks like this, and we process it back into relational data using the JSON_TABLE function. We call the function, take the data from the product column of this table, and define the source as the root document, so we use the root document as the data source for the columns. The first column takes the value of the name key, the next ones take title and price, and the color column takes the values of the elements of the color array: the NESTED PATH part takes the color array, and the color column itself takes the root of that nested expression. So this array of two elements is converted into two rows.

Here you can see that, for as long as a NESTED PATH clause produces rows, the values of the columns prior to it are duplicated. So it actually behaves somewhat like an outer join: NESTED PATH is a way to join the columns it produces to the columns of the top clause. And it can go deeper and deeper; each NESTED PATH adds another level of this outer-join-like behavior.

In the next example this is seen more clearly. I added another view and another table, which returns this kind of data: it likewise returns the name and the price for each product, as in the previous examples, but instead of colors and attributes it returns a store object, and each object shows, say, that there are five products in the shop and 20 products in the remote store; the next one has only two in the shop, and for the last one there is nothing in store at all.

The query that uses this data looks similar, but it has two NESTED PATH clauses, in order to show how many products each store has, each in its own column. Here, this path expression says that we take, from the store object, the value of the shops object inside it, and here, the remote store inside the store object; both columns take the root value of that object, so it isn't unnested further.

And this is the result. It shows that the first NESTED PATH produces only one row, and for the second NESTED PATH there is NULL. The reason is that when there are two NESTED PATH clauses one after another, those clauses are called sibling clauses, and only one such sibling NESTED PATH produces records at a time: when one finishes, the next sibling starts producing its rows, so it goes from one to another. The result is that the total number of rows produced by the NESTED PATH clauses is the sum of the rows of each clause, not the product; it's unlike a join, which would produce the product. Here it's one by one.
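A minimal sketch of both behaviors, again with toy inline documents that only mirror the slides:

    -- Unnesting one array: each element becomes a row, and the columns
    -- of the outer clause are duplicated for every produced row.
    SELECT jt.*
    FROM JSON_TABLE(
      '{"title": "Lamp", "color": ["black", "green"]}',
      '$' COLUMNS (
        title VARCHAR(30) PATH '$.title',
        NESTED PATH '$.color[*]' COLUMNS (color VARCHAR(30) PATH '$')
      )) AS jt;
    -- Result, 2 rows: ('Lamp', 'black') and ('Lamp', 'green').

    -- Two sibling NESTED PATH clauses: they produce rows one at a time,
    -- the idle sibling's column staying NULL, so the row count is the
    -- sum (1 + 1 = 2) of the clauses, not their product.
    SELECT jt.*
    FROM JSON_TABLE(
      '{"title": "Lamp", "store": {"shops": [5], "remote": [20]}}',
      '$' COLUMNS (
        title VARCHAR(30) PATH '$.title',
        NESTED PATH '$.store.shops[*]'  COLUMNS (in_shop   INT PATH '$'),
        NESTED PATH '$.store.remote[*]' COLUMNS (in_remote INT PATH '$')
      )) AS jt;
    -- Result, 2 rows: ('Lamp', 5, NULL) and ('Lamp', NULL, 20).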
So here the first NESTED PATH produced one row with the value 5, and the second column is NULL; if there were a third NESTED PATH clause, it would also be set to NULL. Then the data switches to the next sibling: the first column is set to NULL and the second one produces its only row, 20. It's the same for the third record. And for the fourth record, the fourth product, the data is absent for both nested paths, so NULLs were produced, but the record itself is still there.

Here it's also visible how the ordinality column works: it enumerates the rows within its own COLUMNS clause. The topmost clause produced only one record for this product with price 26, so both rows are numbered with number one; those two rows were produced by different NESTED PATH clauses, so they are not enumerated further there. The top clause produced only three rows in total, and the second and the third are numbered within their own clauses. This is how ordinality is used to identify rows within a COLUMNS clause.

Well, that's it with JSON_TABLE, in some 15-20 minutes. Any questions?

Question: can I use these aggregate and JSON_TABLE functions in a trigger?

Yes, as far as I understand they can be used, because they don't insert or update anything, so it should be safe, I guess. It's like a derived table; I don't remember whether derived tables are allowed inside a trigger, but I think they are, so this should similarly be usable in a trigger, with the usual remaining limitations. I think they can be used. Any more questions?

Question: a general question: is the code that parses the JSON written by MySQL, or do you use a third-party parser?

Basically, when you write a query and you have some JSON data that you want processed with JSON_TABLE, for example, you somehow get that JSON data into the statement. Either you use a text representation inside the query, so basically you write SELECT * FROM JSON_TABLE('...json text...', ...) and so forth, and then JSON_TABLE will parse that text and convert it into the internal representation it uses to produce the table. Or you insert it first: you create a table with a JSON column, a binary JSON column, you insert the '...json...' literal, the server processes it, parses it, and stores it in binary form in the table, and then you write SELECT * FROM that table joined with JSON_TABLE over the field of that table, and it will use the binary representation. So basically you can't use anything outside of the server to parse it; either way the server itself parses and validates the data.

So, any other questions? Then this is it. Thank you for the meeting.