We'd like to thank you for joining this DATAVERSITY webinar, Hybrid Data Models: Relational Plus JSON, sponsored today by MariaDB. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A in the bottom right-hand corner of your screen, or if you'd like to tweet, we encourage you to share highlights or questions on Twitter using the hashtag #dataversity. As always, we will send a follow-up email within two business days containing links to the slides and the recording of this session, and any additional information requested throughout the webinar. Now let me introduce our speaker for today, Shane Johnson. Shane is the Senior Director of Product Marketing at MariaDB, and prior to MariaDB, Shane led product and technical marketing at Couchbase. Before that, he performed technical roles in development, architecture, and evangelism at Red Hat, specializing in Java and distributed systems. And with that, I'm going to turn the floor over to Shane. So hello and welcome. Thank you, Shannon. Can you hear me clearly? Yeah, you're sounding good. Perfect. All right, we'll go ahead and get started. The topic for today, relational and JSON, is something I've been around for quite a few years now. But before we get into the examples, I have a few slides to share with you, just to help you understand my point of view on this and lay down a little bit of context. Personally speaking, this statement couldn't be truer, and I pulled this one from myself: I absolutely love breakfast. It is my favorite meal: pancakes, biscuits and gravy, bacon, you name it. I would order everything off the menu if I could. Unfortunately, I'm also not a morning person. I get up just in time to shower, jump in the car, and head to the office, which presents a little bit of a conundrum for someone like me.
And the reason I put that out there is that the same could be true when we're talking about relational and JSON. You could have the perspective, "I'm not a NoSQL person, but I like JSON." You know there are benefits to both, and historically you've been stuck in an either/or situation: I'm going to use a relational database with very structured data, or I'm going to use a NoSQL database with semi-structured data. But ultimately, I'm not sure it's a choice you should have to make. And thinking about this a little further, if you're watching this, you probably know where I'm going. Yes: fried chicken and waffles. Fantastic. If I want fried chicken, I'm going to go to KFC. If I want waffles, I'm going to go to IHOP. That is traditionally how I approached this situation, until some number of years back (I forget how long ago it was) when I went out to lunch with my family, and lo and behold, there on the menu I see fried chicken and waffles. And my mind exploded. I just didn't think that was possible. I assumed you had to specialize in one or the other, and wherever you went, you could pick one or the other. But like so many other great things out there, people realized that you really can have the best of both worlds if you approach it right. And it was amazing. Now everywhere I go, I check to see if they have fried chicken and waffles on the menu. I say this because it's how I've started to think about relational and JSON: it shouldn't be a choice. There are great things about these two models, and you should be able to mix and match them as you see fit. And then my final example, pulling from my background because I love all things science fiction, is Serenity. That was a great movie, and Firefly, if you saw the TV show before that.
But it's particularly interesting because you wouldn't normally assume that you could mix science fiction with elements of a Western, pull it off, and have it actually be this amazing, great thing. Plenty more examples where that came from: if I think back to Alien, they mixed science fiction and horror with amazing success. So regardless of what our assumptions might be or the choices we had to make in the past, throughout this webinar I'd like to reinforce that you can set those notions aside for a minute and have the confidence that you can have the best of relational and the best of JSON and be successful. And that's what we're getting at. As we look through the examples in this webinar, a modern data model almost certainly should have elements of both structured and semi-structured data. Yes, you could take it all the way to one extreme and be completely structured. You could go all the way to the other end of the spectrum and be completely semi-structured. But I think compromise and balance are the key words here, and what we should really be striving for is the right amount of each, with good reasons for each choice. From a relational perspective, these databases have been around for many, many years. For decades we have come to expect data integrity, joins, multi-write transactions, and of course reliability. We choose relational databases because we trust them with our data. We are confident we can run mission-critical applications on these databases and they're going to be reliable. They're the rock, the foundation. On the other hand, JSON has become the de facto standard for web and mobile applications, and for good reason: the flexibility is great. I would also argue there's an element of simplicity. I can look at a JSON document and it's very easy to read, very easy to understand, self-describing. And of course there's that ubiquity.
So I don't think JSON is something that those of us who come from the relational world can ignore anymore. That point has probably already passed. We have to recognize that there is a role for JSON even in relational databases. When I start talking about terms like structured and semi-structured, there are certainly a variety of definitions out there, and people are going to have different ways of describing them. Here is how I think about it. Structured data is externally described, and by that I mean this notion of a separate schema. That schema is, in a sense, a separate physical thing. It tells you what your data looks like. Semi-structured data, for me, is self-described. Whether you're looking at a JSON document or an XML document, that document not only has the data, it has the metadata, and it tells you what the structure is just by looking at it. So as we go through this, that's how I think about it conceptually, and hopefully that's helpful to you as well. Taking another step, we're diving into our relational database. Here's a typical row. For all of these examples, I had in mind an online science fiction store: sci-fi movies, sci-fi books, sci-fi games. That's what you're going to see throughout these examples, starting with a movie. All of our products are going to have a name, a format, a price. But we've come to the conclusion that the video attributes are probably going to evolve over time, so we made a conscious decision to store those as JSON. So when I describe this model as being both relational and semi-structured, this is it: those first three columns are structured, and we have a schema that defines that data. That last column is our semi-structured data; we're going to have to look into the data itself to understand what it looks like. Or another take on it, maybe a little more practical:
In a sense, doesn't that schema really define all of those columns? Of course. But if you're going to jump into that video data, you're going to have to look at the data to understand its structure. As we can see here, even at the very beginning, we can put those things together. So we'll start through our examples. This first section is really the basics of working with JSON documents in MariaDB, largely around creating those documents, and once you've created them, how do you query them? How do you get that data back out? First and foremost, we're putting together the table we just showed you. We have our traditional id, type, name, format, and price columns. And then we created another one called attr. In retrospect, I probably should have named it "attributes"; for the rest of this webinar, I'll just say attributes. We define it as a JSON column that's NOT NULL. JSON is really an alias that does two things: it stores the document in a text column (that's the actual type), and it applies a check constraint that validates the JSON document. So under the hood, we're storing it as text, and we're validating that it's well-formed JSON. You can have confidence from the beginning that you're not storing malformed or improper JSON; it is checking that for you. For inserting, there's nothing unusual here. It's just a string. You pop that JSON document as a string into your INSERT statement, and you're good to go. There are two examples here only to show that this products table, because of the flexibility provided by the JSON column, can store both movies and books. They're going to have different attributes, so you might normally think they should go in separate tables, but it's not required.
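To make the setup concrete, here is a sketch of how the table and inserts described above might look. The column names and sample documents are illustrative, not necessarily the exact ones from the slides:

```sql
-- JSON is an alias for a text column plus a JSON validity check
CREATE TABLE products (
  id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  type   CHAR(1)      NOT NULL,   -- 'M' = movie, 'B' = book
  name   VARCHAR(100) NOT NULL,
  format VARCHAR(20)  NOT NULL,
  price  DECIMAL(6,2) NOT NULL,
  attr   JSON         NOT NULL
);

-- A movie and a book can live in the same table,
-- each with its own set of attributes in the JSON column
INSERT INTO products (type, name, format, price, attr) VALUES
  ('M', 'Aliens', 'Blu-ray', 13.99,
   '{"video": {"resolution": "1080p", "aspectRatio": "1.85:1"},
     "cuts": ["Theatrical", "Special Edition"],
     "audio": ["DTS-HD", "Dolby Surround"]}'),
  ('B', 'Foundation', 'Paperback', 7.99,
   '{"author": "Isaac Asimov", "page_count": 296}');
```

The later query examples in this transcript assume a table along these lines.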
And while my example is fairly limited in scope, you can imagine that if you were a big-box retailer, your online catalog could have hundreds or thousands of different types of products, and those products could have many, many different types of attributes. So being able to pull them together is pretty nice. Our first function comes from getting the data out. I didn't mention this earlier: when we talk about querying JSON in MariaDB, we're using SQL functions. You're still going to use the SQL that you know and love; we're just sprinkling a few extra functions on there that are specific to working with JSON. That first one is JSON_VALUE. It takes two parameters. The first one says: tell me which column this JSON document is stored in. The second parameter says: tell me the path to the field you want to get. And as we look at a few more functions, keep in mind that JSON_VALUE is explicitly designed to return a scalar. It will return a boolean, a string, a number, those types of primitives. So in this case, because I just want that aspect ratio and I want to ignore the rest of the JSON document, I'll use JSON_VALUE. The other question I sometimes get is: what happens if that element doesn't exist? In that case, we just return a NULL. Nothing too fancy going on here. When I've given this presentation before, the initial question is: well, you have JSON documents, and they're going to differ a little from each other. If I'm querying an entire table, what happens if I'm looking for an element or field that doesn't exist in some of those JSON documents? Completely okay: it returns a NULL. So in this case, I only have a couple of products in my table. One's a movie, one's a book, and they clearly have different attributes. Books, of course, do not have an aspect ratio. But that doesn't stop me.
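A minimal sketch of the JSON_VALUE query just described, assuming the illustrative products table with an attr JSON column and a $.video.aspectRatio field:

```sql
-- JSON_VALUE returns the scalar at the given path;
-- rows whose documents lack the path simply return NULL
SELECT name,
       JSON_VALUE(attr, '$.video.aspectRatio') AS aspect_ratio
FROM   products;
-- The book row comes back with aspect_ratio = NULL
```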
The next one. Part of what we want to get at here is that these JSON functions aren't limited to the SELECT clause. Anywhere you can use SQL functions, you can use a JSON function. So we toss in another one here: JSON_CONTAINS_PATH. What we're trying to do in this particular example is say: I want to select all movies that have an aspect ratio field. I'm not necessarily concerned with what the value is, although I do have JSON_VALUE in there to get it. What I'm really doing is making sure it at least has the field. And so it trims my results there a little bit. And of course I check for one, because one means the path is there; zero means the element is not there. Then I also put in another slight variation. In this case, it's not so much that I want to make sure it has that aspect ratio field. What I really want to do is see if, in that audio array, there's an element with "DTS-HD" in it. So I'm digging a little deeper into my JSON document. I'm not checking so much for the existence of a field, but whether somewhere inside that field, whether it's an array or an object, there is that exact text, "DTS-HD". And if it has it, then go ahead and return all those results. Where things start to change a little bit is JSON_QUERY. As mentioned before, JSON_VALUE returns a scalar; JSON_QUERY returns arrays or objects. In this particular example, I want the audio array. That array has two elements: "DTS-HD" and "Dolby Surround". And that's why, if I flip back for a minute, I said: give me something that has "DTS-HD", and of course it returned Aliens, because it has "DTS-HD" as one of its audio options. So the choice between JSON_QUERY and JSON_VALUE just depends on what you're trying to pull out of that JSON document. Another way to go about it is to check for, or pull out, a specific element within that array.
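The two filters and the array query just described might be sketched like this, again against the assumed products table:

```sql
-- Only rows that actually have an aspect ratio field
SELECT name,
       JSON_VALUE(attr, '$.video.aspectRatio') AS aspect_ratio
FROM   products
WHERE  JSON_CONTAINS_PATH(attr, 'one', '$.video.aspectRatio') = 1;

-- Digging deeper: rows whose audio array contains
-- the exact text "DTS-HD" somewhere inside it
SELECT name
FROM   products
WHERE  JSON_CONTAINS(attr, '"DTS-HD"', '$.audio') = 1;

-- JSON_QUERY returns arrays and objects rather than scalars
SELECT name,
       JSON_QUERY(attr, '$.audio') AS audio
FROM   products;
```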
And we can use JSON_VALUE for that as well. Instead of just going to the audio element, we'll say: give me the first element in that audio array, and we'll call that our default audio. So for Aliens, "DTS-HD" is the default audio format. Those paths can point specifically to an element, whether it is an array, a string, or an object, and in the case of arrays, you can certainly use the index position to work with a specific element in that array. Or, conversely, we can simply point to an element like the video and return an object. When working with JSON documents, you can have objects contained within objects contained within objects, and the same goes for arrays. In this particular case, for the audio element, we chose an array and simply listed the audio formats. Video is an object that's a little more complex; it's a collection of attributes, in this case both our resolution and our aspect ratio. The other thing I wanted to note is that a lot of these examples assume you have one JSON column and you're pulling out maybe one object or one field. But you can certainly separate them, tease them apart. We have a big JSON document here, but we're going to return the name of this movie, an array for the audio, and an object for the video. You can do that as well. Of course, we can use these functions in our WHERE clause too. We had a couple of examples before where we said: does it have a path to a particular field? Does a field contain a particular value somewhere inside of it? Or, in this particular case, you can drill down into that video object, down into the resolution field, make sure it matches 1080p, and return your results. So just to remind you, these functions can be used anywhere within your SQL query. Most of my examples are in the SELECT clause, but you can use them anywhere. The next question I often get is around indexing.
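Before moving on to indexing, the queries just described, teasing scalars, arrays, and objects apart and filtering on a nested value, might look like this sketch:

```sql
-- Scalars, arrays, and objects pulled out side by side
SELECT name,
       JSON_VALUE(attr, '$.audio[0]') AS default_audio, -- first array element
       JSON_QUERY(attr, '$.audio')    AS audio,         -- whole array
       JSON_QUERY(attr, '$.video')    AS video          -- whole object
FROM   products
WHERE  JSON_VALUE(attr, '$.video.resolution') = '1080p';
```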
With very structured data, we're probably going to create indexes on different columns, and that, of course, gives us good query performance. What are we going to do when it comes to JSON, particularly in the examples I just went through, where we're looking for the aspect ratio or the resolution? We can do it; it's just a two-step process. One of the things we have in MariaDB is what we call virtual columns, sometimes called generated columns. Our table has a predefined schema, and we can add a new virtual column for the resolution only. So we would end up with name, price, format, attributes, and then this fifth virtual one for resolution. We call it virtual because we're not going to store it with the row on disk. It could be persistent if we thought that was the right approach, or we can simply make it virtual. So we've added this column, and if we do a little EXPLAIN on a simple query that uses that resolution, we'll see, if you look in the last column, that possible_keys was NULL. We don't have any index that's going to help us with this query. But if we then create an index on the virtual column and try that query again, it'll use the new index we just created. The short of it for indexing is that if you have predefined queries looking for certain elements within that JSON document, you can create virtual columns for them, then create indexes on those virtual columns, and you're going to speed up your query performance. I would use that judiciously. It's certainly not intended for creating a virtual column and index on every single field within your JSON document, particularly if you have a very large JSON document. But in cases where performance is important and the query is common, then yes, there is a way to index just about anything within that JSON document.
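The two-step process above, a virtual column over the document plus an index on it, might be sketched like this; the column and index names are assumptions:

```sql
-- Step 1: a virtual (generated) column computed from the
-- JSON document; VIRTUAL means it is not stored with the row
ALTER TABLE products
  ADD COLUMN resolution VARCHAR(10)
    AS (JSON_VALUE(attr, '$.video.resolution')) VIRTUAL;

-- Step 2: index the virtual column so the optimizer can use it
CREATE INDEX ix_resolution ON products (resolution);

EXPLAIN SELECT name FROM products WHERE resolution = '1080p';
-- possible_keys should now show ix_resolution instead of NULL
```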
So we've talked a little about creating those documents and getting them into the database, and a handful of JSON functions for querying them. That leads us to the next part: how do we go about modifying what's in those JSON documents? There are a couple of ways. One thing we can do is use JSON_INSERT to create a new element. In this case, for movies, once upon a time you just got a single DVD. But now, with TV shows and complete seasons and various bundles, you could have multiple disks. So at some point, as our online sci-fi store owner, we decided we needed to keep track of the number of disks. Not a problem. We just have three parameters: where is the JSON document; second, what is the path where you want to insert this new field; and third, the value of it. So as we insert movies, we can insert the disks as well. It's slightly different if we want to insert arrays. JSON_INSERT, much like JSON_VALUE, is for working with scalars; JSON_ARRAY is for inserting arrays, in the same way that we use JSON_QUERY for returning arrays and objects. In this case, we're going to nest them together. On the third line there is our JSON_ARRAY function. Nothing too out of the ordinary here: just specify the elements of the array, and it'll convert them into a JSON array. Then we wrap that in JSON_INSERT to say: take this array we just created and put it into an element called languages. And that's how we'll update existing JSON documents to insert this new languages array. On the other side, you see the matching JSON_QUERY now pointing to this new languages array and returning its value. A little bit different: once we have an array, we can certainly add elements to that array.
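The two modifications just described, inserting a scalar and inserting a nested array, might look roughly like this; the field names (disks, languages) follow the examples above:

```sql
-- Add a scalar field to existing documents
UPDATE products
SET    attr = JSON_INSERT(attr, '$.disks', 1)
WHERE  type = 'M';

-- Build an array with JSON_ARRAY, then insert it
-- as a new element called languages
UPDATE products
SET    attr = JSON_INSERT(attr, '$.languages',
                JSON_ARRAY('English', 'French'))
WHERE  type = 'M';

-- The matching query for the new array
SELECT JSON_QUERY(attr, '$.languages')
FROM   products
WHERE  type = 'M';
```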
In this case, I'm going to use JSON_ARRAY_APPEND, with parameters that are probably starting to sound familiar by now: which column has my JSON document, what is the path to the array, and what is the element you want to append to it. There are a few other JSON array functions as well, depending on where you want to insert the element within the array, but in this case I just want to insert it at the end, so I'll use append. On the other side, you'll see JSON_QUERY pointing to the existing languages element, but now with that new Spanish option in there. And for removal, we have JSON_REMOVE, pointing to the column that has our JSON document, with a path that includes an index position. We want to remove the first element in that languages array for this movie. And then if I go and query it, we'll see that English has been removed from the array. So we've talked a little about inserting JSON documents, a handful of functions we can use to query them, and then we transitioned to a few more functions we can use to modify that data, whether it's an object or an array. There's one more way I wanted to look at it, and that is a couple of things you can do with respect to the model. You can take a very big JSON document and use SQL and these functions to turn it into something that resembles rows and columns, deconstructing it a little. You can also take existing relational data, grab those rows and columns, and turn them into JSON documents. So the use of these SQL functions lets you shift back and forth. These first examples are intended to show combining them: if I were developing a new schema today, or starting to extend an existing schema, I can use these functions.
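The append and remove operations above might be sketched as:

```sql
-- Append to the end of the existing languages array
UPDATE products
SET    attr = JSON_ARRAY_APPEND(attr, '$.languages', 'Spanish')
WHERE  type = 'M';

-- Remove the first element (index position 0) of the array
UPDATE products
SET    attr = JSON_REMOVE(attr, '$.languages[0]')
WHERE  type = 'M';
```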
Conversely, if you have a very strong schema in place but you want to introduce JSON, you can also use these functions to convert that data on the fly into a JSON document for the consumer. So we'll talk about that here a little bit. If we look at the structured columns we've been talking about so far, name, format, and price, which we created in our schema early on, we can use JSON_OBJECT to ball them up and return them as a JSON document. So if you have web developers or mobile developers, or maybe someone creating a set of services, they might be querying your relational database while returning JSON to their consumers. Something like this can be very helpful, because they don't have to change the schema; they can use JSON_OBJECT to convert it to JSON right away and let it move up through the layers. One thing I'll note about JSON_OBJECT is that it's really intended for, I would say, a flat JSON document. If you have a table with a whole bunch of columns, this is great at taking a single row and moving all those columns into a JSON document. The other thing we can do is start to merge things a little. In our previous examples, we were pulling different fields and elements out of the JSON document and returning them alongside our traditional columns. Another way to go about that is to use JSON_OBJECT, which still has our name, format, and price, but with JSON_VALUE nested within it to pull a specific field out of that JSON document. So when I think about this: if we were looking at our table in the database, we have these fixed columns, our name, format, and price, and then we have a bigger JSON document with a whole bunch of stuff. We can take one or more of those existing columns, plus one or more parts of that JSON document, and assemble a brand-new document to give to the client, which can be very useful as well.
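The JSON_OBJECT pattern just described, balling up structured columns and cherry-picking one field out of the stored document, might look like this sketch:

```sql
-- Assemble a flat JSON document from fixed columns,
-- with JSON_VALUE nested inside to pull one field
-- out of the stored document as well
SELECT JSON_OBJECT(
         'name',       name,
         'format',     format,
         'price',      price,
         'resolution', JSON_VALUE(attr, '$.video.resolution')
       ) AS product
FROM   products
WHERE  type = 'M';
```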
So in this regard, you still get to keep what's structured while returning it in a semi-structured format and cherry-picking a few things from your JSON document. The final example is if you really want to do the full monty, for lack of a better term: we can use JSON_MERGE. The JSON_OBJECT function nested in the middle will take name and format and produce a fairly flat JSON document. But we can merge it with the entire JSON document in that attributes column and create a brand-new JSON document. So in this particular example, you're taking the full JSON document plus one or more columns in that row, and you're turning them into one big, complex JSON document that you can return to your client. The real value here is that the folks in the upper tiers building web apps and services have the ability to consume everything in your database as single, wholly contained JSON documents. At the same time, you can keep your schema fairly structured for all the reasons we talked about earlier. Data integrity, transactions, joins: all of those are maintained, but you can present JSON to those clients almost as if they were working with a document database. Which leads me to one of the final things we'll discuss here: data integrity. One of the things we often have to give up when moving into a NoSQL-type environment is data integrity, to varying degrees depending on what you're looking at. But for those of us from a relational background who place a great deal of importance on data integrity, you either have all of it or none of it. So if we're going to begin adopting JSON, we don't want to give it up.
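Back to the merge example for a moment: a sketch of how JSON_MERGE might combine columns with the whole stored document, under the same assumed table:

```sql
-- Merge a flat document built from columns with the
-- entire stored document in the attributes column
SELECT JSON_MERGE(
         JSON_OBJECT('name', name, 'format', format, 'price', price),
         attr
       ) AS product
FROM   products
WHERE  type = 'M';
```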
We still want to be able to make sure that those JSON documents are valid, not just structurally valid, but that a phone number actually looks like a phone number, or that a field that's supposed to store a number actually stores a number. How do we go about that? We do it with check constraints. MariaDB Server 10.2 introduced check constraints. What we can do, and this is a particularly complex example, is add a check constraint on this products table. If it's a movie, we have a set of additional constraints. We might say that the video element should be an object, cuts should be an array, audio should be an array. And as we go a little farther, we say: it has to have a resolution element, it has to have an aspect ratio. Those are required elements when adding a movie. Down to things like the cuts and the audio, where yes, they have to be arrays, but they also actually have to have elements: there has to be a default audio format, and there has to be at least one cut, whether it's the theatrical cut, the director's cut, whatever it might be. This starts to give us a little more control, so it's not exactly the wild, wild west out there. As developers add data using JSON documents, we can at least know they have the proper data, that someone didn't add a movie without the audio or the resolution, making things challenging elsewhere. So what happens? In this particular example, I know there's a long JSON string in there, but if you look at the video object, which is in the middle, we see it has that aspect ratio. What it doesn't have is the resolution. So if someone tried to insert this data, they would get an error, and it would tell you that the check constraint failed because we were missing something.
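A hedged sketch of what such a movie constraint could look like, using functions available in MariaDB (JSON_TYPE, JSON_EXISTS, JSON_LENGTH); the exact constraint in the slides may differ:

```sql
-- Require structure inside the document, but only for movies
ALTER TABLE products ADD CONSTRAINT check_movie_attr CHECK (
  type != 'M' OR (
    JSON_TYPE(JSON_QUERY(attr, '$.video')) = 'OBJECT'
    AND JSON_TYPE(JSON_QUERY(attr, '$.audio')) = 'ARRAY'
    AND JSON_TYPE(JSON_QUERY(attr, '$.cuts'))  = 'ARRAY'
    AND JSON_EXISTS(attr, '$.video.resolution')  = 1
    AND JSON_EXISTS(attr, '$.video.aspectRatio') = 1
    AND JSON_LENGTH(attr, '$.audio') > 0  -- at least one audio format
    AND JSON_LENGTH(attr, '$.cuts')  > 0  -- at least one cut
  )
);
```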
Conversely, in this one, if you look at the far end of the third line, that disks element is supposed to hold a number: one, two, three, and so on. I made up an example where someone mistakenly put in the string "one". So again, we get an error back. I put this one in here because it's a little difficult to see what's going on, and I think there's something we need to clean up. What's happening is that the check constraint function is erroring out because it's trying to match against a number, and the value isn't one. The JSON type was wrong. I think it would be a little clearer if it said it was a constraint violation, and that's something we can fix pretty easily. The point is that even if the error takes a little work to decipher, what's really important is that we're still stopping people from inserting bad data. We're starting to control it with some constraints, some data integrity, and making sure it's reliable. And then there's one more thing I want to get into, and then we'll have plenty of time for questions. That's the notion that ultimately we may use a JSON document because it provides a great deal of flexibility, especially in the beginning. If you're developing an application and iterating through something, adding fields, changing types, moving as quickly as you can until you get to the point where you're happy with it, a JSON document is going to be great. But at the same time, if we're honest with ourselves, it probably doesn't make sense for some of what's in that JSON document to stay there forever. Customer profiles used to be very common in document databases precisely because those profiles were evolving. But if you think about your customer profile, certain parts of it are pretty fixed, right? Your name, your address, some of your preferences.
At the time when social media was becoming what it is today, it was very quickly: oh, we have to store their Facebook, then their LinkedIn and their Twitter. So we know there's still going to be some evolution. But let's be honest with ourselves: once we get past that development and the iterative process at the beginning, from a long-term point of view, some of that data probably could become structured. So, looking through a couple of small examples here: we're storing video in the JSON document. That gives us some flexibility as these video formats gain more attributes and become more complex. That's great; I'm developing pretty fast. I might then decide to create a virtual column for the resolution, and that virtual column, of course, is just using a function to pull the resolution field out of the document. We've talked about this; we're still operating normally. We might go a step farther and ultimately turn resolution into a normal structured column and actually remove it from the JSON document. At the same time, we realize that every single movie has an aspect ratio, always in the same format: some number, colon, some number, like 16:9. Let's pull it out. Let's create a virtual column for it, so we can start standardizing on using it as one of our structured columns. Then we iterate a little farther, and we get to the point where it makes sense to just remove the aspect ratio from the JSON document and store it in a normal column. But as we realized from the very beginning, these things are going to evolve, and sure enough, 3D becomes a thing. Now all the new movies people can buy are coming out with options for 3D or not. And one of the things I want to sneak in here as a little tip is that when we're creating our virtual column, we can also use a number of functions there.
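The evolution described above, plus the tip about wrapping the extraction in another function, might be sketched like this; the field names and defaults are assumptions for illustration:

```sql
-- During development: expose a document field as a virtual column
ALTER TABLE products
  ADD COLUMN aspect_ratio VARCHAR(10)
    AS (JSON_VALUE(attr, '$.video.aspectRatio')) VIRTUAL;

-- The tip: wrap the extraction in IFNULL so documents created
-- before the field existed get a sensible default, e.g. movies
-- that predate the 3D era default to false
ALTER TABLE products
  ADD COLUMN is_3d BOOLEAN
    AS (IFNULL(JSON_VALUE(attr, '$.video."3d"'), FALSE)) VIRTUAL;
```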
So we could just point to that video 3D field, but we can also wrap it in an IFNULL function, so that we return the value if it exists, or, if there's no field, we can just assume the movie predates the 3D era and that the value is false. So I think it's worth understanding that we should combine relational and JSON. There are benefits to both. We should recognize that having JSON gives us the flexibility to iterate and evolve fairly quickly, while also being cognizant of the fact that some of that data, post-development, might make sense to pull back into our structured schema and use there. Today it's 3D, tomorrow it's 4K, and who knows what it will be after 4K? But because we have this attributes column, we know we're going to be able to cope with these changes as they happen. So I'm happy to take some questions now. I know I moved through that fairly quickly, but feel free to ask whatever's on your mind, whether it's related to JSON or MariaDB, technical or non-technical, and we'll do our best. Sorry, Shane, just trying to get myself unmuted there, that darn mute button. We have lots of great questions coming in. And just to answer the most commonly asked question: I will be sending out a follow-up email by end of day Thursday with links to the slides and the recording of this session. So, Shane, diving right in here: do you know if MariaDB is supported by PowerDesigner? I don't know off the top of my head, but it's certainly something we can check and get back to you on. My guess is that if it supports most of the common relational databases, it probably does, but we should look into that to make sure. Sure. And for one-to-many relationships, how is querying supported to extract arrays from a single JSON column, and what would the result look like? That's a really good question.
My interpretation of that is that we're doing a join, which typically is going to produce every combination, and then you're asking me about pulling the array out as well. If I'm understanding correctly, that probably means the array is going to be repeated in those combinations, unless what you're really trying to get at is to pull the individual array elements out and create combinations out of the array elements. Off the top of my head, I'd have to check to make sure, but I'm not sure that we can pull out the individual array elements and treat them as separate rows just yet. I think it's something worth exploring, though. And if I didn't get that quite right, please do follow up with a question and we'll see if we can zero in on it better.
I love it. Thank you. So many great questions coming in here, too. The JSON field is validated to ensure that it is a valid JSON document, and I assume that means it is well-formed JSON. Can you also validate the JSON payload against a JSON schema instead of using check constraints?
No. I think that's the best way to answer it: there isn't a separate schema for the JSON document, so to speak. Those check constraints are our way of applying a schema to the JSON document. I know that there are different solutions out there for handling validation of JSON documents, but you're spot on in that the data type constraint is just making sure that it's well-formed, as you said. If you really want to dig into the payload or the exact values, check constraints would be the way to do it.
So, are there some general guidelines on what should be in your relational database and what should stay in a JSON document? Yeah, the first thing I would say about that, in the context of this presentation, is that there's room for all of it in your relational database.
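The check-constraint approach to validation described above might look something like the following hedged sketch. The table, constraint, and field names are invented; MariaDB's JSON type already rejects documents that aren't well-formed, so the constraints only need to cover the payload.

```sql
-- Hypothetical: the JSON column guarantees well-formedness; CHECK
-- constraints act as a lightweight schema on the document's contents.
CREATE TABLE movies (
  id         INT PRIMARY KEY,
  attributes JSON,

  -- Require the resolution field to be present in every document...
  CONSTRAINT has_resolution
    CHECK (JSON_EXISTS(attributes, '$.video.resolution')),

  -- ...and require the aspect ratio, when present, to contain a colon.
  CONSTRAINT ratio_format
    CHECK (NOT JSON_EXISTS(attributes, '$.video.aspectRatio')
           OR JSON_VALUE(attributes, '$.video.aspectRatio') LIKE '%:%')
);
```

An insert whose document is missing the resolution field, or whose aspect ratio doesn't match the expected shape, would fail the constraint and return an error.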
But if we take it a step farther, to what should be stored in defined columns and what should be stored in a JSON attributes column, I think that really comes down to the use case. Look at your use cases and try to understand where parts of your data model are going to evolve faster than others, or where you're going to see more variation. If I'm looking at customer profiles, I might say that preferences can evolve over time. A good example is GDPR, which is impacting a lot of us right now: where we didn't have it previously, we will now have a checkbox asking people to opt in or opt out of things. As you add different newsletters or different means of contact, something like that might work well in a JSON document, because you know that over time it's going to evolve and that, as it does, different customers are going to have different values and different fields.
It could work really well in a product scenario, too. Shirts come in small, medium, and large; pants come in 30, 32, 34, and so on; there are different colors; shoes have their own measurements. Instead of an approach where you have a single table per product, you might be able to pull them all together and put the attributes that vary into the JSON document. So I would approach it use case by use case, based on those two things: how much do the attributes vary between rows, and how likely is it that a particular object, for lack of a better word, is going to evolve over time, and how much support you want to build in to make that easier.
But even beyond that, I would say that some of the real value in the SQL JSON functions is that you can almost artificially create either representation.
You could have super structured data and use those functions to create complex JSON documents and give them back to a client without ever actually storing JSON yourself. Or you could have tables with very big JSON documents, but create queries that essentially return them in a structured format. So you have the flexibility to do either. And while I didn't cover it here, I might even suggest the use of views. If we had an existing schema, we could add some new views to return that data as JSON documents. Or, if we were building something new and leaned heavily into JSON documents, we could always create some views that make it look like traditional rows and columns. So I think it's all about finding the right balance based on the requirements of your use case.
Love it. How can we find out what field names have been created in a JSON column?
Oh, that's a particularly good question. I'd put that in the stump category, because I don't have, off the top of my head, a very easy way to do that, at least not in a way that would be efficient. But now that I've heard it, I'm going to go back to our team and talk about it some more to see if we can build support for that in the future. I think that's a really great request, and now that it's top of mind, I can do a good job of passing it along. Hopefully the next time we have this chat, I can give you a much better answer.
I love it. And Shane, anything you want me to include in the follow-up email, just get that to me, and I'll send you all these questions as well so you can see them. And I love this next question: where can I see a demo with data to test?
That's a really good question. There's a link here on this slide: if you download the white paper, it has a set of examples that you can copy and paste.
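The "both directions" idea above can be sketched briefly. This is a hedged illustration with invented table and column names; `JSON_OBJECT`, `JSON_VALUE`, and views are standard MariaDB features, but check the documentation for your version.

```sql
-- Structured data in, JSON documents out: build a document per row.
SELECT JSON_OBJECT('id', id, 'title', title) AS doc
FROM movies;

-- Wrapped in a view, clients can consume JSON without seeing the tables:
CREATE VIEW movies_as_json AS
  SELECT JSON_OBJECT('id', id, 'title', title) AS doc
  FROM movies;

-- And the reverse: flatten fields out of a JSON column into plain columns,
-- so a JSON-heavy table looks like traditional rows and columns.
CREATE VIEW movies_as_rows AS
  SELECT id,
         JSON_VALUE(attributes, '$.video.resolution') AS resolution
  FROM movies;
```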
So if you install MariaDB and just open up the command prompt, you can literally copy and paste the examples, and they go all the way from inserting the sample data through querying it and adding the check constraints. Pretty much everything you saw here today can be copied and pasted and actually run at the command prompt.
All right. And if you have any more questions, feel free to submit them in the Q&A. I'm going through the chat here; there's been some great conversation going on, and I'm just looking to see if there are any additional questions for you directly. Do you happen to know, Shane, is there a JSON equivalent to an XML schema definition?
Yeah, I think that's similar to the question someone asked earlier about schema constraints. We don't have an external schema for JSON, so to speak, similar to what you just described with XML. So it's really the check constraints at this point in time. But it'll be interesting to see how that pans out as more databases adopt JSON to various degrees. Can we begin to standardize on a way to apply schemas beyond just check constraints? Great question. It's something I think we have to noodle on, probably working together with others to find a good way forward.
So, regarding one of your last examples: what would happen if the resolution didn't exist in the JSON document? Let me go back to the check constraints here. In our case, it's the fourth one from the bottom: a JSON_EXISTS on that video resolution field. If that field didn't exist, the function would return zero, my check constraint would be violated, and it would return an error.
Nice. So, a comment here: we built a JSON schema to validate all of our other JSON payloads. Do you want to add to that comment?
No. I think what I'm hearing from a handful of folks here is the use of JSON schemas to help with integrity and validation. That's something we should explore here at MariaDB as we continue to invest more and more in our JSON capabilities. I do appreciate the comments in the chat, because they give us a better idea of how you want, or how you expect, to use JSON, and the more we know, the better we can support it. I do love how active our community is. It is helpful.
So, how is the JSON data type internally optimized by the database engine?
That's a great question. When we started this process, the two options were really to store it as plain text or to store it in some sort of binary format. We experimented at first with a binary format, but we got better performance from the text format. So I would say that, for the time being, we will probably continue with the text-based format as long as it provides the performance that we expect and want to provide. But as the capabilities grow and the requirements increase, there is value in a binary format, so it's something we will continue to keep an eye on. If we get to a point where what people are asking to do with JSON requires a binary format, then I think we'll go in that direction.
All righty. I don't see any additional questions at this time.
Well, I hope everyone found this very useful. Someone asked about the demo, so if you can, do check out the white paper and try to run those examples yourself. There are a lot more JSON functions than I covered here today; it would take another hour to go through all of them. There are quite a few, probably around two dozen different JSON functions for SQL. So play around with it. And as always, this is part of the community process: let us know what you're trying to do and what you need.
If we don't have it, that's the great thing about open source: we can work together and get it in there for you.
Well, Shane, thank you so much for this great presentation, and thanks to our attendees. Like I said, I'll send you over the comments from the chat in case there's anything extra you want to add for the attendees in the follow-up email, which will go out by end of day Thursday with links to the slides and the recording as well.
That'll be great, and I hope everyone has a great day. Thanks so much.
Shane, thank you.