Hello and welcome, my name is Shannon Kemp and I'm the Chief Digital Manager of DataVersity. We'd like to thank you for attending Database Now Online, the first occurrence of this online conference produced by DataVersity. We're very excited to kick off the event and have a great lineup of sessions for you today, and of course a special thanks to all of our sponsors who helped make it happen. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the event. For questions, we will have a short Q&A at the end of each presentation today, and we'll be collecting questions via the Q&A in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share highlights or questions via Twitter using hashtag DBNOW. If you'd like to chat with us and with each other, we certainly encourage you to do so; just click the chat icon in the top right-hand corner for that feature. And for this event, we will send a follow-up email next Monday to all registrants containing your unique login to access the recordings and the slides from today's presentations. Now I'm going to introduce our third speaker for today, Karen Lopez, who will be discussing surviving as a data architect in a polyglot database world. To give you a brief background, Karen is a senior project manager and architect at InfoAdvisors. She has 20-plus years of experience in project and data management on large multi-project programs. Karen is a popular speaker known for her highly interactive, practical, and sometimes irreverent presentation style. She is a Microsoft SQL Server MVP, and she wants you to love your data. And with that, I will give the floor to Karen to get the session started. Karen, hello and welcome. Hi, Shannon. Thanks so much for having me. You rock.
So in this session today we're going to talk, very quickly because these are short sessions, about polyglot database designs, which I think is a highfalutin term for "things are changing in the database world." As Shannon mentioned, I've been doing this for a while. I mostly talk about data modeling and data-driven methodologies, but because this is Database Now, we're going to be talking about physical implementations of a good variety of database types and technologies. I am on Twitter, so I'd love to see you tweet these things. I won't be able to see your tweets during the live session, but I'll definitely go back through Twitter afterwards and take a look at all the wonderful questions or follow-ups that you have. I like to start with the outcomes you should expect from a session. I want you to take away from this why multi-model matters. One of the hard things about being a data architect in this database world is that people also refer to the different types of database structures as data models. So there's a key-value data model and a graph data model, but that doesn't mean the same thing as our data models. Throughout this talk I'm going to try to make that distinction. I want you to know why a variety of database models is important; some examples, at a high level, of the database (or sometimes data store) types that are out there; and how to think in a multi-model way. And most importantly, since this is a survival course, how to future-proof your data architect career and how to learn more. In one of my previous presentations, I defined what a good data architect is, and for me the key word is architect. I don't make a huge distinction between data modeler and data architect. I tend to use the term data architect, even though other people in the industry use it to mean something more physical, like a storage architect.
But what I mean is not just someone who draws boxes and lines all day, but someone who makes decisions about how to do that, who understands what the business requirements and models are, who thinks about data protection (not just security but privacy and the other business needs that go beyond the structure of the data), who does design models, and who finds the right models for the business needs. So for me, that's the definition of a data architect. And now about the poly part. Polyglot traditionally means "speaks many languages," with poly meaning many and glot having to do with speaking. I think it's odd that the database world talks in terms of polyglot; I think in terms of poly-schematic: many schemas, or schemata if you want to be very particular about how you pluralize words. But what do I mean by surviving? By surviving as a traditional data architect, something I've spent my whole career doing, I mean being able to continue to be involved in all the data-related architectural decisions, because with the advent of these new database models we're being excluded; we're perceived as being relational data architects only. And we're going to talk about why that happens. As always, I want all of our data models, whether it's an ERD, a graph model, or the expression or design of a JSON document, to continue to be wanted and appreciated. And of course, for that to happen, we have to want and appreciate them even outside the relational world. I want data model-driven development to still be a thing, even with all these other database features and data models. I want us to be valued for that, and I do have a presentation on DataVersity about how to be more valued as a data architect. I want us to be perceived as team players: team data, not just team relational.
And I want us to be ahead of the curve on database features, especially when it comes to persisting or expressing data. So what's new in the database world? Well, a whole bunch of stuff is new in the database world, but for me the big distinction is hybrid, hybrid, hybrid. One thing I want to point out about the whole SQL/NoSQL world: the NoSQL people came up with that term originally, meaning no relational databases, and they use the word SQL to mean relational. I'll probably continue that usage throughout, but I don't mean Structured Query Language; I just mean non-relational or extra-relational features. What I mean by hybrid is that the concept of a purely relational database really doesn't exist anymore. Most of the major database vendors have column store features. They support XML data types. They support JSON data types. They support other kinds of NoSQL, non-relational features right inside their relational database. That's been going on for a long time, but as we're going to talk about in a couple of minutes, it's becoming even more polyglot and poly-schematic. The other thing we've known all along: applications make use of multiple database and data store technologies. We've all worked on projects that use SQL Server and DB2, or SQL Server and some analytics engine, or something like that. So we have hybrid applications: hybrid data technologies in one application. And then the new concept that NoSQL brought us is that schemas, and by schemas I mean data structures, how data is persisted, are now being expressed in a variety of places. For instance, in a relational database there's one schema for each data fact: a column appears in a table. By contrast, there are database and data store technologies where the same data might just be comma-delimited or text data.
And you can apply multiple schemas on top of that, sort of the way in the relational world we have views on top of tables. So one of the biggest changes in the database world in the last 10 or 15 years is this concept that schemas don't just apply at the persistence layer. I'm going to go through, very quickly, some examples of NoSQL and SQL databases. If you really want to know more, Dan McCreary has some good material on this, and I have other recordings that go into more detail on the use cases for why you might use something other than a relational database. One of the underlying concepts, when we talk about the theory of relational and non-relational databases, is that on one side we have ACID: the database needs to store data atomically, the data should only be stored if it's consistent, transactions should be isolated, and the data should be durable. That's the ACID set of properties. In the NoSQL world, they came up with BASE: data should be basically available, we should consider it soft state, and it should be eventually consistent, with consistency we can tune. For the longest time these were competing theories of database design and implementation. But now we're in this hybrid world, and what that means is we get to treat the data as the use case and the workload prescribe. It's no longer really ACID versus BASE; we treat some data with ACID-like constraints, and where we can, we treat other data as basically available, soft state, and eventually consistent. Before, in order to do those two separate things, we often had to choose two different database technologies, and that's one of the things that is changing. Now we have polyglot persistence, which means we get to choose an approach: whether we want to optimize for workload, for availability, or for data consistency. And we've already been doing that.
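To make the ACID side concrete, here is a minimal sketch of atomicity and consistency using SQLite as an illustrative stand-in for any relational engine (the table and the transfer scenario are made up for this example):

```python
import sqlite3

# Two accounts; a CHECK constraint encodes the "C" in ACID.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account (id INTEGER PRIMARY KEY,"
    " balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
conn.commit()

try:
    with conn:  # one atomic transaction: both updates commit, or neither does
        conn.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
        conn.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # the CHECK fired on the second update, so the whole transfer rolled back

balances = dict(conn.execute("SELECT id, balance FROM account"))
print(balances)  # {1: 100, 2: 50} -- not even half the transfer persisted
```

A BASE-style store would instead accept each write independently and let replicas converge later, which is exactly the trade-off Karen describes.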
The first place we made this choice was when we went from transactional design to data warehouse design: from ERDs to dimensional modeling. If you think about it, a data warehouse is optimized for read and a transactional design is optimized for write. So we've done this before. Many times I've created what the data modeling world calls an entity-attribute-value design. It's one of those product category tables you create where you have a category ID and a category type, and then the value for it, maybe hanging off a product table. We did that because we needed to persist data about a variety of characteristics that were constantly changing, that had constantly changing constraints and units of measure. We took this little corner of our relational world and, it's not really true, but I like to think of it as having created an even more normalized form. What we really created was an abstraction. And there's a great parallel between those entity-attribute-value tables we created and key-value databases. We've all dealt with XML data types and XML documents kept inside the database, sometimes for really good reasons, sometimes just because XML was cool. And then there's the whole difference between how we design optimized-for-read data warehousing and optimized-for-write transactional processing. If we look at NoSQL, whether it means "not only SQL" or "no SQL at all," and remember SQL here means relational, we basically have these categories, which used to be database types and are now just feature types: relational, key-value, columnar, column family, document databases, then Hadoop, which is a whole other thing, and graph databases. Now we're going to have hybrid versions of these. We already do, but I'm going to talk about how it's becoming more and more of a thing that data architects are going to have to deal with.
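The entity-attribute-value parallel Karen draws can be sketched in a few lines; this is a hypothetical example (table and column names invented here) showing why an EAV table behaves like a key-value store:

```python
import sqlite3

# A hypothetical EAV table: the relational ancestor of a key-value store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product_attribute (
        product_id      INTEGER NOT NULL,
        attribute_name  TEXT    NOT NULL,  -- the "key"
        attribute_value TEXT,              -- the "value", typed only as text
        PRIMARY KEY (product_id, attribute_name)
    )
""")
conn.executemany("INSERT INTO product_attribute VALUES (?, ?, ?)", [
    (42, "color", "red"),
    (42, "weight_kg", "1.3"),  # units and types live in convention, not in the schema
    (42, "voltage", "220"),
])

# Reading it back is a key lookup, just like GET in a key-value store.
value = conn.execute(
    "SELECT attribute_value FROM product_attribute"
    " WHERE product_id = ? AND attribute_name = ?",
    (42, "color"),
).fetchone()[0]
print(value)  # red
```

The abstraction is flexible, and the price is the same one key-value databases pay: the engine can no longer enforce per-attribute types, units, or constraints.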
So I'm just going to say there are relational databases. They involve tables with relational constraints between them. I'll call that the classic, traditional stuff that every other data modeling class has covered. But then we have key-value, and some examples of key-value are Cassandra and Redis; Oracle has a completely separate DBMS called NoSQL Database that's based on key-value, and Microsoft has just introduced something called Cosmos DB. Cosmos DB is a database-as-a-service in the cloud, and it supports key-value data structures and queries. Then we have document databases, and the most popular document database is probably MongoDB, or anything with JSON- or BSON-based data structures. These are called document databases because collections of documents are how the data is persisted, basically like the JSON text in the example on the slide. Sometimes we just had JSON documents hanging around, in collections. You'll also notice that Cosmos DB is here too. Cosmos DB was originally called DocumentDB in the Microsoft world; DocumentDB became Cosmos DB. Then there's column family. HBase is a type of column family database, Cosmos DB supports column family data structures, and so does Cassandra. So Cassandra has shown up a couple of times: Cassandra is both key-value and column family. And we have Cosmos there too. You can kind of see that we already have hybrid database structures. Then there's columnar, which is not the same as column family. There's HP Vertica, and then SAP, it used to be Sybase, Sybase IQ. These are examples of columnar databases, and columnar databases are really popular for read-optimized selections and large data that can be highly compressed. There are other recordings I've done about columnar database design. And then there's graph. The most common graph database is Neo4j, by Neo Technology.
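A document of the kind these stores keep in collections might look like the sketch below; the field names are invented for illustration, but the point holds for any MongoDB-style store: the whole aggregate travels as one document where a relational design would normalize it into several tables.

```python
import json

# Illustrative order document (all field names are hypothetical).
order_doc = {
    "_id": "order-1001",
    "customer": {"name": "Grace", "city": "Toronto"},
    "lines": [
        {"sku": "A1", "qty": 2},
        {"sku": "B7", "qty": 1},
    ],
}

# Header plus line items persist and round-trip together as one unit.
text = json.dumps(order_doc)
round_tripped = json.loads(text)
print(round_tripped["lines"][0]["sku"])  # A1
```

Notice the schema here lives entirely inside the document itself, which is the point Karen returns to later about where schemas can be expressed.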
And it is a graph-based database that supports graph processing. Then there's DataStax Enterprise Graph, and SQL Server as well as Cosmos DB also support graph structures and graph processing. Graph is an interesting one because there are a couple of common ways to implement it. There's the vertex-and-edge, or nodes-and-edges, approach, and then there's the triple approach. If you've attended any sessions or seen any recordings about semantic technologies with triple stores, they also do graph processing. One of the things you might have noticed is that I listed SQL Server as a graph database. That's because in SQL Server 2017, which is about to go to general availability, they've added a way to do graph database persistence and processing right inside SQL Server, in the same engine. And that's a big change. Most of the DBMSs have supported column stores, which are basically ways of making bigger-data and data-warehouse-like queries go a lot faster, but this is the first time I've seen graph database processing inside the same engine. I consider this a bigger change in how relational vendors are looking at other data stores, because if we look at the other ways relational databases have supported column store or XML or JSON, it was really just a feature of a column or set of columns, not a completely separate persistence structure inside the database. All of these hybrid approaches mean that SQL versus NoSQL isn't a thing any longer. We've already seen the NoSQL world and Hadoop add relational-like query structures or relational-like layers onto their non-relational designs, and now we're starting to see the relational vendors add NoSQL features in.
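The nodes-and-edges approach can be sketched with two plain tables; this is a simplified illustration (SQLite as a stand-in, names invented), not the actual SQL Server 2017 graph syntax, which uses dedicated NODE and EDGE table types:

```python
import sqlite3

# A graph stored as a node table and an edge table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE edge (from_id INTEGER, to_id INTEGER, rel TEXT);
""")
conn.executemany("INSERT INTO node VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob"), (3, "Carol")])
conn.executemany("INSERT INTO edge VALUES (?, ?, ?)",
                 [(1, 2, "KNOWS"), (2, 3, "KNOWS")])

# One hop of traversal: who does Alice know?
friends = [row[0] for row in conn.execute("""
    SELECT n2.label
    FROM node n1
    JOIN edge e  ON e.from_id = n1.id
    JOIN node n2 ON n2.id = e.to_id
    WHERE n1.label = 'Alice' AND e.rel = 'KNOWS'
""")]
print(friends)  # ['Bob']
```

A triple store represents the same fact as (Alice, KNOWS, Bob) triples; the underlying data is equivalent, the processing model differs.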
Because Microsoft and other vendors have added non-relational features, I expect this to spread, because competitive advantage is how most features make it into vendor products. I think if you're a data architect who specializes in relational-only modeling and relational-only design, you're going to be considered overly specialized. This would be the same thing as saying, a few years ago, that you won't work with XML data types, that you won't implement column store indexes, that you won't implement anything but a purely normalized, purely Ted Codd-approved relational approach, and I just don't think that's going to work any longer. We don't have the choice we had five years or so ago, when I first started speaking about NoSQL technologies, to just ignore this, because as soon as the data modeling tool vendors get involved, you're going to have to be designing for these things. And you can't just say, oh, we can't use graph in our implementation because I don't know what it is, or we can't use the graph nodes in SQL Server because my data modeling tool doesn't support it. We're going to have to ask our vendors to support these things, as well as learn the best use cases for them. I'm excited about these new features, I don't know if you can tell, and the important part is that they solve some problems. But I don't want to be one of those people who says, hey, there's this new graph feature, let's put everything into graph, or hey, there's this new key-value thing, let's move everything to key-value. That's black-and-white, either-or thinking, and we need hybrid thinking to find the best fit for the data, because we love our data.
I talked about how relational databases are adding these features, and you've probably built them yourself before. I mentioned an entity-attribute-value table, which is like a key-value table. You may have implemented something that's more column-oriented in a row-based structure. Maybe you've created your own graph inside a relational database, because relational databases are not very good at doing graph stuff. A typical graph structure and query might be a hierarchy, and we all know how implementing a recursive relationship on a relational table is both hard to program against and very performance-constrained. So you might have already done these things, but now they're going to be native features in your tools. Hybrid is the future. Look at these: Cassandra is column family plus key-value; SQL Server is relational plus graph plus column store; almost all the other relational vendors have relational plus column store; and then we have Cosmos DB with graph and column family and key-value and document all in the same engine. Truly hybrid, not just gluing together Cassandra plus Neo4j plus SQL Server to build a solution. This is what's changed, and I tend to get overly excited about it, because while I've been doing this relational stuff for 30 years and there have been great changes along the way, this to me is really exciting: now we get to love our data in a way that best meets its needs, and we get to stop having those endless debates about whether relational or non-relational is better, or whether data warehouse or transactional is the place to be. I know you have these arguments all the time, usually over the family dinner table, I'm sure. But I wanted to point out Cosmos DB, which is literally brand new. Like I said, it's database-as-a-service; for now they've picked graph, key-value, column family, and document, and they've built the engine from the ground up to support a variety of models.
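The hierarchy query Karen calls hard and performance-constrained in a relational engine is usually written as a recursive common table expression; here is a small sketch (SQLite as a stand-in, org chart invented) of what that looks like, and why each hop costs another self-join:

```python
import sqlite3

# A hypothetical reporting hierarchy stored relationally.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employee VALUES
        (1, 'Ada',   NULL),
        (2, 'Brian', 1),
        (3, 'Chen',  2),
        (4, 'Dana',  2);
""")

# Walk the whole chain under Ada with a recursive CTE. Workable, but every
# level of depth is another join pass, which is what graph engines optimize away.
reports = [row[0] for row in conn.execute("""
    WITH RECURSIVE chain(id, name) AS (
        SELECT id, name FROM employee WHERE name = 'Ada'
        UNION ALL
        SELECT e.id, e.name
        FROM employee e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name FROM chain ORDER BY id
""")]
print(reports)  # ['Ada', 'Brian', 'Chen', 'Dana']
```

In a native graph store the same traversal is a first-class operation over adjacent nodes rather than repeated joins, which is the trade-off behind adding graph processing to engines like SQL Server 2017.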
The other thing that's different from traditional databases, something built into almost all the other NoSQL solutions, and one of the reasons team-NoSQL people talk about relational databases in a very negative way, is that this is designed for globally distributed data and to be highly scalable. By scalable I mean being able to add 100 compute nodes because it's Thanksgiving and you run a recipe site, or because it's Christmas and you're an online retailer. It also features tunable consistency, like many of the other NoSQL offerings. One of the interesting things, though, is that I'm not aware of any other product that supports so many models in the same implementation. I think we'll see other competing services and products like this, and it points to why data architects are going to have to be polyglots when it comes to data modeling and design. I saw that question: yes, the SQL Server graph implementation is nodes and edges. But what does all this mean for us? I've heard these things from what I call team NoSQL, back when it was an us-versus-them thing: We don't need a data model, we're schemaless. We can't use your data models because they're relational, and it doesn't fit my brain to talk about a data fact being mapped to all the places it's implemented. SQL isn't flexible. SQL databases don't scale, even though they do a bit. My favorite: SQL databases die after about one gigabyte in size, which is totally not true. And: SQL databases use old technology. Well, they do. They use established, experienced, and matured technology, which is why someone can be a relational data modeler for 30 years like me. But I also hear things from the relational team: Eventual consistency, what the heck is that all about? Our data wants to be consistent. We would never write half a record, we'd never write half a row, we'd never write half an invoice.
Well, yes we would, just not in a transactional design, or at least I hope not in a transactional design. And then: what do you mean data quality just doesn't matter? Well, guess what: if you're streaming in sensor data, you can have some incorrect data. You want to capture it, though, because even the measurement that's incorrect is a fact you want to know. And the biggest one: my data modeling tools or my database design tools won't work with these databases. That's going to be our biggest struggle, and this is not a slam on the modeling tool vendors. It's a recognition that the database world has just exploded with versions and engines and support. There are thousands of these things, whereas 20 or 30 years ago, when a lot of us started, there were three, four, five, maybe ten if you really pushed it. And it's going to be harder and harder for data modeling tool and database design tool vendors to keep up with all this. So we're going to have to work with tool vendors to make sure they understand why you now need support for graph in SQL Server 2017, and we'll also probably, as professionals, have to come up with ways of helping model and design and capture requirements for these non-relational structures. My favorite incorrect claim from team relational is that this is all a fad and it's all going to go away. The example I use is that for years I had to fight the battle of object data modeling: all the relational databases were going to be done away with, and we'd only have object databases. That really didn't pan out. But we also heard it about relational. I was around at the advent of relational databases, and I heard, you know, there'll be no more IMS. And guess what? There's still IMS out there. There are still pre-relational databases out there. That's an interesting thing to have to deal with, and some of these NoSQL databases are kind of pre-relational, just born in a new world.
And then the hardest thing about being a data architect with these non-relational structures is having to let go of what we've been told is the most important thing in our role on team data: that data quality and integrity are number one. They are in transactional design, but not so much in these other worlds, because they're used for analytics or reporting. In some analytics and machine learning work, you can get rid of what we would call a row of data, just toss it out because it's incomplete, or you can't process it, or it doesn't meet your expectations. We would hopefully never do that in a transactional world. Think about why we do data modeling. We want to know about the data: the metadata, the mapping, the data lineage, where it came from, where it's going, and we want to capture knowledge about the business and knowledge about the data. We still need to do all of these things to survive as data architects in the NoSQL world. We came up in the traditional way: create a conceptual model, then a logical model, then a physical model, and then generate the relational structures. I still believe in this, for both transactional and read-optimized designs. But if our tools aren't going to catch up, we as architectural and engineering professionals still need to come up with ways of doing it, because we now have a variety of schemas and schema types. We have schemaless data, where the schema is in the code or in a table or someplace else. We have schemas that are applied on read, not on write. And we now have these poly-schematic, or polyglot, hybrid solutions. In the relational database world we see a schema as a physical structure, but in the non-relational world the schema can be separate from the data, like an XML schema document in an XML design. It can be a schema layer on top of the physical structure, like Hive on top of Hadoop.
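The schema-on-read idea can be shown in a few lines; this is an invented example (field names and file layout hypothetical) of the pattern Hive-style layers use, where the persisted bytes carry no schema and different readers impose different ones:

```python
import csv
import io

# The persisted data is just delimited text; nothing is enforced on write.
raw = "1001,Grace,Toronto,2017-06-01\n1002,Ada,London,2017-06-02\n"

# Schema A, applied on read: a full customer view of those bytes.
customer_schema = ["customer_id", "name", "city", "signup_date"]
customers = [dict(zip(customer_schema, row))
             for row in csv.reader(io.StringIO(raw))]

# Schema B, applied on read: a narrower geography view over the same file.
geo_schema = {"id": 0, "city": 2}  # name -> column position
cities = [{name: row[pos] for name, pos in geo_schema.items()}
          for row in csv.reader(io.StringIO(raw))]

print(customers[0]["city"], cities[1]["city"])  # Toronto London
```

Both "schemas" are just interpretations laid over the same stored text at read time, which is why Karen compares them to views over tables, and why the schema can live in application code rather than in the database.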
It can be in the application code, which, you know, we were all taught is wrong, we're not allowed to do that; but in the NoSQL world this is a common way of doing it. Or it can be embedded inside the data itself, like in a JSON document or a key-value pair: the schema can be kept alongside the data itself. That's hard for me to get my head around, but that's how these other technologies work. So the new process we need in order to survive is to understand the underlying architecture of the physical tools that are going to be used to persist and process the data. Of course we still need to understand the data, but we need to be able to apply what we understand at the logical data model level to these physical structures, even if our tools haven't caught up yet. We might use tools, we might have to use a whiteboard, we might have to do this using code. And for a lot of data architects that's a difficult thing to get our heads around. The good news is, most data architects are really experienced. Some of us can just retire out of this problem. The rest of us will have to learn this new process, because one of the nifty things about a lot of these nifty products is that they're mostly command-line, and I feel like I'm working on the mainframe again when I use them. Traditionally, even though we wanted to be involved in projects much earlier in the cycle, we've been brought in when teams are ready to start thinking about developing and designing software. In the modern world, though, people are going to choose these solutions, and the way you choose a database solution is based on what your data needs are, your data requirements and your data workloads. So we need to be involved much earlier in the process. And you can quote me on this: when teams say they only want you when they're doing a relational design, you need to be there anyway, because you understand the data.
You understand what the volume is, how much it varies, and how sparse it is. You need to be part of those discussions. I said these things are about scale. We can scale up a SQL Server; we can scale up an Oracle server. But when I talk about scale in the new technologies, I mean literally saying, hey, we've got five servers here, it's going to get busy next week, we want to use 100. There's no easy way of doing that in most traditional relational databases. So we need to be able to help teams understand what the workload is going to be, how scalable it needs to be, how parallel it has to be. And we would optimize this for the reads, because there's a trade-off between scaling, having multiple copies of your data, how fast you can get that data back, and how fast you can update it. Most NoSQL was developed with scale in mind, with tunable consistency: I need this data highly consistent; that data only needs to be basically consistent. We scale out by adding nodes, not by adding more RAM and CPUs, which is scaling up. We distribute data because we want it to be highly available, so that if a node falls over, the data is still on another node and we don't have to take the system down to recover. But we need to understand, more than we traditionally have as data architects, the workloads and workload trends coming at our data. We often don't collect this metadata, these requirements, in our traditional models, and yet we need to find a way of doing that. Where data models can still help, even our lowly little relational ones, is that we can reverse engineer things, and the more SQL-like layers that have been added to a NoSQL project, the better. We might be normalizing some of the data, like reference data or codes. We still want to define the data types and data facts. We want to know what the exceptions are. We want to know what the expected values are, even if we're not going to enforce them.
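The "tunable" in tunable consistency is often just quorum arithmetic, as in Cassandra-style systems: with N replicas of each piece of data, a write acknowledged by W replicas and a read that consults R replicas are guaranteed to overlap on at least one up-to-date copy whenever R + W > N. A tiny sketch of that rule of thumb (a simplification; real systems add many nuances):

```python
def read_is_strongly_consistent(n_replicas: int, w: int, r: int) -> bool:
    """Quorum rule of thumb: a read overlaps the latest write iff R + W > N."""
    return r + w > n_replicas

# N=3 replicas, write acked by 2, read from 2: quorums overlap -> consistent.
print(read_is_strongly_consistent(3, 2, 2))  # True

# N=3, write acked by 1, read from 1: fast and highly available,
# but only eventually consistent.
print(read_is_strongly_consistent(3, 1, 1))  # False
```

This is the dial Karen describes: the same data store can serve ACID-ish reads for one workload and BASE-ish reads for another, per request, just by moving W and R.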
And all the other normal metadata: is this highly sensitive data? Does it need to be masked? Does it need to be encrypted? But mostly we need to think differently about our data models, going from treating them as a prescription for a structure to be implemented, to treating them as a description of our data. So very logical or conceptual data models are helpful. The tools have to catch up. There are new, separate tools being developed, and as a data architect, that's nice, but I'd rather be using one tool to do all these things, and I understand how hard that's going to be. Our data models can't be prescriptions all the time; we need to measure against our requirements, not necessarily enforce them all the time. One of the best ways to deal with all this is helping teams develop naming standards, because a lot of the NoSQL types let you call the same thing many different things, and the database doesn't care; yet when you try to integrate the data or use it for another purpose, you do care. Data types aren't overly prescriptive; they're more like XML ones: character, decimal, things like that. And then there's helping teams decide how to tune data quality, which means we need to let go of some of what we think. Then we have the big questions: are ERDs and IDEF1X, which most modeling tools use, going to be enough? Should we extend them to do graph and key-value and column family? Should we create new notations for each of these? Should we just scrap the tools and the ERD approach and start over again, which I think is not a valid option? These are discussions we need to be having as a profession, as well as with tool vendors. And then what about all the very sexy data modelers, all of us?
Well, like I said, we can stay traditional ERD data modelers and still be valued and contribute to a lot of projects, or we can think about the future and realize that commercial, generally accepted production databases are going more hybrid and that we need to understand other structures. As I say in almost all my presentations, every design decision should come down to cost, benefit, and risk. Those are the trade-offs, whether you're choosing a data architecture, doing a design, or choosing how to model it. In one of my other presentations, I talked about the characteristics of a great enterprise data modeler or data architect. If you think about everything I've talked about today: can you be unattached to your relational models? Can you still have a get-'er-done attitude? Can you have project empathy for the teams who want you to do a design that your data modeling notation doesn't currently support? This will take good architectural skills, because these apply to all architects and engineers, not just data architects. So to summarize: the more SQL-like features are added to your NoSQL tools, the more likely your data modeling tool is to support them. I know I've reverse engineered some things even before the tool vendors said they supported them. Data modeling tool vendors will support features because features win deals, and that's true of all vendors; this is how business works, and it's not a bad thing. But if they know you're using SQL Server 2017 and you want to be designing the graph nodes, you're going to need your tool to do that. And the serious NoSQL vendors and projects, since a lot of them are open source, understand that hybrid is the enterprise data story. Sure, there are lots of organizations and applications that can be all Cassandra or all Hadoop or all one type of technology.
But the big ones understand that the enterprise data story is important, and that we have a lot of processes and techniques that were focused on relational, but we need non-relational things too. And they understand that our data models still have value even though they're expressed in a relational way. So I want you to learn about these methods, don't avoid them; go get hands-on training with these physical database things. And the great news is, with the cloud and how things have changed lately, it's easier to get hands-on experience even without going to a training class. We need to learn the lingo and use it. We need to describe data modeling, and the data governance that comes with it, in the context of these technologies and their use cases. We should still bring our data models even if they're relational. We need to get ahead of the curve on the new non-relational features in your current databases and understand the use cases. And like I said, there's lots of good presentations out there, given by me, on what the use cases are. And then we need to be able to let go of our old thinking and think differently about data and data design, especially when it comes to consistency and constraints and even data quality, because there's a good business case for that. And we should enjoy the new database now. I'm so excited about new databases as a service and the new features coming in relational databases. So what you should do: learn, get hands-on, talk to your tool vendors, bring the data models you have, and we should be getting together to try to figure out what notations we need for these other structures. So some quick resources. Dan and Ann have this great book for making sense of these things. There's also a free graph database book; you can download the e-book. It's from O'Reilly. It's written by the people at Neo4j, but it's a great foundation on graph databases and graph processing. Steve Hoberman has his book on data modeling for MongoDB, which is the document database.
So that's a good resource. And then there's a great book, becoming a little bit dated now, but it introduces you to seven different database products, which are really an exposure to all of these NoSQL types. So I've come in just about on time, so I'd like to know if you have questions. Let's see. Hey Karen, yes, we are right on time. You're perfect. So we've got a minute, so just a reminder: I will be sending a follow-up email to everybody on Monday with a unique login to access the slides and the recordings from these sessions. And if you have questions, feel free to put them in the Q&A section. So, you know, Karen, this is always a popular topic, so we had Donna talking about it as well, but how do you store metadata for such a diverse set of data structures and architectures? Well, I think that with the metadata thing, of course, my answer is I want all my metadata to be in my data modeling and database design tool. That's where I want it to be. All those things go across designs. I mean, we've always had repositories. In my ideal world, I still want that all to go there. I've been a strong advocate of the position that describing data and data requirements in a relational-like mode, like ERDs, is not a failure. Like, a customer or family name: if we can describe it, describe what our requirements are about it, and maybe we have a requirement for how we treat it in SQL Server versus Oracle versus Hadoop, I mean, I don't have a problem with that. I know a lot of people do, because they think that because we describe the data in a relational way, we can only design it in a relational way. I want to do it all in the tool. I know there are metadata tools that can support all this. I don't think that has to change just because of these non-relational worlds.
The type of metadata we might collect might be different, especially because a lot of our metadata is about the physical implementation, and I talked about how schemas and constraints and guidelines and expected values go in different places in all these other tools. I see there's also a comment basically backing that up: that the ER model is still good for visualizing and documenting, and helps with business users. I agree with that; that's just what I said. I've always wondered in the back of my mind whether the real answer for all this, for us to store and think about the data, would be a graph database, because an awful lot of our modeling things are reused: domains, a data modeling tool domain, an attachment or a user-defined property gets used many times. I mean, that's kind of a graph story. So I wonder in the long, long run if maybe our metadata and our models are really a graph. And in fact, most of the tools that we use don't use a relational database to hold all the data modeling stuff; they use, you know, other more embedded data storage. So maybe in the really long run, long after I retire, we'll be storing this all in a graph database and using graph notation. Sounds good. You know, we've got a bunch of questions about tools. I don't want to get too much into tools, and there's a lot of recommendations and a lot of tools out there. Back in the day, you know, 30-plus years ago, we did an ERD, and then from that we would model relational, hierarchical, and network database structures. That was a fair point there, yeah. Yeah. And I'm just trying to go through the questions. I don't think I'm seeing all of them. Let me make this bigger. I can't. That's all right. Yeah. Yeah. What else is going on in there? Can you read them? I'm having mousing problems here. Oh, no worries at all.
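Karen's speculation that model metadata is naturally a graph can be sketched in a few lines of plain Python (every node, edge, and label here is invented for illustration): a reusable domain becomes one node with edges from every attribute that uses it, which is exactly the many-to-many reuse shape a graph expresses directly.

```python
# Toy metadata graph: each edge is a (source, relationship, target) triple.
# A single domain, "PersonName", is reused by attributes in two entities --
# the kind of reuse that's awkward in a flat repository but natural in a graph.
edges = [
    ("Customer.family_name", "uses_domain", "PersonName"),
    ("Employee.surname", "uses_domain", "PersonName"),
    ("Customer.family_name", "belongs_to", "Customer"),
    ("Employee.surname", "belongs_to", "Employee"),
]

def incoming(node, relation):
    """All sources pointing at `node` via `relation` -- e.g. every
    attribute that reuses a given domain."""
    return [src for src, rel, dst in edges if rel == relation and dst == node]

# One traversal answers "where is this domain used?" across the whole model.
print(incoming("PersonName", "uses_domain"))
# ['Customer.family_name', 'Employee.surname']
```

The same triple shape also echoes the attendee comment in the Q&A about reference tables holding object-relationship-object rows: that is essentially an edge list stored relationally.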
The other questions, you know, are really related, again, just to specific tools and recommendations there. You know, there's a comment here that, internally, they use reference tables that store object plus relationship type plus object. Yeah. So, I have this on my slide, but I didn't want to put it in writing: I think a lot of the modeling we're going to be doing is going to be maybe in spreadsheets, just like we did years and years ago with XML, before the modeling tools started supporting some XML-like modeling or document-like modeling. I mean, we're really at a point where there's a lot of catch-up that needs to be done, and it's going to be hard to do, so it's probably not going to get solved overnight. You know, I'm not really sure how to do all this, but I think it'll be a combination of doing workarounds as well as using databases to store our data models, which is just perfect. Indeed. So, Karen, that brings us to the end of our session. Thank you so much for this great presentation, thanks for your responses, and thanks to all of our attendees who have joined us so far. We now have a 30-minute break in the schedule, where we encourage you to network with each other, stretch your legs, and get ready for our upcoming keynote presentations of the day. The keynote will begin at 2:30 p.m. Eastern, 11:30 a.m. Pacific, where we will hear about selecting a data platform in 2017. Karen, thank you so much for joining us. I love the new characters that you've got going on and all the art in there. That's just... That's awesome. Thank you. You got a great start. Thanks, everybody. Thanks, all, and we'll see you in, actually, 45 minutes.