Hey everybody, we are back with .NET Conf. That's right, they fixed my volume. Now I'm sitting next to you and I'm trying to be super quiet, but I don't know how to do it. You gotta use that NPR voice, man. Hello and welcome to another episode of the soothing sounds of the African swallow. Are you okay there, Jeremy? Sorry — that's how they talk on NPR, right? Yeah. All joking aside, we have our next speaker, Jeremy Miller, talking about Marten and document databases. How's it going, Jeremy? It's going well, man. It's good to talk to you — I know it's been a while. So what's up? Let's see your code, let's see your demos. Ready to get started? Ready to get started. All right, somebody's at the end of a long day. Go ahead and kick it off, man. You ready, buddy? Is my slide deck showing? Yeah, it looks good, we see everything, man. All right, good. Well, I won't share this at all, but if we ever see each other in person — I had a really terrible incident trying to use the European/African swallows example from Monty Python in a talk one day. But you can't just drop that and leave it there, man! No, because it's a little too profane. Okay, okay.

So today we're gonna be talking about a project called Marten. That's spelled with an 'e' because of what it really refers to — and here I didn't think to pull up a picture of it, but everybody, we've got to see how cute these guys are first. Marten's actually not named after a person; it's named after these cute little boogers I'm about to show on the screen in just a second. Yeah, these guys — see how cute that guy is? So that's what Marten's actually named after, not a person.

Marten is a client library, distributed as a .NET NuGet package, that allows you to treat a PostgreSQL database as both a document database and an event store inside of a .NET application. Next we'll go into why in the world we thought this was a good idea.
And why you might be interested in trying some of this — it's a little bit of a funky idea. Let's get into it.

First off, if you're still not scared away by the end of this, we've got quite a bit of community around the project — we're up to almost 80 contributors so far. There's a lot of documentation and samples up on our website; we always find something that's missing, but we've tried. The code itself is up on GitHub under an MIT license, so you can pull it down, copy it, make fun of it, take whatever you want out of it. And finally, I'm a huge fan of Gitter for having an embedded chat room attached real closely to your GitHub repository, and that's the best place to go ask questions about Marten.

All right, just a little bit of history. In about the 2014–2015 timeframe, at a former employer, my colleague Corey Kaylor and I were talking. We used RavenDB very heavily and we loved the development experience, but we were having some trouble with it in production scenarios, and it was getting to the point where we knew we had to replace it one way or another. But RavenDB's usage is very different from what you get with many other persistence frameworks or databases. So we kicked around an idea — Corey actually had the idea, I'll give him credit: what if we could take PostgreSQL?
It's got some pretty cool JSON support — what if we could make it act just like RavenDB, so we could swap it in and out? From that conversation, and a whole lot of time later, Marten itself was born. As kind of an add-on — one that's actually turned into maybe the most popular feature of Marten — we also looked at adding event sourcing support to the library, with an eye towards replacing some very, very old versions of an event store in our shop.

So we got it rolled out; it finally went 1.0 in September of 2016, and it had actually been in production in a very large production app quite a few months before that. 2.0 came out last summer with a lot of improvements in the internals — a lot of work to reduce memory allocations and generally make everything as efficient as we can possibly make it. There is a 3.0 in progress, but it's not progressing super fast, and I'm not going to demo any new features here today, but it's coming.

So I want to talk about some of the very different ways you can do application persistence, to look at where Marten might fit in. For most of my career we've used just old-fashioned relational databases. Maybe we put other stuff on top — maybe we put on ORMs, maybe we put on other tools like this Marten thing — but tables, rows, foreign keys, primary keys, views, stored procedures, God help us: the kind of stuff we've always had. Now, I don't necessarily like some of those tools for a lot of transactional applications, but SQL is awfully hard to beat for reporting, and there's a whole lot of investment we probably all have in reporting tools, in learning SQL, and in all the third-party tooling that already supports relational databases.
So there are plenty of good reasons to keep relational databases around. What we'll mostly be talking about in this talk is using it as a document database — huge advantages for developer productivity, in my opinion. In other cases you might use event sourcing, where I don't store the exact current state of the system as my system of record; I may be storing events as they happen. Think of an invoicing system: an invoice was created, then the invoice was approved, and so on — and from that raw event data, which may be valuable in itself, off to the side we calculate and create a read-side, current-state-of-the-world view. And addressing the relational model, there are various kinds of ORMs: maybe we use Entity Framework, maybe we use something lighter like Dapper.

Now, a cool thing about Marten here is that it allows you to mix and match any of these tools. Marten itself supports the document database and event sourcing, but at the same time, because it's sitting on top of a relational database, you can happily use anything that Postgres offers. You could happily use an ORM with it, and we have specific integration with Dapper — not that that really means much; we just expose the connections, so you can use Dapper with it to your heart's content. So you're not stuck with just Marten. You're able to choose whatever persistence pattern makes sense for a particular feature within the same application, and it may be very advantageous to be able to switch like that even inside of one single system.

Okay. So if you're not familiar with any kind of NoSQL system, or specifically a document database — why would you care about it? What does it do for you? If you think about using an ORM today, that's probably the most common pattern. You're spending a little bit, and maybe a lot, of time doing mapping: where does an object model property land in a database field?
Hopefully the database and the object model look alike; a lot of times they really don't, and that makes it harder to do those mappings. Maybe you have object hierarchies that don't fit the database well. Maybe you have deep, complex objects where you have to span a lot of different tables to make it work in a relational model. A document database is really nice, especially when you get into hierarchical models, because you're not mapping the structure. What we're doing, especially with Marten, is just serializing your objects to JSON and stuffing that into a single column in the database. Your schema is your objects.

Some of the advantages that gives us as developers: it's a whole lot less mechanical work to make things happen. I'm not spending any time on mapping, I'm not creating two completely separate models — a storage model and a model for my business logic. I just worry about what the document structure is like in C#, make sure that the inevitable, ubiquitous Newtonsoft.Json can serialize it, and I'm good to go.

This is absolutely perfect inside of an agile development process. As much as possible, to make agile go, it needs to be really easy to make design changes; you pay a penalty any time there's extra cost for changing code. Take the example of adding a field: normally you go all the way from adding it to your object, to adding a column to the database table, to adding a database migration, and so on. With a document database at development time, as we'll see with Marten, you don't have to do a thing. Add a new property to your document and you are off and running — there's no extra work to touch your database. We get much less friction for making additional changes to our objects, and it allows us to play with design ideas because we can adapt the database very quickly. There are still some database migrations.
We are still running on PostgreSQL, and we do still have to manage the database schema — but the database schema is going to change much, much less often.

This last bullet point is actually a big deal. The trend I've seen over the last several years in the way we approach automated testing is a shift much more towards integration testing — at least intermediate-level testing — which means going all the way through the database, as opposed to trying so, so hard to isolate things and making code more complex just to be able to get unit tests without the database. So if you take the idea that integration testing is a good thing, we always have to make sure that we have either a clean or a known state in our database before we test through it. A document database actually makes that a lot easier, especially with something like Marten, or what we did previously with Raven. You can effectively do a complete wipe of the database with a single command in Marten before any kind of automated test, which makes it very easy to effectively provision a brand new database schema per test. If you've done any work with relational databases inside of automated tests, that can be a lot more complicated — relational integrity, making sure you delete things from tables in the right order. It's just a lot of extra work that's potentially pretty slow.

Right, so why PostgreSQL? Let me get this one out of the way fast, because every time I've given this talk somebody asks the logical question: why did you not use SQL Server? This is a .NET tool; most .NET shops want to use SQL Server, they're more familiar with it, and we'll be honest — we know that Marten would probably have much more adoption if we'd been able to use SQL Server.
Our belief is that Marten is using a lot of features that are very specific to PostgreSQL: its JSON support, its embedded JavaScript support, a lot of JSON operators that are unique to PostgreSQL itself. We think that long term — not the current versions, but the next version of Postgres, version 11, and hopefully whatever the next version of SQL Server is — SQL Server will reach enough parity, and both of them will support the SQL/JSON spec that standardizes how you address JSON data inside the database with JSONPath. That would make it a lot easier for us to finally make Marten database-agnostic. But for right now, Postgres is pretty awesome. It's super easy to install and really lightweight — all the examples we're going to show today are actually running in Docker, because it's really easy to bring down a Postgres Docker image and get going. It's been around a long time — over 20 years — it has a very active community, and they're innovating very hard and very fast.

But specifically for Marten, it has a lot of really unusual JSON features. The JSON itself is stored in a special column type called JSONB, a binary representation of the JSON document, which enables Postgres to reach into the JSON documents much more easily and efficiently. And part of our original goals building Marten was that we also needed to get onto something with grown-up DevOps tools — good monitoring tools, backup tools, all those kinds of things you have to have in a grown-up shop. By being right on top of PostgreSQL, we get everything the Postgres ecosystem has.

All right, so let me catch my breath for a second, and we're gonna actually look at code. Finally! So, getting started with Marten. Let's assume we're building some kind of order management system — a really simple one at that. Say we have a very simple C# class like this Order class on the screen. The Order class itself may have one or many OrderDetails, and you see that I'm using a custom enumeration for the priority. As quickly as possible, we want to get this spun up so we can persist and load these little Order classes with Marten.

The first thing I need to do is create a document store for Marten. In this case we're not customizing or configuring anything — we can go with all the default options — so the only thing I need to do is say I want a document store for this connection string, and right now this is pointing to an instance of Postgres. If we check this — you'll see over on the right... oh, sorry, let me go ahead and clean this off, wipe this out. As I said earlier, we have a handy little helper that will allow you to quickly clean off your database schema and start over, and that's what I did here with the highlighted code. Looking at the right side of our IDE now, there are no tables at all in our public schema.

Just out of curiosity, for anybody who's wondering if you've never seen this before: I'm using JetBrains Rider for all the demos today. Giving JetBrains a big shout-out — they have been very generous; I have an OSS license from them, and Marten's pretty well supported and built now, at least on my side, with JetBrains Rider.

So we have no database, but we have an Order class we'd really like to get persisted, and we have a store. The store is going to be a singleton object within your system. There's not a direct analog in every framework, but if you're familiar with NHibernate, I'd say it's roughly analogous to the ISessionFactory. But to get to Marten's version of a DbContext, we need to get an IDocumentSession, and that's what's happening in this method right here.
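As a rough sketch of what this getting-started demo looks like in code — the connection string and the Order shape here are stand-ins, not the exact demo code:

```csharp
using System;
using Marten;

public class Order
{
    public Guid Id { get; set; }            // "Id" is picked up as the identity by convention
    public string CustomerName { get; set; }
}

public static class GettingStarted
{
    public static void Main()
    {
        // All defaults: Marten builds any tables it needs on the fly
        var store = DocumentStore.For(
            "host=localhost;database=orders;username=postgres;password=postgres");

        // The handy helper mentioned above: wipe the schema clean
        store.Advanced.Clean.CompletelyRemoveAll();

        Guid id;
        using (var session = store.OpenSession())
        {
            var order = new Order { CustomerName = "Acme" };
            session.Store(order);      // upsert; Marten assigns a sequential Guid to Id
            session.SaveChanges();     // flushes queued changes in one batched round trip
            id = order.Id;
        }

        // A brand-new session, to prove we're not reading a cached copy
        using (var query = store.QuerySession())
        {
            var loaded = query.Load<Order>(id);
        }
    }
}
```

This sketch requires the Marten NuGet package and a running Postgres instance; exact method signatures vary a bit between Marten versions.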
So the document session represents a unit of work to Marten. It allows you to establish transactional boundaries, and it exposes everything you would possibly need to query a Marten data store. In this particular case, we're going to create an Order object — and notice, I'll come back to this, that at no point do we actually set a value for the Id. As you can probably guess, the Id property is going to end up being the primary key in the database. So we create a session, tell the session "hey, I want to store this Order document," and SaveChanges commits any of the queued-up changes to the database, whether they're updates or inserts. Store here is a generic upsert, meaning it updates the document if it exists or creates a brand new one — there are several other kinds of operations too — and it flushes all the changes, as much as possible, in one command to the database. Below that, we fetch a completely separate session, just to make sure we're not getting any kind of cached copy from the first one, and we try to load the order back by the Id that's been magically assigned, just to prove it's all there. So I'm going to run the test here... okay, we succeeded down here.

A couple of things. Just by convention, if Marten sees a property called Id, it assumes that's the identity for your object. There are ways to override that, but going with all the defaults, working with it the way it wants, it knows that Id is our primary key. And because it's a Guid, if you try to store a document with an empty Guid, it'll quietly make a new Guid for you and assign it to the document before it persists it. For those of you who are a little more familiar with this, it's a sequential Guid, so it saves and loads quite a bit more efficiently in the database.

Right off the bat we've been able to load that order by its Id — we've done no mapping, no configuration. Just to prove that stuff actually exists, let's take a peek in the database. On the fly, we created an order table to store it. There's not a lot going on; most of your document tables are going to look exactly like this: an id column of some kind, a data column that actually has the raw data, and a little bit of metadata — the kinds of things you'd expect, last modified and so on. Let's take a peek, just a second, to show you what the data looks like — ah, the joys of coding live. So there's the JSON body. Not super exciting, but that's a good thing: no drama, no configuration. We added the Marten NuGet reference and created a document store for the connection string. Because I'm using all the defaults, it's running in a development mode, meaning it will create any necessary tables on the fly if they don't already exist in the database. So this is our getting-started story — trying to make it as easy as possible for new people to pop in and start doing things.

Now, to show off where a document database excels, let's change our Order object and see what we have to do. Let's say the order has some kind of address — maybe it's a billing address, who knows. We have just another little type here, and you'll notice there is no Id on it. We don't need one: the Address, when it's persisted as a property of the Order, is going to be persisted directly inside the order document. I don't have to create other tables; I don't have to think about anything else. So coming back into this test, let's go ahead and add — I'm not gonna fill out the whole test, but — just a little bit of data.
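The nested-document change being demoed can be sketched like this — the Address shape and values are illustrative, not the exact demo code:

```csharp
using System;

// A plain child type: no Id, no mapping, no extra table.
// It is serialized inline as part of the Order's JSON document.
public class Address
{
    public string Line1 { get; set; }
    public string City { get; set; }
}

public class Order
{
    public Guid Id { get; set; }
    public Address BillingAddress { get; set; }   // new property: nothing else to change
}

// Usage is exactly the same store/load code as before, e.g.:
//   order.BillingAddress = new Address { Line1 = "Hello from Austin", City = "Austin" };
//   session.Store(order);
//   session.SaveChanges();
```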
So, "Hello from Austin" — and now let's persist the whole shebang. Let's run one quick thing here, just to show what the JSON is gonna look like. All right, we're gonna do the same thing: save the order, except now it has an Address property, load the thing back up, and spit the JSON out into the test output — if you look at the bottom right of the screen... okay, it's already come back, and look, there's our new billing address. Again, no mappings — we had to do no mappings, it just worked.

So let's move on to some of the cooler things about Marten. Most of you, if you have no background with NoSQL, or only a little, are probably wondering why I'm making a big deal in this slide that Marten — and really just Postgres underneath it — is completely ACID-compliant. This is something you've always had in relational databases, something you always expect to have; you can't believe people try to do without it. But a lot of the NoSQL databases — probably especially the early ones — were not ACID-compliant. Most of them had some kind of eventual consistency, where writes happened immediately but you were not necessarily able to immediately read or query over that state. There'd be a little bit of a gap when the reads were not in sync with the state of the database. This was a huge deal for us as we moved, at my previous shop, from RavenDB to Marten — it was something we knew we wanted.

So just to prove this, we find the next database sample. What we're gonna do is use a fat object we use to test Marten, just called Target. It has a lot of fake values, with every type we can possibly think of — Guids, longs, doubles, dates; children and grandchildren — just trying to be big. We're gonna throw a thousand of these into the database as fast as we can, and then, as fast as we can, come back and query to pick out how many of them are green, just to prove we can immediately query against the data we just inserted. Let's run these real quick... and look here — super fast — we get an all-new database session, and if you look at the bottom left, past the spinning ball, we got the green check mark. It just works.

There's nothing you have to do here, and this is actually pretty rare for a NoSQL approach: the ability to immediately query against data that was just inserted, without any kind of eventual consistency catching up on the read state. That's a huge deal. We get all the goodness of a document database and a NoSQL kind of approach, but we didn't have to lose the ACID compliance we get from good old-fashioned relational databases.

Getting into some other areas where a document database shines over the ORM approaches you may be used to today, think about having some kind of object hierarchy. In the case we're going to get into for this example — let me get to where I want to be — we have our Order the way we've had it. This isn't a very fancy example, but maybe we have specific kinds of orders: DomesticOrder and InternationalOrder. Maybe they have different properties, different values, different behavior when you pull them down and work against the persistence.
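Before moving on: the write-then-immediately-query demo from a moment ago can be sketched roughly like this (the Target shape is simplified, and the computed index shown is the one the talk returns to later):

```csharp
using System;
using System.Linq;
using Marten;

public class Target
{
    public Guid Id { get; set; }
    public string Color { get; set; }
}

public static class AcidDemo
{
    public static void Main()
    {
        var store = DocumentStore.For(options =>
        {
            options.Connection(
                "host=localhost;database=demo;username=postgres;password=postgres");
            // Computed index on the Color property inside the JSON
            options.Schema.For<Target>().Index(x => x.Color);
        });

        using (var session = store.LightweightSession())
        {
            for (var i = 0; i < 1000; i++)
                session.Store(new Target { Color = i % 3 == 0 ? "Green" : "Red" });
            session.SaveChanges();
        }

        // A brand-new session: the documents we just wrote are immediately queryable
        using (var query = store.QuerySession())
        {
            var greens = query.Query<Target>().Count(x => x.Color == "Green");
        }
    }
}
```

Again, this is a sketch requiring the Marten package and a live Postgres; the real Target test class in the Marten repo is considerably fatter.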
I want to see all the domestic and heart all the domestic and international orders for a certain part And I want to treat it as just an order, but other times I want to load a specific type of part I want to order order load an international order or domestic order I want to use link queries against either the subtype or the parent type Okay, this is something Martin supports out of the box Not nearly as much effort that you have with an object with an ORM I don't have to think about do I have a separate table per type Do I try to put them all in the same type? Do I have like extension properties in the very sparse table? How do I map that forget all that stuff Martin's going to take care of most of it for us? One thing we do have to do though is we just have to tell Martin For the order class. I want you to also include the Pacific subclasses for domestic order and international order Now coming down into usage It's just showing here I can create a domestic order and put all the properties in I want I can create an international order and We just added Something on the fly Because I think we were making fun of Hottie from JetBrains earlier today on Twitter for something. Let's say it's going to Spain and I can store one I can store two save them both And I come back and I can load them as an order or I can load them specifically as the subclass I Can query any possibly mix match here? Looking at the bottom left just to make sure the test passes and there it does Just a cutesy little thing This last one here So I want to query a domestic order where the customer ID is somebody and All I'm doing is Martin has a little little helper that will allow you to preview what the sequel command is for a link query So I'm using that just to grab the sequel to be generated bottom right just see We're looking over this ugly table name and I am limiting the query to Well, this is Martin ease for I only want domestic orders. 
That's part of the generated SQL WHERE clause. All right, so that's the basics: saving and loading documents, just as JSON. If that were all there was to it, you could probably code this up by yourself pretty fast — except for the LINQ query part. To bust everybody's bubble: the LINQ query part is super work-intensive and really monotonous, and if you like the LINQ provider in whatever database tool you use, give some positive thoughts and thank-yous to whoever wrote it, because it's miserably time-consuming.

So, some of the things we've done to try to make Marten fast on top of that. We have computed indexes. It's perfectly possible — and we were tipped off to this by some Postgres gurus — to define computed indexes inside of our JSON documents. That's actually what we were doing in that earlier example, where we wanted to throw a whole bunch of records in really fast and then query against them. The code that's highlighted there is just telling Marten: hey, I want an index against the Color property of Target. If you're familiar with EF Core and the way it does mappings, or something like AutoMapper, you're used to these kinds of expressions as a strongly-typed way of saying "this property, this type," and so on, and that's what we're doing here. There's a little more power available — do I want this to be unique? Do I want to use the special features and index types in Postgres? — but the simple form is probably what you want.

So when I issue a query down below, looking for rows where the Color property is "Green", Postgres is able to apply that index, just as it would in a relational database when you index a column you frequently search over. All the same tradeoffs as any database index apply: you always have to wonder whether it creates more overhead on inserts than it gives back in value by speeding up reads.

The bigger thing in the speed department, and some of the other things we do: out of the box, our default JSON serializer is Newtonsoft.Json. It's the obvious choice as the default because it's the most battle-tested; it works for everything. Some weird things still leak through — especially oddball things like F# types, or a couple of people wanting to query against NodaTime objects, which gave us some trouble with the JSON serialization — but it's rock solid. If you want to, though, you can swap it out and use Jil as a faster serializer, or some of the newer JSON serializer alternatives. So you can opt in to better performance with a different JSON serializer and still use everything that Marten does.

I actually once had a bit of a conversation with a trainer in Houston — I had sat in on his VB6 classes 20 years ago — and he had a phrase that's stuck with me ever since: network round trips are evil. The fastest way to make your enterprise system really, really slow is to be really chatty, making a lot of little calls to the database for a little bit of information each time. When developing Marten we tried to be very cognizant of that; we've tried to minimize the number of round trips to the database as much as possible. When you're committing a document session, it's trying to send the commands
all in one network round trip, and the same thing with queries. I don't have a sample in this talk, but you can batch up multiple queries — say you need to get at one document type and another — and grab them all at one time. So: try to minimize the number of round trips. Excuse me.

The feature that I'm most proud of, we call compiled queries, and it's faster just to show an example for this one. Take a look at this FindByColor class that I have highlighted here. It's that same query we've been doing, where I find all the Targets whose color is green, and that's what we're expressing with this FindByColor class. It looks a little funky — you're definitely gonna be dependent on ReSharper or CodeRush or something like that to generate the signatures for you from this interface — but let me try to explain why this thing is so cool. Let's take a look at how it's used. Instead of using a LINQ query directly, I'm gonna use the session's Query method — there's an overload that allows you to pass in one of these compiled queries. So I create that object and pass it in, and it does exactly the same thing: this is the equivalent of just writing the LINQ out. So why is this so awesome, even though it looks really funky? The mechanics of doing queries through LINQ are super-duper inefficient.
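A compiled query along the lines of that FindByColor class might look like this — the Target shape is a stand-in, and `ICompiledListQuery` is Marten's interface for list-returning compiled queries:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using Marten.Linq;

public class Target
{
    public Guid Id { get; set; }
    public string Color { get; set; }
}

// The LINQ expression is parsed once; the generated SQL and the
// result mapping are then cached for every subsequent use.
public class FindByColor : ICompiledListQuery<Target>
{
    public string Color { get; set; }   // becomes the SQL parameter

    public Expression<Func<IQueryable<Target>, IEnumerable<Target>>> QueryIs()
    {
        return q => q.Where(x => x.Color == Color);
    }
}

// Usage (equivalent to session.Query<Target>().Where(x => x.Color == "Green")):
//   var greens = session.Query(new FindByColor { Color = "Green" });
```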
Just just thinking about all the things that happen when you issue a link query You're creating a whole bunch of little objects It may not feel like it but you're creating a lot of objects in that little expression where you you do the where clause So when you pass the link query in your link provider has to take all these little objects and This is probably more fun if you could see all the goofy hand motions I'm doing but it's got to walk down this tree and this really excruciatingly low detail Low-level detailed model of what are all the expressions there's an end here and or here what property is It's got a parcel through all that stuff figure out what's going on interpret all that object model then Do a bunch of strength and cat nation come up with a sequel command and then figure out how all the data Combine ends up going to an object model and so on and so forth It's a lot of mechanical work to interpret a link expression and do something useful the compiled query feature in Martin it allows Martin to remember the query plan for a link statement so that the first time in The application when I use this find by query model it's effectively parsing and in interpreting the link expression one time and then it's remembering the sequel statement and how to map from the Parameters on our compiled query into what the raw database Database command is and then also it remembers how to go from the results of the query to returning the object models that you want We found in our testing. 
It's Consistently seen better than an order of magnitude improvement in querying speed if you use a compiled query versus just using raw raw link So we're really proud of it, and that's why it's been so much time talking about it Just talking about some other things that are in there If this is pretty common feature the includes just the idea of if I have one document And it's related to another document type I can actually force Martin to go fetch them in the same network round trip again network round trips are evil We can opt into postgres support for bulk inserts so you can if you get into a case where you need to Jam in a whole lot of documents all at once for some reason You have this bulk insert command that will opt into postgres is more performant mechanism for doing bulk imports of data And finally and especially if you come from the MongoDB world We have the ability in the basically we call the patching API where instead of trying to download the whole document putting it into a Dot net object changing it and sending it back You could just say I want to change the property within the document in the database to this value Actually takes a lot of advantage of JavaScript functionality a JavaScript function running inside of postgres to make that happen Getting close to the end Just some other special features and this will be this will be much better supported into Martin 3.0 whatever that happens so The documents are stored in JSON in the database In some cases Newton soft might have to put some some ugly dot net Serialization stuff in there for type information, but most of the time that JSON is pretty clean it may be perfectly possible to just take the raw JSON and Go grab it because Martin will let's just get the JSON string and immediately toss it down an HP response for Really efficient HP web services You know some of you are saying oh, but you know you're never supposed to put your your object model across the wire Which is true and awfully a lot 
of the time. So along those lines, rather than pulling in your object and using something like AutoMapper to translate it to a totally different view model before serializing it down your HTTP response, what you can do instead is use a JavaScript function that takes the raw JSON of your model and changes it into the representation you want to go out in the response of your web service, and you let Postgres run that JavaScript in the database. So you never really create any objects or do any kind of serialization in your web API. For Marten 3.0 we want to go a little bit farther. Today you have to create a JSON string to pull this off, which still means some object allocations. In Marten 3.0 we're looking at a model where you could just stream the raw bytes from the Postgres world directly into the HTTP response without even having to allocate a string. We're looking for just the absolute fastest possible way to build web services that deliver data straight out of the database. But that is yet to come.

So the last feature I want to talk about, and it is a big one, is shifting over a little bit and looking at how Marten can be used for event sourcing, with its full-blown projection support. Let me back up. Hopefully when you look at Marten's event sourcing it looks pretty familiar, and one of the reasons it should be pretty familiar, if you've used any other event sourcing tool in the .NET world, is that we all pretty well started by reading Greg Young's seminal paper on event sourcing and going from there. So in the case of Marten, just like GetEventStore (Event Store) or several other tools, we have really two concepts: events and streams. And all a stream means is some kind of related group of events.

Moving into the example I have today: I grew up kind of in the middle of nowhere. There wasn't a ton of kids, well, there really wasn't any kids around
to play with, so I naturally became very bookish and read way too many epic fantasy novels, whether it's Lord of the Rings, Wheel of Time, Game of Thrones, hundreds of others. So let's model that. Let's say we're building a system to track the progress of a party carrying out some kind of quest in an epic novel. Maybe it's Frodo and Samwise heading off with all their companions to go throw the ring in the volcano; I'm actually going to use the Wheel of Time, where our heroes leave their little village and go out into the big, big world. So what kinds of things happen to a quest party as they go out in the world? We have events where members show up and join the party. People leave the party: maybe Sam and Frodo take the boat and go off on their own and everybody else stays behind. They arrive at certain destinations. Maybe they slay monsters, whatever it's going to be. With event sourcing, we don't have a master document of what the state of the quest party is; we're tracking all these events: members joined, arrived at a location, when did the quest start, when is it done, who left. The canonical example I see more than anything is invoicing and orders: somebody filled out an invoice, then maybe an item was added, finally the invoice was approved, the invoice was paid, tracking those kinds of events. So in our case, let's start out. We're going to create a new quest, and we're going to record just a couple of events at a time.
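To make the demo that follows easier to follow, here is a rough sketch of capturing quest events with Marten (the event classes, the `Quest` marker type, and the quest details are illustrative; `StartStream`, `Append`, and `SaveChanges` are the real entry points, though their signatures vary across Marten versions):

```csharp
using System;
using Marten;

// Illustrative event types for the quest example.
public class QuestStarted { public string Name { get; set; } }
public class MembersJoined { public int Day { get; set; } public string[] Members { get; set; } }
public class MembersDeparted { public int Day { get; set; } public string[] Members { get; set; } }

// Marker type for the stream.
public class Quest { }

public static class QuestCapture
{
    public static Guid Run(IDocumentStore store)
    {
        var questId = Guid.NewGuid();

        using (var session = store.OpenSession())
        {
            // Start a brand new stream, supplying the stream id ourselves.
            session.Events.StartStream<Quest>(questId,
                new QuestStarted { Name = "The Eye of the World" },
                new MembersJoined { Day = 1, Members = new[] { "Rand", "Mat", "Perrin", "Thom" } },
                new MembersDeparted { Day = 5, Members = new[] { "Thom" } });

            // Events share the same unit of work as document changes.
            session.SaveChanges();
        }

        using (var session = store.OpenSession())
        {
            // Later, append more events to the same stream.
            session.Events.Append(questId,
                new MembersJoined { Day = 7, Members = new[] { "Moiraine", "Lan" } });
            session.SaveChanges();
        }

        return questId;
    }
}
```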
It started on day one; maybe we'd record what day it is. We're going to record the name of our quest and that it started, and we're going to add some members: these guys started out on day one, and then on day five this guy Thom ran off. So I'm going to create the same kind of session from the document store, a document session, because all of this is exposed there; everything hangs off a property called Events. Let me show some of the things it does. (That was more complicated than I was expecting it to be. Okay, never mind, that was scary.) One of the things I can do is say: all right, I'm going to start a brand new stream of events, and I'm going to specify what the stream ID is right here. I can let Marten generate it, but we're going to specifically say it's this particular quest. Then I just tag these events onto it and save my changes. Events are part of the same unit of work, so you can mix and match appending events to a stream with saving documents, and that comes into play in a minute when we talk about projections. Coming down the line, let's say we pick up a couple of new team members: on, say, day seven, we pick up these extra characters, and we can append them to the same stream of events. So we could be tracking multiple quests. We could be tracking the Lord of the Rings, we could be tracking the Wheel of Time, or whatever epic fantasy book you loved as a kid, but keeping straight which party is doing what at what time. So that's tracking the raw data.

And I think where most people get hung up on event sourcing, or at least my initial reaction, is: well, persisting the events sounds pretty easy. Especially in our case, we have one table, and we're persisting an ID, maybe a timestamp, maybe an ordering, and JSON or XML, some kind of serialization of the events. That's great, but how do we work with that data? Because sooner or later we need to see: what is the current state of the system?
Some kind of read-side model or view, as the CQRS folks would call it. So with event sourcing, maybe directly connected to your event sourcing tool, maybe as an add-on, you have this idea of projected views: taking the raw event data, putting it together, and coming up with what the state of the system is. Let me just look at another type called QuestParty. Real simplistically, let's say that what we care about is the name of a quest, its stream identity, and just tracking who's part of our quest party right now. If you ask me right now, who's in the party? That's just the state here, but we derive it by looking at the raw events: members joined, members departed, quest started, all that kind of thing. So at some point we want to calculate, from the events, and build up the state of this thing. The code you see highlighted is not the only way to do this in Marten, but for very simple aggregate projections, one way is to just have an Apply method for every event type that would apply to what you're trying to project out.

Now, timing-wise, this is something I think is a little bit unique to Marten's event sourcing, but I think everybody should get around to supporting it: these projections can be defined with roughly three different timings. I can do it live, which means that if I get into a case where I have lots and lots of event writes, but I very, very rarely try to read what the exact state of the world is, I might decide the best thing to do is run the aggregation completely on the fly when I ask for it, and only then and there try to see what the state is. The example right here, that's what we're showing. I want to reach in, and this ID here is actually defining what my quest ID is. Let me make that a little more clear; let me call it questId.
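A minimal sketch of what such an aggregate projection can look like, assuming illustrative event classes `QuestStarted` (with a `Name`), `MembersJoined`, and `MembersDeparted` (each with a `Members` array); `AggregateStream` is Marten's real live-aggregation call, though its signature varies by version:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One Apply method per event type builds the current state
// from the raw events in the stream.
public class QuestParty
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public List<string> Members { get; set; } = new List<string>();

    public void Apply(QuestStarted e) => Name = e.Name;
    public void Apply(MembersJoined e) => Members.AddRange(e.Members);
    public void Apply(MembersDeparted e) =>
        Members.RemoveAll(m => e.Members.Contains(m));
}

// "Live" aggregation: load the stream's events and fold them on demand.
// var party = session.Events.AggregateStream<QuestParty>(questId);
```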
So if I know what my stream identity is, my quest party ID, I can just ask for it on the fly: go load up all the events, run them through the aggregation, and tell me what you get out. That's what we're going to see up here. We should see our members, Rand, Mat, and Perrin; Thom left, so he should be subtracted; plus Moiraine and Lan. We run the test and, as always, we look at the results. We should see some JSON in the bottom right. Okay, and there are the members, and there's also the name of the quest. So that's doing it live. Again, the recommendation is that maybe you do that when you have lots of writes, so you want to optimize how fast you can write the data into the system. You don't want to be calculating anything, you just want to capture it as fast as you can, you only occasionally read the state, and it's not that big a deal to wait a little bit to create the read side.

The other thing that's maybe a little bit unique to Marten: we can now say, for QuestParty, I want that updated as you capture the events. That's what we call an inline projection. With QuestParty, I'm going to do just a little bit of extra configuration with Marten. I'm going to say I want QuestParty to be aggregated inline, meaning that when we capture any events related to a quest party, I want that QuestParty projection value updated as part of the same unit of work. What that means is that when I commit my unit of work, it's going to apply the new events to the event table, plus the newly updated read-side document, and that is all going to go inside of one database transaction. So at no point do you have any inconsistency between the raw events and the read-side view. Let's just see the test for this. It's not super exciting because it's doing the same thing.
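For reference, the inline registration being described looked roughly like this in the Marten 2.x era (a sketch; the registration API has changed across versions, and `connectionString` and the `QuestParty` aggregate are assumed):

```csharp
using Marten;

// Register QuestParty as an *inline* projection: the read-side document
// is updated in the same database transaction as the appended events.
var store = DocumentStore.For(opts =>
{
    opts.Connection(connectionString);
    opts.Events.InlineProjections.AggregateStreamsWith<QuestParty>();
});
```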
It's just a little bit different timing. Okay, so some of the purists in the audience are probably going to get a little bit freaked out about this; some people believe this is a really bad idea. But what we've found with Marten users is that if you have an aggregate where you feel there is very little chance of multiple sessions hitting the same aggregated stream at the same time, you may prefer to go about it this way and just update your projected values per stream completely inline, so you never have to worry about any kind of eventual consistency problems.

But that gets us right to the third approach, and this is probably the most common way that people do projections in event sourcing: you have some kind of background process that's reading through all the events coming in, trying to keep up and updating the projections as it can. So commit by commit, you commit the events and they're persisted in the database; the background process is listening for all the events coming in, and a little bit later it catches up. You've got an eventual consistency thing going on.
So this is the only place where Marten isn't purely ACID: it's going to get around to updating the read-side model for you, with the price of eventual consistency and the potential for mistakes from showing data before something's been processed. But if you're doing a lot of aggregation across streams, or aggregation where it's very likely you're going to get session contention on the projected values, which is pretty easy to do, this is the approach you want. Let's say that in our quest party system we want a completely separate view of how many monsters have been slain at any one time. In that case, every event across every possible stream that involves slaying a monster would have to fetch that one system-wide aggregate document, change it, and commit it, so you would have a tremendous number of collisions. What we can do instead is switch to an asynchronous model for our projections, for just that kind of view. Let me make sure I have an example of that. So we have a feature in Marten we call the async daemon; that's a little bit of Linux lingo on my part, calling it a daemon. I can throw a bunch of events in, but I can create this long-running daemon.
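A sketch of what wiring that up could look like, going from memory of the 2.x-era API (later Marten versions reworked the async daemon substantially; `connectionString` and the `QuestParty` aggregate are assumed):

```csharp
using Marten;

// Register the projection for background processing instead of inline.
var store = DocumentStore.For(opts =>
{
    opts.Connection(connectionString);
    opts.Events.AsyncProjections.AggregateStreamsWith<QuestParty>();
});

// The async daemon polls the event table and applies new events,
// in order, to the registered asynchronous projections.
var daemon = store.BuildProjectionDaemon();
daemon.StartAll();
```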
It just sits in memory. I'm heavily utilizing TPL Dataflow to keep it consistent; there's a lot more complexity in here than it sounds like. It's constantly polling the database to see if there are new events it cares about, events that are referenced in some kind of projection that's part of the async daemon, and it makes sure it hangs around and applies everything in the exact same order. So it's sitting there, running in the background, updating these aggregated documents. Performance-wise, it might give you a big boost if you're frequently updating the same aggregate document: beyond the normal unit-of-work mechanics, it's able to hold on to the projected data views in memory and update them in memory as it goes. But this is really the only way you can go about doing a projection when you have a lot of contention.

I should say, I didn't get into this enough, but the projected documents, and this is the reason why I think it actually made sense to add an event sourcing feature right onto Marten: the projected read-side documents are just Marten documents. You can query them, you can load them by ID, you can query them with any part of the LINQ provider. You can do anything you want. We think that the easy integration, in one system, of read-side projected documents with the raw event data being captured is a huge feature and a win for Marten versus other solutions, where you may have to piece together a raw event store, which may be saving its data to a completely different data store, with some kind of unrelated projection tool putting your read side somewhere completely different. Not everybody's going to agree with that, but for many of our users on Marten who enjoy the event sourcing support, that's one of the big features and key wins of Marten for them.

And with that... I'm not exactly sure of the timing, because we got started maybe a minute late. Where are we? Right here? Yeah, we're right on time.
Jeremy, you know, I figured since we started late and it sort of got pushed around, it's all good from here. Thank you, Jeremy, for taking the time to talk to us at .NET Conf. There are some questions out on the Twitch channel. To keep on par with the schedule, I'm going to ask if you can jump over there; we can answer those questions, and you and I can tag-team them. We have our next speakers here and we're going to get ready to roll. So we're going to jump into a quick commercial, and we'll be ready with more .NET Conf. Again, Jeremy, thank you so much.