So we are going to do a Q&A panel with Paula and Eric, and we will start with an icebreaker question for Paula. I've seen that, in celebration of Clojure getting the clojure.math namespace, you've ported it over to ClojureScript, which is super awesome because we love to see more ClojureScript stuff. You said that you unfortunately ran into the problem that JavaScript's math isn't as capable as Java's, though, so you've had to re-implement a lot. What was the toughest part of doing this?

The IEEE remainder function. There's a function for dividing one 64-bit double by another and then finding the remainder as accurately as possible, and that was really tough to get right. The code was available in C in the JDK, because it's released as GPL. Anyway, I ported it into JavaScript, and once I had it working right there, that helped me get it right in ClojureScript. There are a lot of bit operations, and there are expectations of having access to different word sizes and of being able to cast pointer types. This is what the C was doing internally, and the way I had to make that work in ClojureScript was really quite difficult. But yeah, I've got it working.

Okay, let me do the icebreaker for Eric as well. We want to ask: what is your favorite dish to cook, and why is it not cookies?

Because I would eat too many cookies; that's the answer to the second question. What do I like to cook? I like to cook gumbo. I really like gumbo. It's fun, it's a nice process.

And what are the ingredients for that?

Well, first you start with a roux, which is flour and an oil, and you toast the flour in that. Gumbos have a pretty dark roux; if you're used to making a roux from French cooking, it's much darker. And so it has this exciting aspect of testing your cardiovascular health, because you can burn it, and you want to take it right up to the point before you burn it.
So it's kind of a game of chicken with your flour. Then you can put in whatever you want, but I tend to put in okra. Gumbo's one of those dishes where you empty your fridge of all the little ends and bits left over from other things and just throw them into a big soup.

Well, hopefully next time in New Orleans you will cook some for me.

I'm holding you to that.

So we have a hand raised. Davis?

Yeah, I asked this on the Discord, but then I was reminded that you all are enjoying questions being in person. Eric, I was thinking recently about causal loop diagrams. I don't know if you've run across those.

I just clicked on the link that you shared. I have not come across them, so I'm going to check them out. These are kinds of systems-analysis, feedback-loop kind of things?

Exactly, yeah. And mostly I've seen them as ways of exploring physics stuff, but it just recently occurred to me that they could be really interesting as a way of modeling business domains.

I'll check them out. Thank you so much.

Okay, so let's get Paula back in here. Ray on the Discord asked: Paula, can you tell us something about your graph database, Asami? What was the motivation behind it, and what sorts of problems is it a good fit for? I would like to add: how long have you been working on it? And tell us a little bit more about the name.

There are a lot of aspects to that question. I had been working on a previous RDF database called Mulgara for over 10 years. It was written in Java, and I lost interest over time because it was written in Java and I was into Clojure by that point. I had an idea of re-implementing it in Clojure but hadn't really gotten to it. One element that I'd written for Mulgara was a rules engine, which does bottom-up evaluation for Datalog, and I had this idea that I could redo that for any graph database if I just hid the database behind a protocol.
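The bottom-up evaluation Paula mentions works by repeatedly applying the rules to the known facts until no new facts can be derived (a fixpoint). A minimal sketch in Python, using made-up facts and a hand-coded ancestor rule rather than any real rules-engine API:

```python
# Naive bottom-up (fixpoint) Datalog evaluation over a tiny fact set.
# The rules are applied to all known facts until nothing new appears.

facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
    ("parent", "carol", "dave"),
}

def step(facts):
    """Apply both 'ancestor' rules once; return only the newly derived facts."""
    new = set()
    # Rule 1: ancestor(X, Y) :- parent(X, Y)
    for rel, x, y in facts:
        if rel == "parent":
            new.add(("ancestor", x, y))
    # Rule 2: ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)
    for rel1, x, y in facts:
        if rel1 != "parent":
            continue
        for rel2, y2, z in facts:
            if rel2 == "ancestor" and y2 == y:
                new.add(("ancestor", x, z))
    return new - facts

def solve(facts):
    """Iterate to a fixpoint: stop when a pass derives no new facts."""
    facts = set(facts)
    while True:
        new = step(facts)
        if not new:
            return facts
        facts |= new

derived = solve(facts)
print(("ancestor", "alice", "dave") in derived)  # True
```

Because evaluation only ever combines facts already in the set, it terminates once the (finite) closure is computed, which is what distinguishes this from Prolog-style open-ended search.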
So I tried this with Datomic and showed my boss at work, who said, you should try this at work, because we need to do some inferencing. But he didn't like the idea of working with a commercial system. He said I could take the rules engine, which was open source, but I needed to talk to an open source graph database. We weren't aware of DataScript, and so I started doing something in memory, and that became Asami. Then I kept adding features to it in my own time, including the storage layer, which I did last year during the pandemic, and it grew a lot from that. So it's like Mulgara, but it's entirely in Clojure, and it's taken on a lot of the new indexing ideas that I've had in the last 20 years.

As for the name: we had a number of projects at Cisco where my manager was really into Avatar: The Last Airbender and the sequel, The Legend of Korra, and so a number of projects were named after characters in those shows. The character Asami was Asami Sato. While some of the other characters had superpowers with the bending, she didn't have anything like that, but she knew technology and she would build things for herself so that she could join in with her friends. I liked that idea, because I didn't have a graph database, so I built one myself.

Yeah, that's great. That's one of my favorite fun facts about Asami. Let's go back over to Renzo to hear more from Eric.

Yeah, so I have a question for Eric. Do you think that the traditional roles found in most agile software companies are equipped to handle the domain-driven model that you are advocating for? And could we need to add additional supportive roles between product managers and technical leaders?

Wow. I have not thought much about how this fits in with the sort of black-box ticket machine that we've created with the current agile system. I'm not a big fan of the way I see agile working, so I'll just put it that way.
What I see is that business people, project owners, product owners, whatever they're called, just add tickets, and then the programmers just do the tickets. There's an asymmetry to it: the programmer is working on a ticket far longer than it took to write, so it's always easy for the backlog to get longer than you can actually finish. And so the programmers never really have time to get a global view of the tickets coming in, like: what model are we actually making? So I don't know if it would fit into agile at all to say, hey, let's take a break and do some design work and some sketches, and maybe all these tickets don't make sense anymore.

So I will elaborate, Eric, because that question was actually from me.

Oh, okay.

So I think that traditionally we have the product people, and they know all the product stuff and they don't care or know about the code. Then we have all the code people, and they know all the code stuff, and that's all they care about. But not a lot of companies have the glue holding them together. A lot of times, software engineers or customer success people will try to fill those roles, but I think your domain-driven model kind of advocates for an additional role in a company: someone who understands technical but isn't leading technical, and understands product but isn't a product manager, and who can help those two.

So, I mean, this is something that I don't think I'm prepared to discuss fully, but there is a great book called The Inmates Are Running the Asylum, and it's about the problem of just having these two roles, business and dev. It argues for a third design role, because the business people are really trying to maximize profits, right?
So: more income, less cost. And the programmers are kind of trying to make their job easier, you know, do the job but do as little as possible. There's no one advocating for the user, really, and the user is the person that the model has to serve ultimately. It does help the programmers, too: if you have a good model, a model that actually serves the users, it's easier to write code that serves the users. If your model is different from what the users actually need, then you're going to have to write all this complicated code to adapt it all the time, and that's what's going to lead to all this complexity. So I think that's a good direction: having this third role who's advocating for the user, and then that lets you have some sort of team, maybe a special team of three stakeholders, who can come together and talk about the model outside of the code.

I think that's another problem in agile: the programmers are asked to do both the low level, let's make these tickets happen, and the high level, evaluate these tickets and understand the context, and to switch between those really quickly. You're going from this engineering mode, just get it to work, whatever it takes, and then you have to switch to: wait, maybe we don't want to do this, what's the context? That is very hard for your brain to do. And I've seen it happen in companies where a programmer will be coding right up to the wire, and now we have a meeting to evaluate these tickets. Your brain needs an hour at least; maybe you even need a week if you've been coding a lot. So I find that's another problem: there's a human aspect, a psychological aspect, missing from agile.

Yeah, the cost of context switching is certainly a cost we underestimate. Okay, going back to Paula, we have a question from Jacob on Discord.
He says: comparing the performance of retrieving a whole data entity, for example a person, from an RDF database versus an RDBMS, is the former significantly slower because it has a join on each attribute, or is it just as fast because all the attributes are stored next to each other in the EAV index?

It's a little bit slower, but in general your bottleneck is in your storage, and these days that's as likely to be in the cloud as it is local. The work to bring those things together into a local entity is trivial compared to waiting for the data to come off disk. If you have a lot of things cached, yeah, there's a small performance cost there, but generally when you retrieve this data it'll usually be co-located; you can bring it in, restructure it, and return it. If it's in memory it'll be a little bit slower, and if you run it through Criterium you'll see that difference. But in general you're going to see records come in and be restructured as an entity at a similar pace; it's the tertiary storage which is the real cost, and the locality is normally all right when it comes to that tertiary storage.

So we can perhaps take the next question, from Philippe.

Hi. This is a question for Eric. Hi, Eric, great talk.

Thank you.

So I want to ask a bit about the process, because what you've described is obviously a process by which you come to certain results. But a big part of why you end up needing this process is the fact that there is a disconnect: the people that have the model maybe aren't talking all the time with the people that don't have the model in their heads. Even in the example you gave, it's a multi-step process by which at some point someone realized, maybe we should look at the model. So my question is: in what ways have you had success in actually putting down this model, collaborating with other people on it, and iterating on it at the times when it's actually important to change and update it?

So these are all really good questions, and they're a little bit outside the scope of what I had in mind. Now I'm thinking maybe I need to talk about this, but I'm also thinking maybe I focused too much on process and not enough on the underlying groundwork: what you can do at a domain-model level, at an abstract level, before you get into code. So thank you, this is really good information. But I do want to answer your question about what successes I've had.

There are successes. When I'm working on my own, definitely, I will realize, oh, this model needs to be different, and then I go back to a more abstract level and I can work there. Of course, I'm working on my own project, so I don't have to answer to anybody. In terms of a team, I've seen it work in the small. Maybe you realize that a small part of your domain could be different. For instance, maybe you're doing some kind of user interaction in a GUI and you realize, oh, if we just kept a log of these instead of mutating stuff in place, that would make all these things we have to do easier. That's a small place, and the GUI code changes so frequently anyway that no one's going to be bothered if you change a bunch of callbacks to do something different, right?

In the large, I think one of the jobs I need to do is to make the argument for how valuable this can be. It can speed up your development, certainly, but software is basically running businesses now; how the business works is in the code. I think it can give you a real business advantage apart from just the speed of development.
I think it can give the business, the whole company, a clearer view of the system that they're working with. I like to use the example of double-entry bookkeeping. Can you imagine a bank with a different model that's just flailing all the time because they're not keeping books that well? And then the bank that comes up with double-entry bookkeeping is beating them, and it's simply a change in model: the whole business is based on a different model. It happens to be encoded in their information system, but that's happening more and more. I think I need to find some real-world examples where someone with a better model actually succeeded in competition because the model was different. It wasn't just, oh, we were swamped by technical debt; it's more that the model made their business better. So thank you so much, Philippe. I hope that answers it, and we can talk offline, we can talk in the chat, if you want to go deeper.

Sure, let's do that. Thanks again.

Thank you, Philippe. Absolutely. Jordan, any more questions for Paula?

Yes. So I'm sure there are many people that are looking for opportunities to contribute, and I'd like to give you some space to suggest next steps to people: where they can learn more, perhaps previous talks you've given, good starting points for people to learn more about Asami and what you're doing.

I've written a lot on the Asami wiki. I've tried to put a lot of effort into describing each of the different elements, how to go about using them, and what the structures look like. There are even a number of pages on what the architecture is and how things interact. It's not perfect, but there are only so many hours in the day. Another place to go is Datomic. They have extensive documentation; it's a very fast-moving project with a lot of people working on it.
So I have seen the docs get out of sync with the project on occasion, but they're generally quite good. So those are two really good sources. Well, I like to think that the Asami source is a good one too; I put pictures in there and things. But while Datomic doesn't present itself as a graph database, and it's not particularly strong in that way compared to some which are out there, it is a graph database and you can use it as such. So in general, looking for graph database resources can help, but I think working through the Datomic documentation is probably your best bet, and once you've learned about graph databases, you can build on top of that. Is there a second part to that question?

I think you answered it. Sorry. Thank you.

So, let's see. We are approaching the end of the slot, but we should have time for another couple of questions, I believe. So for Eric, we have, well, I can't pronounce that: Kingsnooks, Kingsnooks, something King. Eric, is there a standard timeline, for example two weeks to a month, during which modeling must mature once a project's requirements are completely understood? That's the first part. And then the second part: Eric, would you subscribe, or to what degree do you subscribe, to the idea that a design be centered around areas of volatility?

Thank you. So the first one again, Eric: what is the period during which modeling must mature once a project's requirements are completely understood?

Right. Again, this is a process question, and now I'm very much regretting talking about process. It's not that I don't want to talk about process; the questions are fine. It's that I want to talk about stuff like the sizes example: that's an alternative, you choose one of these three sizes, and that is an abstract thing we can work with before we jump into code, right? I want to catalog all these choices, these possible ways of modeling different things, and then also talk about how those can be implemented in code.
So, Clojure has these seven ways to implement alternatives, and Java has these three ways; I want to do stuff like that. It's much more about the Lego bricks of domains, of domain models: how to analyze your domain to find what those things might be, and then how to translate that into a language. The idea of process, like, oh, have a three-week sprint and then write down all these things on sticky notes, I don't think I'm that good at that, so I don't want to touch it.

The thing about whether to design around areas of volatility: I think that's very important, and a lot of architects, software architecture experts, talk about that. One of the key insights of David Parnas in talking about modularity is that you want your modules to encapsulate volatility. An example of this: the wrong way to do it is to break up a house thinking about the areas of functionality. You think of stuff like, well, there's the kitchen, that's where you make your food, there's the bathroom, there's the bedroom. You think of those as the modules, and then you connect them together with doors, right? But really, when you look at your house, the modular stuff is different. If you look at the volatility, you say, oh, wait, we need to be able to plug in different appliances at different times, right? You don't want to build them in. I mean, they used to do this, and they've learned it's a bad idea: building your vacuum cleaner system into the house. No, you want a modular vacuum that has a plug that's also modular, you know, a standard interface that allows any module to be plugged into the house. And then you have your plumbing, right? And that's a different module.
You want to be able to put in different fixtures without ripping down the whole house and changing all the walls, you know, the plumbing in the walls. So look at stuff in terms of what changes. Your furniture is very modular: you can change out your furniture, you don't have to touch the house, you don't have to rebuild anything. So looking at the volatility, I think, is one of the first things you should do. And that's kind of just architectural advice; it doesn't speak so much about the domain model, the model of the house.

I brought up volatility mostly because when you're implementing, you have to make this choice. There are different ways to implement alternatives: you can make an enum, you can subclass an interface, or you can make them values, like strings, that you can put in a database. All of them have their uses; there's not one that's the right way to do it. But how do you decide between them? It's all about volatility. And I think that's something that's just not discussed that much. There's a tendency to say, well, let's just subclass everything, and an enum, oh, that's just for simple things, simple strings that you use instead of constants. I think that's the wrong way to look at it. You should look at it much more like this: you have to change an enum if you want to add or remove something or change anything; you actually have to go open that code and change it. Whereas a subclass is open: you can just add a subclass. And then the runtime stuff, well, you just add a new row in the database; you don't even have to touch your code. So yeah, that's the answer to that.

Okay, great. So we are getting close to our break before our next session here, so we have one last question for Paula, from Ray on Discord. Ray says: when I see Datalog, I think of Prolog and logic programming. Is there some synergy between Datomic and core.logic?
Can the two be used together to solve more interesting problems than just data retrieval?

I haven't done a lot of work with core.logic. I do know that David Nolen put some effort into connecting it to assertions in a database, as opposed to simply being in memory. I'm not sure that would be the best approach to handle a lot of scaling issues. The sorts of rules which I was describing, that Datomic can use, I think will be much more performant, but they're harder to apply, I think. And I mean, that connection to logic programming and Prolog is quite real: Datalog is specifically designed to be like Prolog over a corpus of data, as opposed to a space which needs to be searched all the time. There have been numerous attempts in the last 40 years or so to integrate the two. Certain things have been done in top-down logic programming, where solutions are searched for, the way that Prolog does, the way that systems like miniKanren or core.logic do; these can solve certain sorts of problems which Datalog can't really approach. However, Datalog has restrictions on it which let it work really, really well with databases, graph databases especially, I think. So while there's similarity there, there are also things which one can do that the other doesn't really have a good handle on. Personally, my experience has been that logic programming hasn't required the scalability that databases provide us. And when I say logic programming, I mean the complex stuff that core.logic and Prolog are really strong at doing; I haven't seen them applied to really large corpuses of data to great effect, while if you can restrict your logic language down to what can work in Datalog, that works against databases exceptionally well.
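The restriction Paula describes pays off because a conjunctive Datalog-style query against stored facts reduces to a sequence of joins over tuples, with bounded work and no open-ended search. A minimal sketch in Python, over a hypothetical in-memory EAV fact set (illustrative only, not Asami's or Datomic's actual implementation):

```python
# A Datalog-style conjunctive query as joins over EAV facts.
# Strings starting with '?' in a pattern are variables.

eav = {
    (1, "name", "Asami Sato"),
    (1, "skill", "engineering"),
    (2, "name", "Korra"),
    (2, "skill", "waterbending"),
}

def match(pattern, fact, binding):
    """Unify one pattern with one fact under existing variable bindings."""
    b = dict(binding)
    for p, v in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in b and b[p] != v:
                return None  # conflicts with an earlier binding
            b[p] = v
        elif p != v:
            return None      # constant doesn't match
    return b

def query(patterns, facts):
    """Join the patterns left to right, accumulating variable bindings."""
    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings for f in facts
                    if (b2 := match(pattern, f, b)) is not None]
    return bindings

# Who has the 'engineering' skill?
result = query([("?e", "skill", "engineering"), ("?e", "name", "?n")], eav)
print([b["?n"] for b in result])  # ['Asami Sato']
```

Each clause only ever scans and filters existing facts, so the work is bounded by the data size; a real store replaces the linear scans with index lookups, which is why this shape of query scales to databases where Prolog-style search does not.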
There haven't been a lot, well, there are a few cases, but there haven't been a lot of instances where you want that processing power of, say, Prolog or core.logic applied to massively scalable data. And the fact that those two don't often need to intersect has been fortuitous. So I know core.logic, sorry, not Datalog, can be put over the front of a database; David Nolen has done it, but I haven't seen it being picked up much.

Okay, great, thank you, Paula. So we are at the end of the Q&A panel, and we would like to thank Paula and Eric for speaking today. If you have any more questions, Paula and Eric are both very active in the community; please reach out to them on Slack or email, and I'm sure they'd be happy to answer any questions you have.