Hi everybody, my name is Paesong, and I'm not sure how many of you came here for domain-driven design. Actually, I just chose the title without thinking too much about it, so I'm not an expert. But after reading a few articles about domain-driven design, I think there's something worth sharing: even if we do not apply domain-driven design formally, we are actually using it every day, so we can think about what we can learn from it and which of its concepts we can apply to our own projects.

First, about myself: I have been developing software for more than 10 years, server side for 10-plus years, iOS for three years, and one year of web development. Right now I'm working on my own project, called REST Studio; I'll give you an introduction to it later.

Even if we don't know exactly what domain-driven design is, we can guess from the name that it's about maintaining the model and data integrity. In a way, when we talk about software, we are talking about input and output, right? Everything is data. Let's look at the Wikipedia definition of DDD: an approach to software development for complex needs that connects the implementation to an evolving model. "Evolving model" means the model can change over time, and I'll come back to "complex needs" later.

Here are some basic concepts of DDD. First, context. Everyone knows we have context, even in our programming languages: it's the setting in which a word or statement appears that determines its meaning. For example, a "client" object means different things in a billing system and an authentication system. In authentication you just need a username and password, but in a billing system you also have to provide an address and other details. So if a developer from the billing team talks to a developer from the authentication team, they can get confused, because they are working in different contexts.
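To illustrate that point about contexts, here is a minimal Python sketch (the class and field names are my own assumptions, not from the talk): the same word "client" gets a separate type in each bounded context, so the billing team and the authentication team never confuse the two.

```python
from dataclasses import dataclass


# Authentication context: a "client" only needs credentials.
@dataclass
class AuthClient:
    username: str
    password_hash: str


# Billing context: a "client" also needs billing details.
@dataclass
class BillingClient:
    username: str
    billing_address: str


# The same real-world person appears as a different model per context.
auth = AuthClient(username="alice", password_hash="x1f9...")
billing = BillingClient(username="alice", billing_address="1 Main St")
```

Keeping one class per context, rather than one shared "Client" with optional fields, is exactly what keeps the two teams from stepping on each other.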
Next is domain. Everyone knows a domain is a different area of the project, what it is in the real world, for example a financial domain or an HR domain.

Next, model: a system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain. For us developers a model usually means a schema, but for a domain expert it means the real-world object. For simplicity, when I talk about a model today, I mean the schema.

Next is ubiquitous language: a language structured around the domain model and used by all team members to connect all the activities of the team with the software. I think everyone knows UML; within a development team you can also use something like Protobuf, JSON, or XML. Any of these works as a language, as long as everyone on the team understands it.

Here's an example of contexts: we have a payment system, a merchant service, and an accounting service, and in each context we can find objects with the same name that actually mean different things.

The next thing is the DDD building blocks. This is not the full list; I only wrote down four of them. The first is the entity: an entity has an ID, and you can fetch the entity by its identity. For example, a user can be fetched by user ID. A value object, by contrast, does not have an ID: a geolocation has a latitude and a longitude, but it has no ID, and you don't care about one either. You never fetch a location by ID; it's usually nested inside an entity. Next, the repository. I admit I do not fully understand repositories yet, but from what I understand it's a collection-like access point for one type of entity. If you program with Spring Data in Java, you will find there is a layer called the repository, and each repository works with one entity type. Last is the aggregate: an object that can contain different types of entities and value objects.
On the right side you can see how this maps to our development world: we have the database, the data access object (which we can use as a repository), the service layer, and the controller. Aggregation usually happens at the service or controller level; I believe most of you have already done this.

Here's an example: we have an order delivery, which contains a user, an address, and the ordered items. The order delivery is the aggregate. For the order items and the address we do not care about IDs, so they are simple value objects. The user can be fetched from the user repository (or user service, whatever) by user ID, so the user is an entity. This terminology can be used in developer communication, even though usually we don't.

A few more words about the model. Between the domain expert and the system designer, the mission is to describe accurately how the model works in the real world. But among system designers and developers, the mission is to implement a robust system that keeps data integrity. So when we talk about the model, we are really talking about data integrity.
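To make those four building blocks concrete, here is a minimal Python sketch under assumed names (User, Address, OrderItem, OrderDelivery are mine, not from the slides): the entity has an ID and is fetched through a repository, the value objects have no identity, and the aggregate composes them.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class User:                      # entity: has an ID, fetched by that ID
    user_id: int
    name: str


@dataclass(frozen=True)
class Address:                   # value object: no ID, compared by value
    street: str
    city: str


@dataclass(frozen=True)
class OrderItem:                 # value object nested inside the aggregate
    sku: str
    quantity: int


class UserRepository:            # repository: collection-like access to one entity type
    def __init__(self) -> None:
        self._users: Dict[int, User] = {}

    def save(self, user: User) -> None:
        self._users[user.user_id] = user

    def find_by_id(self, user_id: int) -> User:
        return self._users[user_id]


@dataclass
class OrderDelivery:             # aggregate: combines an entity and value objects
    user: User
    address: Address
    items: List[OrderItem]


repo = UserRepository()
repo.save(User(user_id=1, name="alice"))
delivery = OrderDelivery(
    user=repo.find_by_id(1),
    address=Address(street="1 Main St", city="Taipei"),
    items=[OrderItem(sku="book-42", quantity=2)],
)
```

Note the `frozen=True` on the value objects: because they have no identity, two addresses with the same fields are simply equal, which is exactly the entity/value-object distinction from the talk.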
A DDD implementation always starts with modeling the domain first. Then you design the architecture of your API or SDK, then you move to development and refactoring, which is the implementation. Once you have your model and your functions, you do the unit testing and the integration testing, and when different subsystems are involved you need continuous integration as well. Once everything passes integration, you can package your tested domain model and deploy it for other subsystems to reference. When new requirements come in, you start redesigning your model again. So it always starts with the model, and the model must be tested and provided for the other subsystems to use. That is how it usually works.

However, back to the first slide: DDD is for complex needs, because it's very expensive. It's not for everyone. If you want to keep DDD clean, you have to isolate a lot of domains and models; even Microsoft does not recommend DDD for everyone, only for complex needs.

So do we still need DDD? From what I've just talked about, DDD has very useful guiding concepts for us, like data integrity and the workflow, and I believe we are actually following them already. When we develop a REST API over HTTP using JSON, you will notice that JSON has no schema by itself. So if you want to keep your model and data integrity, how do you validate your JSON data? That's why we need JSON Schema: we need reliable JSON, and we need a universal, language-independent description language for JSON validation. A Java-only, annotation-based library is not enough; we need something every developer can understand. JSON Schema was developed to validate JSON input, and the schema itself is a JSON object. Here's an example. What does this structure tell us? The schema says we have a
JSON object titled "User". It is an object, not a string or a number, and in this object we need a username and a userId; if either one is missing, the data is invalid and we reject it. It has properties: the username is a string with a minimum length of six, and the userId is an integer whose format is int64, a big int. With this schema we can validate data.

On the left side is the schema, and on the right you can see the results: one document is invalid because the username has only three characters; another is invalid because the last field is not an integer and the format is wrong; and a third one is valid even though it's missing the last field, because that field is not declared as required. So on the server side, when you receive JSON data and you have this schema defined, you can validate it.

Furthermore, when we have a schema, we can describe our data more accurately and more easily. On the left side, without a schema or model, you describe your API as returning "users", and every object in this list should have these three properties. On the right side, because we have modeled our data, we know we have a User with exactly this schema, so we can say this API returns a list of Users, and "user" points to that model.

So what should we do about validation? DDD always emphasizes data integrity. On the client side, you should always validate the request and the response, even if you think it's optional or there is a performance cost, because the data is the most important thing. On the server side you simply must do it, even if you don't want to, because it is dangerous to assume that request data conforms to the required schema: dangerous data can be passed straight into the database without anyone knowing. And I'm not talking about a single system: when you have more systems,
you have more databases, and you just don't know which one will break. If bad data passes from one system to another, you cannot fix it at all. It's too risky, so you must validate the request data.

Even with schema validation, there are other validation rules, like conditional logic: if the user is an admin, the payload must contain certain properties; if the user has been suspended, it must contain something else. A plain schema cannot help you there. The only thing you can do is split your design: split the conditional schema into multiple APIs, for example one schema and endpoint for admins and other endpoints for other types of users. This is just a suggestion; I don't think it applies to all APIs.

Okay, so let's talk about microservices and how we can use DDD with them. What is a microservice? A microservice is a piece of application functionality factored out into its own code base, speaking to other microservices over a standard protocol. A microservice logically has its own code base, team, and database. There is no way one microservice can access another team's database directly; you can only talk to it through an interface or an API call. The standard protocol could be JSON over HTTP, could be IPC, could be anything you want, as long as it works.

Here's an example I downloaded from a website: you have the mobile app and the browser, an API gateway, and a website; the two consumers consume APIs from the microservices, each of which is split into its own database and code base. In reality this can be more complicated, because the services can also call each other's APIs; this is a simple one. And you can see the advantages here. So, what are the problems?
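Going back to the "User" schema from a few slides ago, here is a minimal hand-rolled Python sketch of that validation. In a real service you would use a proper JSON Schema library rather than this toy checker, and the exact schema shown on the slide is assumed here from the description.

```python
# The schema as described on the slide (assumed reconstruction).
USER_SCHEMA = {
    "title": "User",
    "type": "object",
    "required": ["username", "userId"],
    "properties": {
        "username": {"type": "string", "minLength": 6},
        "userId": {"type": "integer", "format": "int64"},
    },
}


def validate_user(data: dict) -> bool:
    """Toy validator covering only required, type, and minLength."""
    props = USER_SCHEMA["properties"]
    # Reject if any required field is missing.
    for field in USER_SCHEMA["required"]:
        if field not in data:
            return False
    # username must be a string of at least minLength characters.
    name = data.get("username")
    if not isinstance(name, str) or len(name) < props["username"]["minLength"]:
        return False
    # userId must be an integer (bool is a subclass of int, so exclude it).
    uid = data.get("userId")
    if isinstance(uid, bool) or not isinstance(uid, int):
        return False
    return True
```

This mirrors the slide's outcomes: a three-character username fails, a non-integer userId fails, and a payload missing a required field fails.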
With microservices, unlike before when you had one big code base, a change to an object is not detected immediately. In a single code base, if you use something like Java (not JavaScript, of course), the compiler will tell you the model has changed and your API no longer works. But here you have split into different modules, so when another module changes the model, you have no idea it has changed: the data comes in as an old version, you cannot deserialize it into your object anymore, and the system is broken.

I copied this picture from another talk. You can see we have different systems: A, B, C, D, E, F. If E is not functioning, it breaks model integrity for C and F. The messages from E to A are actually broken, but A cannot do anything about it. On the bottom, "AC" means anti-corruption: some of the systems implement an anti-corruption layer, which converts from the upstream format into their own format. This is an actual pattern, and if you implement it, it can help you solve some of these problems.

Here are a few strategies we can use. The first is the shared kernel. When we talk about a client in the accounting service and, say, the billing service, they share something, like the username and the user ID. If you want to separate the objects completely, you have to save duplicated data in different databases. That can work sometimes, but in most cases we'd like to have an upstream model: when you create the sub-model, you validate against the upstream model first and always conform to it; then you can extend the object, or use another mechanism to extend it, to implement your own logic. That is one design strategy.

Another is what I just mentioned: for each subsystem you create an anti-corruption layer. You add an adapter that accepts objects conforming to the upstream schema but converts them into your own format. Why do you
do this? Because it makes unit testing easy. If anyone changes the data schema, they have to provide testable data samples for the unit tests; you use that data to test your own format, your own adapter, and you will find problems immediately. This is also why continuous integration is important: you never know when or how a subsystem will suddenly give you the wrong data format, so for each change, please run the continuous integration.

Next: we have talked a lot about schemas, and now we have more APIs and more systems. So how do I find the schemas? How do I know which schema I should test against, and which system has changed? I think most developers prefer code first: they always code first, then something goes wrong, and they come back and code again. But I don't think that's the best way. You should actually come up with your spec first. Why? Because once you have the spec and the interface, everyone knows about your changes; they can validate against your spec first and give feedback immediately. This enables fast iteration. For example, I have a friend whose colleague is in the USA while he works here; every time he finds something wrong with their API, it takes two or three days just for one round trip. We need fast iteration, especially for microservices.

And when we have the spec, it's a two-way agreement: the producer and the consumer both know a change has happened, and both agree that this is the spec, this is the API we should follow. My job is to follow the spec, not to call or email you every day. One strategy for this is to provide mock data, because when your mock data is well designed, you actually cannot tell whether it's real data or mock data, right? When you mock the request and response data to validate the data structure, you can avoid possible
errors. Also, the API consumers can find problems easily, because they usually prefer to plug the data into the UI; sometimes you have to look at the UI to notice that a photo or something else is missing. To do this, after mocking you have to come up with a spec, which is the schema plus the documentation. Once both the consumer and the producer agree on it, we can start to code, because now we know that if my code works against the spec, it's not my problem anymore. The spec becomes the standard for evaluating each side's responsibility; it's a two-way agreement.

Okay, so for REST APIs we have OpenAPI. How many of you have heard of Swagger? I think many of you have; it's also called OpenAPI, and the spec itself is actually JSON. So beyond creating your models, you can also describe the API with the same JSON Schema approach. Here's an example; I'm using the YAML format because the screen is too small. On the left side you can see how an endpoint is described: the URL is /pets, the method is get, it produces application/json, the response status code should be 200, and the result is a list of pets, so the schema is an array of Pet, where Pet is still defined using JSON Schema as I just described.

So, to wrap up: everything is about maintaining data and model integrity; validate your input and output; go spec first; try to come up with your mock data and make it work first; and consider the OpenAPI spec for REST APIs. I think that's about it for the speech. Any questions?
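For reference, an endpoint description like the one on the slide might look roughly like this in Swagger 2.0 YAML. The path and model names (/pets, Pet) are my assumption, since the slide appeared to be based on a standard Swagger example.

```yaml
# Sketch of a Swagger 2.0 endpoint plus its model (names assumed).
paths:
  /pets:
    get:
      produces:
        - application/json
      responses:
        "200":
          description: A list of pets
          schema:
            type: array
            items:
              $ref: "#/definitions/Pet"
definitions:
  Pet:
    type: object
    required:
      - id
      - name
    properties:
      id:
        type: integer
        format: int64
      name:
        type: string
```

The `$ref` is the point the talk makes about models: the endpoint does not repeat the schema, it points at the Pet definition, which is ordinary JSON Schema.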
So when you design the whole system, I think it's not easy to define the context boundaries, to decide how many services there are in the system. Take your example, accounting: in a real system you have login, you have payment, many kinds of business. When you design the whole system, how do you design the different domains? How do you decide how many services your system needs?

That's a very general question. I think we have to look at the requirements first, because you are talking about different systems and different types of vendors or clients, so you have to... Yes, it can be very complicated. It's very hard to answer without looking at the actual case, but there are patterns for parts of it. For example, for creating objects: if you have different types of objects that have something in common, and you want to design your database for performance or whatever, you can centralize your model creation, even make it its own service; that's the factory pattern. But I'm not sure I can answer the rest of your questions right now.
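The factory idea mentioned in that answer can be sketched like this (the account types here are hypothetical, just to show creation being centralized in one place):

```python
class Account:
    def __init__(self, owner: str) -> None:
        self.owner = owner


class VendorAccount(Account):
    pass


class ClientAccount(Account):
    pass


# The factory is the single place that knows the concrete classes;
# callers never construct VendorAccount or ClientAccount directly.
def account_factory(kind: str, owner: str) -> Account:
    kinds = {"vendor": VendorAccount, "client": ClientAccount}
    if kind not in kinds:
        raise ValueError(f"unknown account kind: {kind}")
    return kinds[kind](owner)
```

Centralizing creation like this is what makes it possible to later move the factory behind its own service without touching every caller.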
Let me pick that up. The funny thing about domain-driven design is that you need to become a domain expert. Say you design a new system for a pharmacy (not the illegal kind): you probably spend a few hundred dollars getting a pharmacist drunk to interview them about what they actually do, and then you learn, okay, there are the vendors, and then there's delivery, and then there's verification with the doctor. By learning about the business, you learn what the objects in the real world are, and those become your object boundaries. Of course, when you push it further, a lot of the objects you initially found will actually turn out to be facades over smaller areas. But the only way to get domain-driven design right is to become a domain expert. If you create an application where artists can share performance events, you hang out with artists; otherwise you don't understand what's important. And that's the whole idea: we move out of our domain, where we are so cozy with tables and rows and objects and all that, and deal with real life. I recommend the artist one, that's good fun. Any other questions?
Okay, so I was talking about spec mocking for a reason, because I'm going to introduce a product of mine; it takes a few minutes. Right now our workflow is always code first: we code, then we deploy for the consumers to consume, then we finish the documentation, then the consumers give feedback, and if there's a problem we go back to code, deploy, document, feedback. It's a cycle, and it can take days just to start a project. The problem is, we thought our design was a good idea, but we spent days or weeks and the prototype turned out not to work. If only we could prove the idea faster. What if we could speed up API development so that the server developer and the client developer can work together, starting at the same time, without waiting for the API?

What REST Studio does is let you design, deploy, and document a REST API seamlessly in a few minutes without coding. You only design, you do not code: as I said, you design the spec, you make sure it works, and then you start to code. REST Studio contains a few modules: modeling tools, documentation tools, a request executor (think Postman), and a smart mock server. It takes about one minute to start your project, one minute to create your endpoint, one minute to update your mock request and response, and that's it. By the end of five minutes you have a live mock server, Swagger-compatible documentation, and a powerful API explorer.

Wait, I'm not sure whether this works or not... okay, it's running. Say we want to create a blog API. It's JSON, so the request accept and the response are JSON, and we assume our server will run on localhost. Now it's created. Next you want to create your endpoints: there is a CRUD creator where you provide the path, the ID name, and the object name, and it creates your API endpoints with one click. Now I want to bring up the server; here you can enable the
cloud sync. Okay, with cloud sync enabled, everyone can access it with this ID. Then you have your mock server; you create the environment, and the environment should be set as the default environment. You can see you have global variables. Now, back to the executor: you can run it already, it's online, it's live, though a bit slow. But it returns "not found", because you haven't pushed your mock data yet; you haven't defined it. So you define your mock data. For example, when we create a blog, the response should have a status of success, we all agree with that, right? And we should return the blog ID. You can generate the schema from the data, label fields as required, and upload the mock responses.

Back in the executor it's live; you can access it from your mobile or wherever with this base URL. Now you construct your request data, like title: "the first blog", content: "hello, world". You save it (oh sorry, save first), and then you have the data. Remember what I said: the next step is the schema. You go to the schema editor, extract the schema from the request data, generate it, label fields as required, and upload it. Now the content field is required, so if I delete it and run the request, the mock server actually tells you you're missing properties. So the mock server can detect your errors; and if you do not want to do it on the server side, you can validate on the client side as well. You can also preview your request if something is wrong.

To demo a bit more, let me put it back first and make sure it works. Yes. Now say the title's minimum length must be 32. You upload your mock responses, go back and try again, and it tells you it's too short, it fails validation. Oh, I forgot: you can also simulate different status codes. If you do not want a 200 but a 201, you create a 201 response, like created: true. As you can see the default is 200; you can trigger the 201 with a query parameter. Oh,
I'm not sure whether it's clear enough or not. Okay, so this is the 201 with created: true, and if you disable this parameter, it returns the default response. That's why I call it smart, although it's not smart enough for me yet.

Now we have an API explorer and a live mock server, and you have the documentation here. This is what your API consumers will see: the tags, all the required fields, the types, descriptions, and examples. This is actually Swagger-compatible. Later on (it's still under development) you'll be able to export it as a Swagger project, and from that you can generate your server and client-side SDKs; there are also a lot of open-source tools for Swagger. That means you can edit here and then do whatever Swagger can do as well.

Last, I will demo the cloud sync. I copy this value, then go to... we assume this is another user. Oh, I haven't published yet, sorry, I have to publish. The design requires you to commit with a change log, so everyone knows what you have changed; it does not sync automatically without recording what has changed. It's here. So API consumers can change things and do whatever they like, and their requests will be evaluated on the server side. Why so slow... okay, it works; this is what they see.

The last thing I want to demo is importing Swagger. This one is actually from the Swagger editor; they have different examples. I'm not sure whether you have used it, and the network is too slow, so I'm not going to load it fully. This is the definition of a Swagger project; it's very long. Okay, see, it's here, and you can actually execute it. So that's it. What it does is let you start your project, finish your prototype, and validate your request and response very quickly without coding; the spec is there, and you can build whatever you want on top of the spec. Right now it's in alpha, and it's going to beta in two or three weeks; we still have a few breaking changes coming. Yeah,
that's it for today. I'm going to publish the URL later in the Slack group, so everyone can try it out; let me know what you think. Yes, you have a question?

Do write operations and read operations use the same definition, or separate write models? For example, when you create a new blog, the actual blog model has a timestamp, but when you create one you might only specify a subset of the actual model.

Exactly, that's what I call a conditional schema. You have the same object, but the rules are conditional: a field that is correct in one condition is not correct in another. So you have to decide which way to go. You can use the same schema, but then the required list could be wrong for one of the operations, and some logic stays conditional. If you want unconditional schemas, you have to separate them, and separate the APIs as well. Yes, that's how it works. Okay, thank you.
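As a footnote to that last answer, one common way to avoid the conditional-schema problem is exactly the split the questioner describes: a narrower write model and a fuller read model. A minimal Python sketch, with field names assumed for illustration:

```python
from dataclasses import dataclass
import time


@dataclass
class BlogCreate:         # write model: only what the client may send
    title: str
    content: str


@dataclass
class Blog:               # read model: what the server stores and returns
    blog_id: int
    title: str
    content: str
    created_at: float


def create_blog(req: BlogCreate, next_id: int) -> Blog:
    # The server fills in the fields the write model deliberately omits,
    # so neither schema needs conditional "required" rules.
    return Blog(blog_id=next_id, title=req.title,
                content=req.content, created_at=time.time())
```

With two unconditional models, each side of the API can be validated against a schema whose required list is always correct.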