In Unity ECS, a system is a class inheriting from ComponentSystem, and we can override its event methods, which include OnCreate, OnDestroy, and OnUpdate. The OnUpdate method is called once per frame, and we can use attributes to specify when a system's updates are performed relative to the updates of other systems. In this example, the attribute on MySystem says that it should update before OtherSystem, and OtherSystem, meanwhile, says that it should update after MySystem. These attributes are logically redundant, so we only need one of them. If, though, we were to specify a logically impossible order, such as each system updating before the other, Unity would throw an exception upon creation of these systems. In cases where we don't specify which of two systems should update before the other, Unity picks the order for us. In general, we should leave the update order unspecified except where we really need this control.

The systems of a world are placed into logical groups. When a group updates, all of its systems update. We can create whatever groups we like, and we can create subgroups within groups, but three groups are created by default: InitializationSystemGroup, SimulationSystemGroup, and PresentationSystemGroup, which are updated in that order. In this example, both systems are explicitly added to the SimulationSystemGroup, which is actually the default. In the editor's Entity Debugger window, we can see a list of all the systems in their execution order, including MySystem and OtherSystem along with other default systems, which we'll discuss later. If we look at the full player loop, we can see where the system group updates execute relative to the MonoBehaviour events: the InitializationSystemGroup updates just before EarlyUpdate, the SimulationSystemGroup updates just after the regular Update, and the PresentationSystemGroup updates just before PostLateUpdate.
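The ordering attributes described above can be sketched as follows. This is a minimal sketch assuming the Entities 0.x API; MySystem and OtherSystem are the hypothetical system names from the example.

```csharp
using Unity.Entities;

// Explicitly placed in the SimulationSystemGroup (which is also the default group).
[UpdateInGroup(typeof(SimulationSystemGroup))]
[UpdateBefore(typeof(OtherSystem))]
public class MySystem : ComponentSystem
{
    protected override void OnCreate() { /* called once, when the system is created */ }

    protected override void OnUpdate() { /* called once per frame */ }
}

[UpdateInGroup(typeof(SimulationSystemGroup))]
// [UpdateAfter(typeof(MySystem))] here would be logically redundant with the
// [UpdateBefore] above; specifying contradictory attributes instead would throw
// an exception when the systems are created.
public class OtherSystem : ComponentSystem
{
    protected override void OnUpdate() { }
}
```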
In a system, we can create and use entity queries like we've already seen, but we should create the queries using methods of the system itself rather than the EntityManager. This is not just more convenient: it allows the system to track which queries it uses. A system with no queries updates unconditionally, but as soon as we add at least one query to a system, the system skips updating when no entities match its queries. Rather than wastefully create an entity query every update, we generally should create it once in the system's OnCreate method and reuse it every frame. Be clear, though, that the chunk and component arrays we get from a query should not themselves live beyond the end of each update, because the underlying data may change.

An EntityQueryBuilder provides a more convenient way to iterate over the components of a matching archetype. Every system has an EntityQueryBuilder field named Entities, and when we call its ForEach method, we pass in a lambda. Using introspection of the lambda's parameters, ForEach builds an entity query and calls the lambda once for each entity of the matching chunks. In this example, the query matches chunks with the component types CompA and CompB. The lambda increments the X field of the CompA value by the X field of the CompB value, and then it removes the CompB component from the entity. There's quite a bit of magic going on here. For one thing, creating a lambda every update would normally generate wasteful garbage, but the Unity compiler specially avoids this. Second, the components are passed as refs so that mutations to the parameters aren't just local to the function. Third, when we mutate a component, we are actually mutating a copy, but when the lambda returns, ForEach copies these values back to the actual components stored in the chunk. As you can imagine, this arrangement is less optimal than directly iterating over the components in the chunks.
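The ForEach pattern described above might look like this sketch, assuming the Entities 0.x ComponentSystem API; CompA and CompB are hypothetical IComponentData structs with an int X field, as in the example.

```csharp
using Unity.Entities;

public struct CompA : IComponentData { public int X; }
public struct CompB : IComponentData { public int X; }

public class AddSystem : ComponentSystem
{
    protected override void OnUpdate()
    {
        // The lambda's parameters implicitly define the query:
        // chunks containing both CompA and CompB.
        Entities.ForEach((Entity entity, ref CompA a, ref CompB b) =>
        {
            a.X += b.X;  // mutates a copy; ForEach writes it back after the lambda returns

            // The copy-back mechanism is what makes a structural change
            // safe in the middle of this iteration.
            EntityManager.RemoveComponent<CompB>(entity);
        });
    }
}
```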
The reason ForEach must make these copies is that creating entities, destroying entities, and adding or removing components are all structural changes: they require modifying the chunk structure or moving entities between chunks. Once we make a structural change, any arrays of chunks or components created before the change can no longer be used, even if their data was unaffected. To work around this problem without using ForEach, we can record the changes we want to make in an EntityCommandBuffer and then play back those changes later. These entity command buffers are created from special systems called entity command buffer systems. By default, a buffer created from one of these systems is played back when that system updates. Once played back, an entity command buffer cannot be used again, so we generally need to create new buffers every frame.

In this example, we first get the EndSimulationEntityCommandBufferSystem, which is created by default and updates last in the SimulationSystemGroup. From that system, we create an EntityCommandBuffer, which has most of the same methods as an EntityManager. Here, we use the buffer to create an entity and add a component, but the changes are not enacted until we call Playback, passing in the EntityManager. It's an error to call Playback more than once on the same buffer, so to prevent the EndSimulationEntityCommandBufferSystem from calling Playback like it normally would, we set the buffer's ShouldPlayback property to false.

The IBufferElementData interface defines a component type that stores an array of values rather than a single struct. These arrays have a fixed number of slots stored directly in the chunk, but they can grow by storing any number of additional slots outside the chunk, hence they are called dynamic buffer components. In this example, we define an IBufferElementData called MyBufferElement with two int fields.
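The command buffer usage described above can be sketched like this, assuming the Entities 0.x API; SpawnSystem and CompA are hypothetical names for illustration.

```csharp
using Unity.Entities;

public class SpawnSystem : ComponentSystem
{
    private EndSimulationEntityCommandBufferSystem ecbSystem;

    protected override void OnCreate()
    {
        // This system is created by default and updates last in the SimulationSystemGroup.
        ecbSystem = World.GetOrCreateSystem<EndSimulationEntityCommandBufferSystem>();
    }

    protected override void OnUpdate()
    {
        var ecb = ecbSystem.CreateCommandBuffer();

        // These calls only record commands; nothing is enacted yet.
        Entity entity = ecb.CreateEntity();
        ecb.AddComponent(entity, new CompA());

        // Enact the recorded changes now...
        ecb.Playback(EntityManager);

        // ...and stop the ECB system from playing this buffer back a second time,
        // which would be an error.
        ecb.ShouldPlayback = false;
    }
}
```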
The InternalBufferCapacity attribute specifies that dynamic buffer components of this type will store five elements directly in the chunk for each entity. Along with the five elements, the component stores a length and a pointer. When the component's capacity of five is exceeded, more memory is allocated outside the chunk and the pointer is set to point to this memory. In code, we add a dynamic buffer component to an existing entity using the AddBuffer method, or we include the component upon an entity's creation. We use GetBuffer to retrieve a DynamicBuffer from an entity, and we can append elements to the buffer with the Add method. Note that after adding one element, the length is one, because it started out at zero even though the internal capacity of the buffer is five. If we were to add more than five elements, the extra elements would be stored outside the chunk. Also note that when we retrieve a value from the buffer, we are getting a copy of the struct, so to actually mutate the buffer, the element must be assigned back to the same index. Finally, when we remove the element at a given index, all elements following it are shifted down to fill in the gap. When working with chunks, we can access dynamic buffer components much like we do regular components, except we call GetBufferAccessor, passing in an ArchetypeChunkBufferType.

To access entities and their components in jobs, we use special job types, and we use JobComponentSystems to help us chain appropriate dependencies between our jobs. An IJobChunk iterates through the chunks of a query, processing each chunk in its own subjob. An IJobForEach iterates through the entities of a query, optionally processing the entities in multiple subjobs. An IJobForEachWithEntity does the same but accesses the entity IDs as well as the components. In this example, we define an IJobChunk which accesses the CompA and CompB components of each chunk.
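The dynamic buffer operations described above might look like this sketch, assuming the Entities 0.x API; MyBufferElement is the transcript's example type, and the field names A and B are assumptions.

```csharp
using Unity.Entities;

// Five elements are stored directly in the chunk per entity;
// any elements beyond that live in memory allocated outside the chunk.
[InternalBufferCapacity(5)]
public struct MyBufferElement : IBufferElementData
{
    public int A;
    public int B;
}

// Inside a system:
Entity entity = EntityManager.CreateEntity();
DynamicBuffer<MyBufferElement> buffer = EntityManager.AddBuffer<MyBufferElement>(entity);

buffer.Add(new MyBufferElement { A = 1, B = 2 });  // Length is now 1, capacity still 5

MyBufferElement elem = buffer[0];  // retrieves a copy of the struct
elem.A = 10;
buffer[0] = elem;                  // must assign back to actually mutate the buffer

buffer.RemoveAt(0);                // later elements shift down to fill the gap
```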
The Execute method receives three parameters. The first parameter receives the chunk; the second, chunkIndex, receives a value that should be used in conjunction with concurrent entity command buffers, as we'll explain later; and the third, firstEntityIndex, receives a cumulative count of the entities in all chunks with smaller indexes. For example, if the chunk at index 0 has 20 entities and the chunk at index 1 has 30 entities, then the firstEntityIndex values for the first three chunks will be 0, 20, and 50, respectively. This parameter is not really useful in most cases, but I mention it for completeness. To use this job, we assign ArchetypeChunkComponentTypes to its fields, and we call Schedule, passing in a query which determines which chunks the job will process. When the job runs, Execute is called once per chunk, and each Execute call runs as its own subjob; only once all the chunks have been processed is the job finished.

Much like how the job system safety checks ensure that no two concurrently scheduled jobs access the same native container unless both accesses are read only, they also ensure that no two concurrently scheduled jobs access the same entity components unless both accesses are read only. So here, if we schedule another instance of MyJob while the first is still scheduled, the safety checks will throw an exception, because both jobs access the CompA components in the same chunks. The two jobs conflict, so we must either complete one job before scheduling the other or schedule one job as a dependency of the other. Either way, it must be deterministic which job always finishes before the other starts. Note that the CompB component is marked read only in the job, and this is what tells the job system that it's okay to schedule this job concurrently with other jobs that also access CompB as read only.
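The IJobChunk described above can be sketched as follows, assuming the Entities 0.x API; MyJob, CompA, and CompB are the hypothetical names from the example.

```csharp
using Unity.Collections;
using Unity.Entities;

public struct MyJob : IJobChunk
{
    public ArchetypeChunkComponentType<CompA> CompAType;

    // [ReadOnly] tells the safety checks this job may run concurrently
    // with other jobs that also read CompB.
    [ReadOnly] public ArchetypeChunkComponentType<CompB> CompBType;

    // Called once per chunk; each call runs as its own subjob.
    public void Execute(ArchetypeChunk chunk, int chunkIndex, int firstEntityIndex)
    {
        NativeArray<CompA> compAs = chunk.GetNativeArray(CompAType);
        NativeArray<CompB> compBs = chunk.GetNativeArray(CompBType);

        for (int i = 0; i < chunk.Count; i++)
        {
            CompA a = compAs[i];
            a.X += compBs[i].X;
            compAs[i] = a;  // write the modified copy back into the chunk
        }
    }
}
```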
When we define the ArchetypeChunkComponentType for CompB, we pass the Boolean true as the argument, and this makes the component array that GetNativeArray returns read only, such that attempting to mutate that array will trigger an exception. So when we want to access a component read only in a job, we effectively should specify that the access is read only in two places, not just one.

JobComponentSystem is a variant of ComponentSystem that helps us configure job dependencies across systems. When we use a JobComponentSystem, we should create queries and ArchetypeChunkComponentTypes for our jobs using methods of the JobComponentSystem rather than the EntityManager, as we have done so far. Here, for example, we're calling the GetEntityQuery and GetArchetypeChunkComponentType methods of the system, not the EntityManager. Creating a query this way registers it with the system, and it is assumed that jobs created in the system will use the queries registered with the system. The OnUpdate of a JobComponentSystem expects us to return a single JobHandle representing all jobs created in the update, so if we create multiple jobs, we should make the jobs dependencies of each other or use the CombineDependencies method to combine them into one JobHandle. The returned handle will be completed for us at some point later, at the very latest before the next update of the same system. Usually, though, completion happens upon the next structural change, because to avoid conflicts, structural changes trigger a hard sync point. These sync points complete all outstanding jobs that touch the entities and components of the same world. The JobHandle parameter of OnUpdate is passed a handle representing all jobs returned by other JobComponentSystems whose queries may conflict with the queries of this system, so to avoid conflicts, all the jobs we create in the update should depend upon this input JobHandle, either directly or indirectly.
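A JobComponentSystem scheduling the chunk job described above might look like this sketch, assuming the Entities 0.x API; MyJobSystem is a hypothetical name, and MyJob, CompA, and CompB are the example's names.

```csharp
using Unity.Entities;
using Unity.Jobs;

public class MyJobSystem : JobComponentSystem
{
    private EntityQuery query;

    protected override void OnCreate()
    {
        // Create the query via the system, not the EntityManager,
        // so it is registered with this system.
        query = GetEntityQuery(typeof(CompA), ComponentType.ReadOnly<CompB>());
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        var job = new MyJob
        {
            CompAType = GetArchetypeChunkComponentType<CompA>(),
            // Passing true makes the array returned by GetNativeArray read only:
            // the second of the two places where read-only access is declared.
            CompBType = GetArchetypeChunkComponentType<CompB>(true)
        };

        // Depend on inputDeps to avoid conflicts with other systems' jobs,
        // and return the handle so later systems can depend on this job.
        return job.Schedule(query, inputDeps);
    }
}
```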
In some scenarios, this creates dependencies that aren't really necessary and thus may create inefficiencies, but we can always work around these problems by splitting the jobs created in one system across multiple separate systems.

IJobForEach is an alternative to IJobChunk that is sometimes more convenient, and despite the name, it doesn't share the performance drawbacks of the EntityQueryBuilder ForEach method that we saw earlier. This example implicitly creates a query that matches chunks with both CompA and CompB. The first attribute excludes chunks that have CompX, and the second attribute requires the chunks to also have CompY, even though we do not access the CompY components in this job. For each entity, the components we actually access are passed by ref to the Execute method, such that mutations to a component parameter directly mutate the component in the chunk. Note here that the CompB parameter is marked read only, so we cannot mutate it. The IJobForEachWithEntity variant passes the entity ID to the first parameter of the Execute method.

If, though, we want to make structural changes as we iterate through the entities, we must use an entity command buffer like we've seen before, but because the buffer in this job may be used concurrently in separate subjobs, we must use the Concurrent form of EntityCommandBuffer. Here we see how this job can be created and scheduled. We create an EntityCommandBuffer from the EndSimulationEntityCommandBufferSystem, but we make it concurrent by calling the ToConcurrent method. When we schedule the job, we give it the input dependencies of the JobComponentSystem, like we always should, but notice we pass the system itself as the first argument rather than a query. The Schedule method uses the system to create the query, using introspection information from the job.
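The job and the scheduling described above can be sketched together like this, assuming the Entities 0.x API; MyForEachJob, MyForEachSystem, CompX, and CompY are hypothetical names matching the example.

```csharp
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;

[ExcludeComponent(typeof(CompX))]     // exclude chunks that have CompX
[RequireComponentTag(typeof(CompY))]  // require CompY even though we don't access it
public struct MyForEachJob : IJobForEachWithEntity<CompA, CompB>
{
    // The concurrent form is required because subjobs may use it in parallel.
    public EntityCommandBuffer.Concurrent Ecb;

    public void Execute(Entity entity, int index, ref CompA a, [ReadOnly] ref CompB b)
    {
        a.X += b.X;  // ref parameter: this mutates the component in the chunk directly

        // Record a structural change; the chunkIndex-style 'index' value
        // disambiguates commands recorded from concurrent subjobs.
        Ecb.RemoveComponent<CompB>(index, entity);
    }
}

public class MyForEachSystem : JobComponentSystem
{
    private EndSimulationEntityCommandBufferSystem ecbSystem;

    protected override void OnCreate()
    {
        ecbSystem = World.GetOrCreateSystem<EndSimulationEntityCommandBufferSystem>();
    }

    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        var job = new MyForEachJob
        {
            Ecb = ecbSystem.CreateCommandBuffer().ToConcurrent()
        };

        // Pass the system itself, not a query: Schedule builds the query
        // by introspecting the job's Execute parameters and attributes.
        JobHandle handle = job.Schedule(this, inputDeps);

        // Register this job so it is completed before the buffer is played back.
        ecbSystem.AddJobHandleForProducer(handle);
        return handle;
    }
}
```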
If, instead of the regular Schedule, we call ScheduleSingle, the job will be processed in one logical subjob rather than split across many, in which case we wouldn't need a concurrent entity command buffer, because the buffer would only be used from one thread. When an entity command buffer is played back, it may make structural changes, but rather than trigger a hard sync that unnecessarily completes jobs unrelated to the changes, entity command buffers only complete the jobs that are explicitly registered as requiring completion before playback. By calling AddJobHandleForProducer, we're telling the entity command buffer system that this job must be completed before the buffer's playback.

Finally, there are a good number of odds and ends I haven't covered here, most of which I may not cover anytime soon because they are still under-documented and highly subject to change. In some follow-up videos, though, I'll walk through some more practical examples, and we'll look at some of the packages Unity currently has in development that are related to ECS, including Unity.Rendering (the hybrid renderer), Unity.Transforms, and Unity.Physics.