I guess the most important thing to say is that this is not the draft spec for Presentation 4. Can you all see my screen? Yeah, yes. Thank you. Oh, I'm making the Zoom window bigger. That's not right — I want to make this window bigger. That's better. Right. So, you know, obviously if we'd started with a clean slate and begun writing the Presentation 4 spec, we wouldn't have got anywhere in the two days we had available to us. So what we're writing is fragments or snippets from the eventual spec, but relating to the new stuff. It assumes a familiarity with Presentation 3 and doesn't define all the concepts that you might find in Presentation 3. So it's little snippets of new stuff that might go into that spec. But we also think we're going to reorganize the spec so that annotations and painting annotations are better explained. I'm going to move the Zoom stuff over to the other side. Right. So I'll just start at the beginning. Obviously we've introduced Scene. And one of the things — there's an issue I posted on GitHub the other day about whether, as we introduce Scene, since we're making a step forward in dimensions and adding a new class as a content container, we should also take a step back and add a Timeline class. I'm not going to go into that because it's not relevant to what we're talking about, but it's a little bit of background. So, I'm not going to go into this in great detail because it represents established stuff that the TSG has already decided on. There is a class called a Scene, it has a right-handed coordinate system, and it looks like that.
We use the Scene as a container for content via the mechanism of annotations. We can place 3D models in the scene, and we can also place lights and cameras in the scene; a scene can have multiple lights and cameras and other resources. And it can have all the kinds of commenting annotations, or even transcriptions or translations, or any other kind of annotation that other IIIF resources can have. So nothing odd here. The first new property is backgroundColor on scenes. That's a new property added to the Scene class, and we will also retrospectively add it to the Canvas class, for reasons which will become apparent when we want to put 2D canvases into scenes. That's about as far as this draft goes, at least, towards any kind of skybox or background concept: just a color. Right, so we've defined a Scene; now let's start thinking about putting things in scenes. And these are familiar concepts — you'll have seen these before, and some of the documentation here, as a first pass, is even a kind of rephrasing of something you might see in three.js. We have two types of cameras that we can place in scenes: a perspective camera and an orthographic camera. They have near and far properties, just like they do in three.js and many other libraries, which define the actual range of content within the scene that the camera will render. A perspective camera has a field of view, and there's a diagram cribbed from, I think, Wikipedia, just to define near and far. Again, nothing unusual for anyone familiar with 3D concepts. Right, then we start getting into behaviors of a IIIF client. If a scene does not have any cameras placed into it via annotation, then a client or viewer must provide a default camera so that there's something to look at. But how it does that is up to the viewer — we think we shouldn't impose any constraints on viewers; it's up to them to provide cameras for their scenes.
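Since the on-screen JSON isn't captured in the transcript, here is a rough sketch of what a minimal Scene with the new backgroundColor property might look like. This is an illustration, not draft spec text: the context URI, the IDs and the exact property names are assumptions based on the discussion.

```json
{
  "@context": "http://iiif.io/api/presentation/4/context.json",
  "id": "https://example.org/iiif/scene1",
  "type": "Scene",
  "label": { "en": ["Example scene"] },
  "backgroundColor": "#000000",
  "items": [
    {
      "id": "https://example.org/iiif/scene1/page1",
      "type": "AnnotationPage",
      "items": []
    }
  ]
}
```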
And similarly, we'll see the same is true for lights: if the publisher or author of the manifest does not include lights and cameras, it's up to the viewer how it lights and views the scene. So here is a typical perspective camera as a JSON object. And lights — again, familiar from other 3D libraries: we have ambient light, directional light, point light and spotlight. Lights have a color and an intensity. The color is an RGB hex string, case-insensitive. Intensity, as we discussed in Washington, uses that value construct; at the moment it will just be a number between zero and one — I don't think we've got to the bit of the spec about the values yet — but we're leaving room for later providing actual units for intensity, like lumens or whatever. For now it's just a scale from zero to one. A spotlight has an additional angle. We spent some time wondering whether it was the half angle or the full angle — there seem to be different approaches in different libraries — and we've settled on it being the half angle, as you can see in the picture, rather than the full angle, specified in degrees. And yes, floating point: if you want 45.3 degrees, that's fine. Again, the spec has to specify default behavior: if I put a light in a scene but don't point it at anything, it points downwards in the negative y direction. If I don't position it within the scene, it's at the origin. And then, as IIIF is open for extension, we leave other potential properties of lights up to clients. As with cameras, if the scene does not provide a light, the viewer can provide its own, but how it does that is entirely up to the viewer. How the lights and cameras are positioned in the scene is then covered under painting annotations.
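As a hedged reconstruction of the kind of JSON shown on screen — the property names (fieldOfView, near, far, color, intensity, angle) follow the discussion above but may differ in the draft, and the IDs are placeholders — a perspective camera and a spotlight might look something like:

```json
[
  {
    "id": "https://example.org/iiif/scene1/camera1",
    "type": "PerspectiveCamera",
    "fieldOfView": 45,
    "near": 0.1,
    "far": 1000
  },
  {
    "id": "https://example.org/iiif/scene1/light1",
    "type": "SpotLight",
    "color": "#ffffff",
    "intensity": 0.8,
    "angle": 45.3
  }
]
```

Per the discussion, intensity is a plain number between 0 and 1 for now; the value construct mentioned above may later allow explicit units such as lumens.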
This is where, when the full version of the spec emerges, we think we can do a better job than the current spec of ordering and introducing concepts — introducing painting annotations as the fundamental mechanism. Obviously, when you write a spec like this, everything needs to come before everything else, but it has to come in some order, and this is the order it's in at the moment. So, if you're familiar with annotations: we place stuff in our scene through painting annotations. One of the things we decided we need to call out more is the use of selectors and specific resources, which are part of the W3C Web Annotation model and which are going to be much, much more heavily used in 3D scenes than they are for 2D content. Jamie says in the chat: the scene must have a light. That's a good point — I guess the clarification would be that the scene must end up having a light. But yes, there's lots and lots of language polishing still to do on this; it's basically what we wrote on Thursday and Friday. All resources that can be added to the scene have an implicit or explicit local coordinate space. Lights and cameras are just conceptual things, whereas models might come with their own coordinate space. But we assume there is an origin of the thing to be placed in the scene, and placement is done by aligning the origins of the two coordinate spaces. Do you mind if I throw something in for a second? Sure — you'll explain this better than I will. Well, I think the reason we put that discussion in there is that we wanted to get readers thinking about this concept: that most resources to be put into the scene have their own local coordinate space.
In other words, it's really a way of getting people in the right frame of reference to think about things like transforms later on, because we have to make points such as: a scale transform is applied to the local coordinate space of a model. Another way of saying that is: if you are putting a 3D model into a space and you apply a scale transform, the scale is applied from the origin of that model's coordinate space, not necessarily from the centroid of the model. So if you have a model that's off the origin and you scale it by two, it's actually going to end up further away from the coordinate origin in addition to growing to twice the size. It's just trying to get people thinking about the distinction between the local coordinate space of the resource and the scene coordinate space. Absolutely. Okay. So then the spec introduces this concept of a point selector, which is going to be used very heavily in 3D. It's there in the Presentation 3 spec already, but there it's only used for particular, especially time-based, use cases; with 3D, it's how you point things at points, or position things at points, in the scene. Point selectors have x, y, z properties. They also have a time property called instant, because it's a point in time, not an extent of time. We lost a few hours on the issue of what this should be called, because in Presentation 3 it's called t, and I won't go into the details, but there are reasons why we can't call it t: it conflicts with the media fragments t, which we'll see in a second. So we have a spatial position x, y, z and an optional temporal point, instant, in seconds. Here is an annotation that places model1.glb into the scene.
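The JSON itself isn't captured in the transcript; a hedged reconstruction from the description (placeholder URIs and coordinates; the SpecificResource/PointSelector structure follows the W3C Web Annotation model as described above) might look like:

```json
{
  "id": "https://example.org/iiif/anno1",
  "type": "Annotation",
  "motivation": "painting",
  "body": {
    "id": "https://example.org/models/model1.glb",
    "type": "Model"
  },
  "target": {
    "type": "SpecificResource",
    "source": "https://example.org/iiif/scene1",
    "selector": { "type": "PointSelector", "x": 1, "y": 0, "z": -2 }
  }
}
```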
So the target of this annotation is the scene — a specific resource whose source is the scene and whose selector is a point selector providing the coordinates. And here is one of the reasons why. If you look at probably the vast, vast majority of IIIF manifests out there today, they use a simpler syntax for positioning things. Most just target a whole canvas, so they only have a target that looks like this — not an object, but just a URI, the canvas ID. And if they do target a rectangular region, they use the xywh fragment syntax for canvases. We want to echo that by allowing an xyz syntax for scenes that can just be a fragment on the end of the URI. And here is where the conflict with the media fragments t that Julia just mentioned comes in: t in the media fragments syntax, which would go here, represents an interval, not a point. That's why we renamed the point selector's time property to instant. So here we go. Right — this is the crucial thing. This is IIIF 3D 101: the JSON you see on the screen there is an annotation that places a model in a scene at a point. Now, obviously, this is fine if we're in a universe where there's only one model positioned in the scene and its scale or orientation isn't really going to affect the viewing experience. But as soon as we need to put more than one thing in the scene, we need to move things around and scale them and rotate them and all that kind of stuff. So we have three classes of transform which can be applied.
As the annotation positions things in the scene — actually, before they're positioned in the scene — they can have these transformations applied: making the thing bigger or smaller, rotating it around whichever axis it needs to be rotated around, and translating it, moving it in its local coordinate space. And here we have an example — I'll go straight into the example. This is the model, the astronaut or whatever it is, before it's placed in the scene. It's being flipped 180 degrees and moved so that, if it was at its origin, it's now one unit off its origin before being placed in the scene. And then, for convenience, and again echoing constructs in many existing libraries, we have a lookAt construct. So rather than working out where something like a light or camera should be facing, we can say: work out where it should be facing by pointing it at this thing. And a point selector can be the value of this property. In this example, the lookAt is a point selector — we're looking at a particular point in the coordinate space — and in some of the example manifests we'll see that the lookAt property can take two different types of resource. You can either give it a point selector, which provides the point directly, or, even more conveniently, if you have already placed, say, the astronaut in the scene, you can give it the ID of the painting annotation that places the astronaut in the scene, rather than digging down and working out what point you want it to look at. You might use that for looking at models within the scene, but you might also use it for navigating around comments or descriptive annotations or something like that — look at this one, look at this one, look at this one.
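A sketch of what the transforms on the astronaut's body resource might look like, with class and property names (transform, RotateTransform, TranslateTransform) inferred from the discussion rather than quoted from the draft:

```json
{
  "type": "SpecificResource",
  "source": { "id": "https://example.org/models/astronaut.glb", "type": "Model" },
  "transform": [
    { "type": "RotateTransform", "x": 0, "y": 180, "z": 0 },
    { "type": "TranslateTransform", "x": 0, "y": 1, "z": 0 }
  ]
}
```

A camera or light would then carry something like `"lookAt": { "type": "PointSelector", "x": 0, "y": 1, "z": 0 }` — or, as the discussion notes, the ID of a painting annotation already in the scene.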
So the value of the lookAt property can be a point selector or another annotation that's already in the scene. And then there's the idea of putting a scene inside a scene, or a IIIF canvas inside a scene. This is potentially an immensely powerful concept, because it mirrors the notion of grouping that you might have in a 3D or 2D editing environment: here are some resources, I want to group them and then move them, transform them, or scale them together. To do that, you just make a IIIF scene, arrange it how you want, and then bring the whole scene in as a content resource into another scene. The same transforms that apply when bringing a model into a scene apply when bringing another scene into a scene. So it works just like grouping resources. Here we've got one scene being placed into this scene. Any questions before I go on to the next section? And all of this will apply to canvases as well, is the idea? Yes — and this is the interesting use case of placing 2D content into 3D scenes. The mechanism for doing that — and maybe this is a thing for discussion that should be challenged — is that we decided the spec should assume that if you want to do this, whether the 2D content is a video, or an image, or an image with an image service, whatever it might be, the way to do it is not to place the content directly, but to do it via a IIIF canvas. So you put the 2D content on a IIIF canvas — and it may well already be on one, that's probably even likely — and then place the canvas into the scene. Again, this is one to really test and kick the tires on and make sure it works properly.
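A hedged sketch of the scene-in-scene mechanism described above — the same annotation pattern as for models, but with a Scene as the body (all IDs are placeholders):

```json
{
  "id": "https://example.org/iiif/anno2",
  "type": "Annotation",
  "motivation": "painting",
  "body": {
    "id": "https://example.org/iiif/inner-scene",
    "type": "Scene"
  },
  "target": {
    "type": "SpecificResource",
    "source": "https://example.org/iiif/outer-scene",
    "selector": { "type": "PointSelector", "x": 0, "y": 0, "z": -5 }
  }
}
```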
So we have a canvas, which has a unitless width and height — a spatial extent — and I want to place that rectangle into the 3D space at some point. When a canvas is painted via an annotation targeting the scene, the top-left corner of the canvas is aligned with the 3D coordinate origin of the scene. Can I throw in something real quick? I think that is an interesting point where the discussion between the editors diverged from some of the discussions we've had so far, so maybe it's worth mentioning. We have in the spec that the top-left corner of the canvas is at the origin, so by default it extends down and to the right — the left edge goes down the negative y axis. In the group we've sometimes talked about using the centroid, but among the editors we felt that there are so many assumptions that a canvas's origin is at its top left that we would potentially be creating headaches for ourselves down the line by trying to use the centroid in the 3D context. So — yes, you can position a canvas in the scene with a point selector. Now, obviously, the canvas provides a 2D coordinate system which is typically going to have widths and heights in the thousands, whereas the scene is typically going to have a coordinate system with much smaller numbers of units. So you're almost certainly going to scale your canvases when you place them. The thing is, this way of doing it might be tricky for some use cases, because effectively you might want to put that canvas — that rectangle — on any kind of bounded plane in the scene, and working that out using this mechanism might be quite hard. So, alternatively, you can place a canvas into the scene using a PolygonZ selector, which basically lets you paint it anywhere — you can warp or distort it and place it at any angle or orientation.
And the direction the canvas faces in the scene is determined by the order of the points. Again, Julie can probably explain that better than me. Yes — because obviously a canvas has a front, and you don't want to paint the walls of your 3D art gallery with canvases facing the wall. So this construct allows you to do that, and may be easier for certain use cases. Since we're using the WKT POLYGON Z expression, we wanted to be very explicit about the order the points should be listed in, and they're explicitly listed in counterclockwise winding order. This is the only place in the spec so far where we really have to worry about the specific coordinates that make up a polygon, but since we're going with a right-handed coordinate space, using a counterclockwise winding order means the front face of the canvas should be as we intend it to be. In this example, we're just drawing a simple canvas: instead of being below the x axis, it's now above the x axis, with the left side on the positive y axis, and its front, with that counterclockwise winding order, should point towards positive z. As long as we're starting from the top left, it's best to have a solid assumption that you go around counterclockwise. That way, if I wanted to do something crazy, like make an upside-down canvas with some twisting skew, I could still do that and know exactly which coordinates to specify when. Does that make sense? It does — and anyone else jump in if it's not clear — but from this POLYGON Z, the first of the four sets of three numbers, presumably, is where the origin of the canvas — its top left — would be placed.
And then from there, the second set would be the top right of the canvas? Counterclockwise. Oh, I thought you said clockwise. Yes — for right-handed coordinate systems, which is what three.js uses, and probably what most things that use a right-handed coordinate system do: top left, bottom left, bottom right, top right. Okay, counterclockwise — I misheard you earlier; I misunderstood. That's helpful. As a way of visualizing this, I think we're all familiar with Ed's infinite canvas demo — we've shown that many times. You could construct that by placing all those canvases in the scene using these PolygonZ selectors. But then what is the behavior? They clearly have a front and a back, and if you go around the back, you just see black. That's why we are retrospectively applying backgroundColor as a property of the 2D canvas: if we construct a manifest for the infinite canvas demo, which I think we should have as one of our demos, you would explicitly give those incoming 2D canvases a black background color. If you didn't give them a background color, they'd be assumed to be transparent, so you would see a kind of mirror image, as if the painting were on glass and you were looking through the back of it. That might be a completely valid thing to want, in which case you would not give it a background color. But in many normal cases you would want to give it one, so that if you go around the back of the canvas you don't see a mirror image — you see whatever color you've applied to the 2D canvas. Okay. Makes sense. So those are the mechanisms for placing things in scenes.
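To make the winding order concrete: assuming a 2-by-1 canvas placed on the z = 0 plane, above the x axis with its front facing positive z (as in the example discussed), the four corners listed counterclockwise from the top left — top left, bottom left, bottom right, top right — would give a WKT value like the following. The selector type name here is an assumption from the discussion, not quoted spec text:

```json
{
  "type": "SpecificResource",
  "source": "https://example.org/iiif/scene1",
  "selector": {
    "type": "PolygonZSelector",
    "value": "POLYGON Z ((0 1 0, 0 0 0, 2 0 0, 2 1 0, 0 1 0))"
  }
}
```

Note that WKT repeats the first point to close the ring, and by the right-hand rule this counterclockwise order yields a front-face normal pointing towards positive z.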
In all this interoperability and assembling of things, people are going to be combining models that may come with their own cameras or lights. By default, the behavior of a viewer should be that it makes those cameras and lights available to the scene. But that might just cause chaos and not be what you want, so you can have this exclude property, which basically tells the viewer not to expose — or not to expose the effects of — various things that might come into the scene from models brought into it. Those are lights and cameras, obviously, but also audio, and any animations that might be present in a model. If the astronaut is spinning around or something, you might want to say: don't do that, because I don't want my statue garden to have spinning statues; I want them to stay still. Similarly, if you're bringing in lots of audio sources, or scenes that contain audio, you might want to just turn that off. Now, the implementability of this is one thing, but it's probably necessary for defining a scene. We've got three minutes left. Just as canvases have a duration property in seconds, so do scenes. And this is where our t comes into it. The really verbose way: we need a point selector, but not with an instant. If we want this model to appear in the scene for 50 seconds, from second 45 to second 95, we can do it with the verbose mechanism like this. But we could also do it with a more concise mechanism like this — and this is where t is a time span, an extent, rather than an instant, because this t is borrowed from the media fragments specification. So this is a more concise way of doing it, and more in keeping with the concise syntax for 2D canvases. But if we wanted to place something like a comment at an instant, then that's the instant property of the point selector.
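A speculative sketch combining the exclude property and the media-fragment time span described above. The exclude values and the fragment spelling (#t=45,95 for the 45-to-95-second interval, per the W3C Media Fragments t dimension) are assumptions about how the draft might express this:

```json
{
  "type": "Annotation",
  "motivation": "painting",
  "body": {
    "type": "SpecificResource",
    "source": { "id": "https://example.org/models/model1.glb", "type": "Model" },
    "exclude": ["Camera", "Light", "Audio", "Animation"]
  },
  "target": "https://example.org/iiif/scene1#t=45,95"
}
```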
And that's about the end of the written-up spec; then there are lots of property definitions that need to be assigned to various things — scenes and cameras and lights — in the spec. But that's where we got to. Also, just to point out — where is it? — this here contains a mixture of complete, in-progress and not-yet-started demo manifests that show these things. As Julie said right at the beginning, there's a big warning sign on these: they are very much work in progress, and some of them will just contain errors, basically. But they are the beginnings of our cookbook recipes, test fixtures, demo examples, whatever. Very good. I will stop sharing now because we're nearly at time. Thank you. And if I could add one last process thing — this is in the running notes, and it's sort of an action item for everybody. We have all of those changes from the editors' branch wrapped up in a pull request on the IIIF 3D GitHub repo. We can talk through this next week, but for all of the corrections, or assumptions that someone wants to question, or things to comment on, the PR makes a great place for those. You can either post general comments about the whole thing, or go to the changed files and add comments, questions or suggestions on a file-by-file, line-by-line basis. So if you want to go to a specific place in the spec document and ask a question, the PR is great for that. Something awesome for people to do between now and our next meeting is to use that PR to raise questions and thoughts. We spent maybe the first half of the first day generating a whole bunch of new manifests, and as part of that we also reorganized those manifests a little, so when people go to look at them, they may be organized slightly differently than they were before.
Those manifests, it's important to say, are very much a first draft. There may be issues, or things that were missed in them. And some of those manifests — especially some of the ones prefixed ZZ_ — are placeholders: they have a description saying what the manifest should do, but there's not really any content in the manifest yet. After we review all the draft spec material in future weeks, if anyone on the TSG wants to help sketch out what those manifests should look like, that would be great to have help with. So there's been a bit of manifest reorganization. Also, right before the DC working meeting, we put up a guidelines document for creating demos — experiments that implement these IIIF manifest recipes. It lays out acceptance criteria: the things the demos must do, what they should do, and some things they could do as well. That's there for technical people who are interested in getting involved in creating demos or implementations — it's a great document to check out. I will say the list of milestone functionalities is probably already inaccurate; I'd suggest going and looking at the manifests, which are numbered roughly in order of complexity. And the third thing — something I just bashed together in spare moments over the last week, and which is linked in the running notes — is a CodeSandbox three.js implementation of our most basic manifest. For the developers in the room it's almost trivial: it literally loads our most basic manifest, takes the astronaut, and puts the astronaut at the origin of a scene. So this is the origin of the scene, and the origin of the astronaut is at its feet. But it is on CodeSandbox, it's using three.js, and it is loading a manifest from GitHub.
It doesn't have any local files or anything like that. Here, I'll just pull up the manifest. It's pulling that directly from GitHub — which in turn pulls the astronaut model from GitHub — and reading the manifest. Cool. Straightforward, but it fulfills all the musts and a couple of the shoulds, because we do have in the shoulds that ideally it should be on CodeSandbox and ideally it should load everything from GitHub rather than using local files. That's linked in the appropriate issue on IIIF 3D, and it's in the running notes. So if anyone likes working with three.js, please feel free to take this, run with it, tear it apart and make it better. I'm planning on making another version that includes more of the shoulds. That's where we are with that.