All right, thanks so much for coming, and thanks so much to the Blender Foundation and everyone who's been putting this together. It's been amazing to see all the talks today. My name is Kevan Cress, and I'm presenting some of the work of the Living Architecture Systems Group. Before I start, I want to thank some of my collaborators who couldn't be here: Philip Beesley, Timothy Boll and Matt Gorbet, who have been really influential in the work I'm about to describe.

This talk is titled "An Open Software Ecosystem for Designing Living Architecture." Before I get into it, I want to explain what the Living Architecture Systems Group is and what we mean when we say living architecture. Our group is an international partnership of researchers, artists and industrial collaborators, and we're interested in studying how we can build sustainable, adaptive environments that can move, respond and learn, and that are inclusive and empathetic towards their inhabitants. We try out a range of fabrication methods for lightweight structures, modeling methods, and behavioral systems for the living architecture testbeds we produce. These testbeds are large canopies, cultural, sculptural spaces that respond to visitors as they move through and interact with them, and a variety of software tools drive their behavior and contribute to their production and interaction.

A lot of this work can be summed up in a couple of simple diagrams. Typically in architecture we look at static boundaries: a delineation between inside and outside, with a static structure separating the two. At the Living Architecture Systems Group we're interested in exploring how those boundaries can expand and become porous, allowing energy flows between the interior and the exterior, so that these environments become a little more open, a little more interactive.

A bit about myself: when I started using Blender, I fit very much into that static-box way of working. Blender was the tool I loved to use for absolutely everything. Modeling, renderings, photo editing, I defaulted to Blender because it was this fantastic tool that had everything I could possibly need in it, except for one thing. I gave a talk back in 2019 about adding architectural dimensioning and drawing tools to Blender, because that was the one thing I couldn't do with it, and I wanted to be able to do everything inside this one tool. But as I started working with the Living Architecture Systems Group, I found that when you're working in an interdisciplinary practice, especially in shorter-term collaborations, everybody has a software tool they know really well and like to use, so you can't expect everyone to come and work in your platform. You've got to start to open those boundaries, and Blender, being the phenomenal tool that it is, makes it really easy for us to expand those boundaries and build outward.
I want to talk about our history with Blender and how we've been using it in the studio through four of the Living Architecture Systems Group testbeds, the immersive environments designed by architect and artist Philip Beesley: Meander, in Cambridge, Canada; Arfa Reef, in France; Grove, an exhibition for the Venice Biennale; and Poetic Veil, in Tilburg in the Netherlands.

Meander is the first project, and I came into it about partway through. It's a large central grotto made up of a couple of spheres, with a large expanse of canopy over an event space. The role of Blender in this project (I'll start the video there) was really as a visualization tool after the fact. Part of the Living Architecture Systems Group's mandate is not just to make these spaces, these testbeds, but to communicate how they work, how they function, and what's going on inside them, so that they're not black-box spaces and people really understand how they work. So we produced a series of animated visualizations in Blender that run on display boards in the space, communicating what each of the sensors and actuators is doing and how they respond to people's engagement. We found we were able to create a fairly high-fidelity model of the sculpture, the testbed, using Blender.

For the next project, Arfa Reef in La Fru in France, we decided to use Blender more for the schematic design phase. We started by modeling the canopies of the space. We used a sculpted plane to define the overall gesture, and then some simple hex-grid geometries with particle systems driven by weight painting, so that we could gesturally define what the movements of the different components would be throughout the space and how they would look (a minimal code sketch of this kind of setup follows below). These are a couple of images of some earlier renderings that we did.

What we found was that translating that very sculptural, very gestural Blender model into the technical documentation the team needed to actually construct the testbed posed some challenges. If we exported via FBX or OBJ or any of the more standard file formats, a lot of the hierarchical information that was key to the design of the sculpture got crunched down, producing files that were very difficult for the team to take into Rhino or Fusion 360 to build out the industrial design of the sculpture.

That led us to produce what we're calling the Living Architecture Systems Description (LISD). It's a custom exchange format we've been working on, based on JSON: a hierarchical model description of the sculpture that covers the device design, the spatial design, the behavioral design, and the lexicon of components and assemblies that make up each testbed. Because the description is very lightweight, we can use it to derive different models in different software platforms, whether that's Rhino for documentation, Blender for visualization, Unreal or Unity or Godot for more interactive applications, or our custom behavioral modeling tools.
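Picking the weight-painted particle setup back up for a moment, here is a minimal bpy sketch of that kind of rig: a sculpted plane instances a hex-cell object through a hair particle system whose density is shaped by a painted vertex group. All object and vertex-group names here are hypothetical; this illustrates the technique, not the studio's actual file.

```python
import bpy

canopy = bpy.data.objects["Canopy"]   # the sculpted plane defining the gesture
cell = bpy.data.objects["HexCell"]    # one hex-grid component to instance

# A vertex group, painted in Weight Paint mode, controls where cells appear.
if "CellDensity" not in canopy.vertex_groups:
    canopy.vertex_groups.new(name="CellDensity")

# Add a hair particle system that instances the hex cell across the canopy.
mod = canopy.modifiers.new(name="Cells", type='PARTICLE_SYSTEM')
psys = mod.particle_system
psys.settings.type = 'HAIR'
psys.settings.count = 500
psys.settings.render_type = 'OBJECT'
psys.settings.instance_object = cell
psys.settings.use_rotations = True          # give each instance a rotation
psys.vertex_group_density = "CellDensity"   # painted weights drive density
```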
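And to make the LISD itself concrete, here is a rough, much-simplified sketch of what such a JSON description could contain. The field names below are illustrative guesses based only on the categories named in the talk (lexicon, spatial, behavioral), not the actual format.

```python
import json

lisd = {
    "lexicon": {
        # reusable component and assembly definitions
        "spar": {"components": ["rib", "frond", "clamp"]},
        "sphere_unit": {"assembly": [{"ref": "spar", "count": 6}]},
    },
    "spatial": [
        {
            "id": "sphere-unit-001",  # stable ID used for round-tripping
            "ref": "sphere_unit",
            # a flattened 4x4 transform placing this instance in the space
            "transform": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
        },
    ],
    "behavioral": {
        "actuators": [{"id": "light-07", "parent": "sphere-unit-001"}],
    },
}

with open("testbed.lisd.json", "w") as f:
    json.dump(lisd, f, indent=2)
```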
A quick example of how that system works. This is one of the spheres you can see in Meander and some of the other testbeds, and it looks quite complex: there are a lot of components, a lot going on. But if you start to break it down, you realize it's mostly repeated components. The underlying geometry that defines these spheres is one of the Archimedean polyhedra, a truncated icosahedron, which I think lots of people are familiar with because it's the basic form of a soccer ball, or I guess I should say a football: pentagons surrounded by hexagons. Each of those hexagons is what we call a sphere unit, and we can break that down even further: each sphere unit is comprised of multiples of these spars that get tied together, and each spar is comprised of a subset of components.

In Blender, we represent this through collections, collection instances, and hierarchies of those. In Rhino, the same assembly can be represented through blocks and block instances. Once we got this working, it allowed us to round-trip very quickly between Blender and Rhino. And if you attach a unique ID to these instances as well, you can update models on the fly between the two programs, which has been fantastic for our working methods. Once we realized we could do that with the geometry data, the spatial data, we realized we could also embed the behavioral systems and information into the same data representation. This is a view of one of the behavioral control systems; it has to be spatialized so that the lights and actuators can respond spatially to the different energy flows modeled in the environment, and we can use the same LISD file to inform those systems and give them their spatial layout.

This brings us to the next project, Grove. Grove was part of the Venice Biennale in 2021. Unfortunately, because of COVID, we had restrictions on how we could access the site and how we could actually do the construction. It's a large canopy, with an extended array of speakers throughout the space carrying a wonderful spatialized sound composition by Salvador Breed, and a film projected down into the center by Warren Du Preez and Nick Thornton Jones. We used Blender quite a bit for previs, both wireframe previs and rendered previs, so that we had a really strong understanding of how this was going to look in the space.

One thing that was really interesting about this process: usually we do lighting design on site, and it's very fluid; Philip is a fantastic lighting designer. But for this project, because we couldn't be in the space, we had to do it all remotely, and all of the lighting cues had to be synced with the film projected in the center, which was being driven in TouchDesigner. We needed a way to visualize how that would look in the space. So we connected TouchDesigner and Blender together using OSC messaging, so that we could see in real time how the TouchDesigner patches driving the DMX light controls would appear in the space, using Eevee's real-time rendering to drive that.
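To make the collection-instance idea above concrete, here is a minimal bpy sketch of placing instances of an assembly collection, each tagged with a stable ID so it can be matched against a Rhino block instance on round-trips. The collection names and the "lisd_id" property are hypothetical illustrations of the approach, not the group's actual implementation.

```python
import uuid
import bpy

def place_instance(collection_name, location, parent):
    """Place one collection instance and tag it with a unique ID."""
    empty = bpy.data.objects.new(collection_name + ".inst", None)
    empty.instance_type = 'COLLECTION'
    empty.instance_collection = bpy.data.collections[collection_name]
    empty.location = location
    empty["lisd_id"] = str(uuid.uuid4())  # custom property for round-tripping
    parent.objects.link(empty)
    return empty

# e.g. scatter six sphere units; "SphereUnit" would itself be a collection
# built from instanced spar collections, which are built from components.
units = bpy.data.collections.new("SphereUnits")
bpy.context.scene.collection.children.link(units)
for i in range(6):
    place_instance("SphereUnit", (i * 2.0, 0.0, 0.0), units)
```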
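And here is a rough sketch of that kind of TouchDesigner-to-Blender OSC bridge, assuming the python-osc package has been installed into Blender's bundled Python. The OSC address and light naming scheme are made up for illustration; updates are applied from a main-thread timer, since touching Blender data directly from the OSC server thread would not be safe.

```python
import threading
import bpy
from pythonosc import dispatcher, osc_server

latest = {}  # light index -> 0..1 intensity, written by the OSC thread

def on_dmx(address, index, value):
    # TouchDesigner sends e.g. "/grove/dmx 12 0.75" for each light cue.
    latest[int(index)] = float(value)

def apply_updates():
    # Runs on Blender's main thread via a timer, so the Eevee viewport
    # redraws in real time as cues arrive.
    for index, value in list(latest.items()):
        obj = bpy.data.objects.get("GroveLight.%03d" % index)
        if obj is not None:
            obj.data.energy = value * 1000.0  # map 0..1 DMX value to watts
    return 0.05  # poll again in 50 ms

disp = dispatcher.Dispatcher()
disp.map("/grove/dmx", on_dmx)
server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", 9000), disp)
threading.Thread(target=server.serve_forever, daemon=True).start()
bpy.app.timers.register(apply_updates)
```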
Another portion of this project was the Grove Cradle film, which we collaborated on with Warren Du Preez and Nick Thornton Jones. We were able to use the LISD to send a very detailed model of one of the sculptural testbeds, produced in Blender, and convert it into a set of instanced blueprints in Unreal Engine using this exchange format. This is a short excerpt of the film that I'll play, and I'll play a longer piece at the end if there's time. The film shows one of these testbeds through cycles of life, death and rebirth, and how it might evolve over time.

The final project I'll talk about is Poetic Veil, at the Textile Museum in Tilburg, just about two hours from here. We finished it a couple of weeks ago, so I've been in the Netherlands quite a bit this last month. Again, a really lovely, very dense canopy within this room. This was some of the previs we did in Blender: a couple of renderings mixed with a bit of Photoshop work, bringing in images of previous testbeds, and a quick fly-through of our Blender model. This project really used the LISD quite a bit to let us move back and forth between Blender and Rhino. We worked with one of our designers, Adrian Chu, on the central garland system that arches over another exhibition, of a dress by Iris van Herpen, and again this let us work quite freely back and forth between Blender and Rhino.

We also engaged some custom tools that we've been developing in the creation of this sculpture. One is a system we're calling the LAPL, built in Godot Engine, which lets us very quickly sketch out these crenellated forms and shapes and then use a mass-spring simulation to regularize those polygons and see what the curvature of those shapes is going to be like (there's a small code sketch of this idea below). Then, using a JSON export system, we can export that data as a very lightweight description and bring it back into Blender for final population. You can see it comes in as the triangulated spring net, and then we fill that in. We've been using the Tissue add-on quite a bit to regenerate the dual mesh of that net, which gives us our pentagonal and hexagonal grids back for further instancing and work. Here are a couple more images of the testbed, with some quite evocative shadow play.

Another aspect of the work that's still in development at Tilburg is a project we're calling Living Shadows, which works between Godot Engine and Blender to create fictional animated shadows that exist in the same space as the real physical testbed. You've got the actual object, and then its shadows on the wall are augmented through projection mapping. We're able to do this because of the high-fidelity models we develop in the schematic design phase in Blender: we have really detailed digital twins of the actual physical components, so we can create a digital double in this virtual world. We're doing a lot of the animation of the life cycles of these virtual creatures in Blender, and it also allows us to do some targeted lighting animation, generated through the projection on some of the glass vessels within the objects.
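Before moving on, here is the LAPL relaxation idea mentioned a moment ago as a generic mass-spring sketch. The actual tool is built in Godot; this Python version just shows the principle: treat each edge of the sketched net as a spring with a shared rest length and iterate toward equilibrium, so that irregular polygons regularize.

```python
import numpy as np

def relax(points, edges, rest_length, iterations=200, stiffness=0.1):
    """Iteratively relax a spring net toward uniform edge lengths."""
    pts = np.array(points, dtype=float)
    for _ in range(iterations):
        forces = np.zeros_like(pts)
        for i, j in edges:
            d = pts[j] - pts[i]
            length = np.linalg.norm(d)
            if length < 1e-9:
                continue
            # Hooke's law: pull or push the endpoints toward rest length.
            f = stiffness * (length - rest_length) * (d / length)
            forces[i] += f
            forces[j] -= f
        pts += forces
    return pts

# The relaxed net (points plus edges) could then be dumped to JSON and
# re-imported into Blender as the triangulated spring net shown in the talk.
```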
Coming back to Living Shadows: because all of these virtual entities share a common world, our lead behavior designer, Matt Gorbet, calls it a spatialized digital milieu, the virtual creatures can interact with the actual physical components. They can be scared away by somebody interacting with one of the physical components, or they can fly close to one and cause it to light up and respond.

That's just about the end of the talk, so thanks so much for having me. Blender has been tremendously helpful in the work of the Living Architecture Systems Group on these types of interactive environments, which prompt us to have a bit of wonder and a bit of speculation about how spaces might interact with us and respond to us. If you'd like to know more about the work we've been doing, you can find us at lasg.ca. We try to publish a lot of the work that we do, so we have a series of folios, short PDF documents on each of these projects, and there's quite a lot more imagery and content available on each of them if you're interested. And if there is a bit of time, I'd like to end by leaving a longer excerpt of the film Cradle playing. Thank you very much.