Good morning. My name is Alessandro Zomparelli. I'm a computational designer. So, I started as a building engineer and architect, but as you can see my work is not really architectural. This is because I started to experiment in many different directions in order to explore the opportunities of computational design, digital tools and programming, combining them with 3D printing technology and exploring new territories for product development, fashion products and medical devices. I think that for a computational designer there are many reasons to start using Blender, at least as an additional tool alongside the tools we usually use. One reason is, of course, mesh modeling. Blender is really good for mesh modeling: compared to other CAD software it can be really fast, it's really flexible, and you have a very powerful creative toolset at your disposal. Modifiers allow you to create non-destructive workflows. A computational designer is usually really lazy, so we tend to look for simple solutions that achieve maximum complexity. With modifiers you can keep your starting model really simple, then add all the rules you want and make something really complex out of it. Another reason is vertex groups, because in architecture, in medical devices, in fashion, everywhere, it's interesting to be able to modulate an effect according to a weight map: to have information that lets you calibrate the effect of what you're doing, for example with modifiers. And another reason to use Blender is Tissue. Tissue is an add-on that I started to develop for Blender. Initially it was just a bunch of scripts that I wrote in order to move as much as possible of my workflow into Blender. I needed some features, I saw that those features weren't in Blender at the time, so I said: okay, I can do that.
And I can share it with the community, so maybe other people can do the same. In the end, we have a tool that works well for computational design too. One of the very first scripts that I wrote was Dual Mesh. Imagine you have a very bad topology and the very bad idea of adding a Subdivision Surface on top of it: what you see is this kind of Voronoi pattern coming out. With some tricks you can isolate parts of the object: you can Select Similar, invert the selection, dissolve some elements, and get the polygonal pattern out of it. So I said, okay, let's make one script that does the same thing in one click. For instance, this is how I made the Tissue cover: it's just an icosphere with a bunch of modifiers, like a Decimate and a Displace with a weight texture, then Dual Mesh, which generates the polygonal pattern out of the triangular one, and then other modifiers: an Edge Split to separate all the faces, a little bit of Smooth to make them a bit smaller, and a Wireframe that makes a donut out of every polygon. But probably the most complex feature, the one that took the most time to develop, is the tessellation. This is a very simple example of how the tessellation works; you can literally open Blender and follow these steps using Tissue. You select one object, then you select another object, and it basically creates a copy of the first object for every face of the other one, adapting it to the changes in size. You keep the freedom to change the source object and get an updated result, which is important for a non-destructive workflow: I don't want to redo things again and again, and I don't want to do them manually. Because I'm really lazy, I always try to find a way to make things simple: just a few clicks, focus on the different pieces, and then assemble them when I'm ready to go further. There is also the possibility to animate.
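As a side note for readers, the dual-mesh idea (face centroids become vertices, and each interior vertex becomes a polygon of the surrounding centroids) can be sketched in a few lines of Python. This is a simplified planar version, not Tissue's actual implementation, and the function name is mine:

```python
import math

def dual_mesh_2d(verts, faces):
    """Dual of a planar mesh: face centroids become vertices, and each
    interior vertex becomes a polygon of the surrounding centroids."""
    # one centroid per input face
    centroids = [
        (sum(verts[i][0] for i in f) / len(f),
         sum(verts[i][1] for i in f) / len(f))
        for f in faces
    ]
    # collect the faces incident to each vertex
    incident = {v: [] for v in range(len(verts))}
    for fi, f in enumerate(faces):
        for v in f:
            incident[v].append(fi)
    dual_faces = []
    for v, fis in incident.items():
        if len(fis) < 3:      # boundary vertices: skipped in this sketch
            continue
        # order the surrounding centroids by angle around the vertex
        cx, cy = verts[v]
        fis.sort(key=lambda fi: math.atan2(centroids[fi][1] - cy,
                                           centroids[fi][0] - cx))
        dual_faces.append(fis)
    return centroids, dual_faces
```

On a 2x2 grid of quads, the single interior vertex yields one dual face connecting the four quad centroids, which is exactly the Voronoi-like pattern described above.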
So you can animate the source object and get an animated tessellation, or animate the parameters that control the tessellation as well. Unfortunately, there are still some bugs with animation that have to be fixed, but if you use the viewport render for the animation, you can do animation using the tessellation too. For instance, with 3D printing you usually need a nice continuous surface. So if your component allows connectivity with the neighboring components, you can create one continuous surface, and with a Solidify you get a 3D-printable object. Doing that kind of thing in other software usually takes a lot of time. The idea is really simple: you have a component, the purple one, and you create a copy of this component for all the faces of the other mesh. Of course, it's adapted in size and deformed according to the normals. You have some settings at the beginning, and if you just press OK, you can still change those settings later in the object data panel. So you have a bunch of settings that control all the parameters of the tessellation. According to the type of mesh you have, you can use different strategies. The simple one, the one you can easily find in other software too, is the quad mode: it considers just the face and adapts the component to that face. But sometimes you want a smoother component; you want to take advantage of the Subdivision Surface in order to make it follow the surface nicely. Using patch mode you can do that: it takes the topology before the Subdivision Surface, or before the last Multiresolution modifier, and uses that to define the sides of the component. Then, when you add a Subdivision Surface, that information is not used to increase the number of components, but to make them smoother and follow the surface.
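To illustrate the quad mode described above: once a component's coordinates are normalized to the unit cube, each point can be placed on a face by bilinear interpolation of the four corners, with the third coordinate offset along the interpolated normals. This is a minimal sketch of that mapping (my own helper names, not Tissue's API):

```python
def lerp(a, b, t):
    """Linear interpolation between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def add(a, b): return tuple(a[i] + b[i] for i in range(3))
def scale(a, s): return tuple(a[i] * s for i in range(3))

def map_to_quad(uvw, corners, normals, thickness=1.0):
    """Map a component point (u, v, w) in the unit cube onto a quad face.
    corners / normals: the four face corners and their vertex normals,
    ordered c00, c10, c11, c01."""
    u, v, w = uvw
    c00, c10, c11, c01 = corners
    n00, n10, n11, n01 = normals
    # bilinear interpolation of position and normal across the face
    base = lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)
    n = lerp(lerp(n00, n10, u), lerp(n01, n11, u), v)
    # offset along the interpolated normal by the component's "height"
    return add(base, scale(n, w * thickness))
```

Applying this to every vertex of the component, for every face of the base mesh, gives the adaptive copying behavior shown in the demo.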
And in this way you can, for example, change the number of repetitions inside the component in order to get different patterns on the surface: the component is the same, just with some Array modifiers to increase the multiplication, and that's it. It's trickier to create a rule that adapts a component to a generic polygon. If you have just four sides, it's easy, it's just a remapping; but for a generic polygon, I started to think about the simplest way to do it, and what I came up with was to use something like Poke Faces on each polygon and then adapt the component to each individual triangle. Doing that, you can achieve very complex geometry using very simple components. At some point it gets tricky to understand what you have to do in order to achieve a certain effect, but it's also interesting to be surprised by what comes out of very simple geometries. Similar, but a bit different, is the frame mode. Frame mode is like a customizable version of the Wireframe modifier: you have almost the same settings, but you can define the look of your wireframe. You create a component that is multiplied inside each polygon, and then it's also multiplied along the boundary. For all those modes you can, of course, use vertex groups in order to create a morphing effect. To program a morphing behavior, I decided to use a feature that was already in Blender: shape keys. Using shape keys you can define different deformations of your component, and you can combine them with vertex groups in order to get gradients and variation. It makes the match just by using the same name: if the component has shape keys named A, B, C, D, and the base mesh has vertex groups with the same names, they are combined automatically. So you can have ten different morphing behaviors according to ten different weight patterns.
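The name-matching morph described above boils down to a weighted blend: for each shape key, the per-vertex offset from the base is scaled by the weight of the vertex group with the same name. A minimal sketch of that logic (plain data structures instead of Blender objects):

```python
def morph_by_name(base, shape_keys, vertex_groups):
    """Blend shape-key offsets per vertex, scaling each key by the
    weight of the vertex group that shares its name.
    base: list of (x, y, z); shape_keys / vertex_groups: dicts by name."""
    result = [list(v) for v in base]
    for name, target in shape_keys.items():
        weights = vertex_groups.get(name)
        if weights is None:          # no matching group: key is ignored
            continue
        for i, (b, t) in enumerate(zip(base, target)):
            w = weights[i]
            for axis in range(3):
                result[i][axis] += (t[axis] - b[axis]) * w
    return [tuple(v) for v in result]
```

With ten shape keys and ten same-named vertex groups, each weight map independently drives its own deformation, exactly as in the gradient examples shown.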
And, for example, I used Tissue and those features for fashion design. This is a collaboration with a fashion designer in Italy. For rigid objects (this one is really printed in metal) you can easily create very complex geometry that can be produced only with 3D printing. But if you think about fashion, if you want to make garments, you can also make those patterns flat and print directly on fabric: you put the fabric inside your printer, you print the patterns you are making with Tissue on top of the fabric, and in the end you have something flexible that you can actually wear. More complex is, for example, this project for a medical device. The tricky part here was that I needed a tessellation, because I wanted holes all along the surface, but I also needed different components on different parts of the object. So I said, okay, how can I embed this feature? Let's do the same thing with name matching: use materials on the faces and components that have the same names. On my base surface I have different materials, A, B, C and so on, and I have objects with the same names; when I do the tessellation, it automatically takes each component and puts it on the correct faces. Doing that, I was able to use three different components for the main part of the brace, one morphing component that changes its size for all the holes, and one other component that defines the seam, the opening that allows the two shells to open. In that way, with one click, I was able to generate the brace, the whole device, working with all the pieces on one plate and then assembling them in one click. This is a workshop that we did recently in Beirut. The idea was to mix fashion and medical devices, creating hypothetical futuristic medical devices. It was a four-day workshop for people that didn't know Blender before.
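The material-based multi-component assignment can be sketched as a simple grouping step: faces are bucketed by material name, and each bucket is tessellated with the component of the same name. A minimal illustration (my own function, mirroring the name-matching rule, not Tissue's internals):

```python
def faces_by_component(face_materials, components):
    """Group face indices by the component whose name matches each
    face's material name, mimicking name-based multi-component mode.
    face_materials: material name per face; components: dict by name."""
    groups = {name: [] for name in components}
    unmatched = []
    for fi, mat in enumerate(face_materials):
        if mat in groups:
            groups[mat].append(fi)     # this component tessellates face fi
        else:
            unmatched.append(fi)       # no component with that name
    return groups, unmatched
```

Each component is then instanced only over its own face set, which is how one click can place the structural pieces, the morphing holes, and the seam component on the right regions of the base surface.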
I think they didn't know mesh modeling in general before. We took two days to train them, teaching them how to mesh model and how to use Tissue, and two days for designing. As you can see, the models have some issues here and there, but it's interesting that they reached that result in just a four-day workshop. I always like to use Tissue during workshops because for me it's a way to get feedback: I can see which parts of the add-on are not working, which parts are more difficult to understand, and I can figure out how to fix problems. Usually some bugs come out during the workshop, so I have to fix them during the night, without sleeping, and the day after they are usually fixed. So it's a stress test for the add-on, and I really like that. You are probably familiar with the different strategies for applying textures. You know that you can, for example, use coordinates that are adapted to the size of the object and aligned to it. By default, the tessellation considers the bounds of the component and automatically adapts it to all the faces. But you can also use the local coordinates, considering the position of the origin, or keep it independent using the global coordinates. The nice thing is that in this case the way the component is mapped onto the faces is independent of rotation and scale. This allows you, for example, to just move, rotate and scale the component, and the tessellation automatically creates these weird effects. And you don't need to program; you just need to click, click, and that's it. Or, for instance, in this case I used a component with a couple of Mirror modifiers to allow repeatability in the two directions, and then another option called clipping: everything that goes outside the boundary is cut away, and this keeps the connectivity between the pieces.
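The difference between the coordinate modes above comes down to how the component's UVs are computed before the mapping onto faces; clipping is then just a restriction to the unit square. A rough sketch under those assumptions (simplified 2D helpers, my names):

```python
def component_uv(points, mode="bounds"):
    """UV coordinates for component points: 'bounds' rescales the
    component's own bounding box to [0, 1] (the default behaviour);
    'global' uses x/y as-is, so moving or scaling the component
    shifts the pattern instead of being normalized away."""
    if mode == "global":
        return [(p[0], p[1]) for p in points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1.0     # avoid division by zero
    h = (max(ys) - min(ys)) or 1.0
    return [((p[0] - min(xs)) / w, (p[1] - min(ys)) / h) for p in points]

def clip_unit(uvs):
    """Clamp UVs to the unit square: a crude stand-in for the clipping
    option that trims geometry crossing the face boundary."""
    return [(min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)) for u, v in uvs]
```

With `mode="bounds"` the pattern always fills the face; with `mode="global"` plus clipping, transforming the component slides the pattern across the faces while the trimmed edges keep the pieces connected.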
So this object, which has a very unpredictable topology, is actually 3D-printable, because there is continuity between all the pieces. Of course, the Solidify at the end does the trick. More interesting is the cyclic coordinates option. To simplify, let's call it the Pac-Man effect: if some part of the component goes outside the boundary, you trim it and move it to the other side, so you get a continuous interlocking behavior between all the components. This is a feature that I developed specifically for this workshop in Dubai, the Architectural Association visiting school this year. The idea was to work with additive stereotomy: studying the behavior of components that are 3D printed but must contribute a specific structural behavior in order to make a self-standing pavilion. With my unit, which we called the Blender unit, we used Blender for all aspects of the project. We used the cloth simulation to study the catenary system, in order to make a vault that was structurally efficient. After defining and studying the general shape, we focused on defining the components. Those components needed features like interlocking behavior in order to be structurally meaningful. And we printed them, using a lot of printers, to test their physical behavior. When you have to fabricate things, you cannot cheat: you have to test them, and if they don't work, you have to change something. In the end, they assembled this model, at this size, without any glue: they just used some strings along the loops, and everything works because of the interlocking interfaces between the pieces. Also, because computational designers, and me in particular, are lazy, I try to find the simplest way to get the maximum complexity out of the simplest possible component. In this case, the component is a plane.
I just took two vertices and lifted them a little bit up. The base object is a triangle, and if you use the fan tessellation, what you get is a kind of pyramid: all the planes converge to the center. If you take the same component and run the same tessellation on top of that pyramid, what you get is the second iteration. Doing the same thing again, you can increase the number of iterations, creating basically a kind of fractal. So I decided to add a repeat parameter that automatically propagates the effect of the tessellation, in order to see what happens after some number of iterations. And it's nice, because you can get really unpredictable results from just small changes in the component. Things get even more complex if you combine that with the multi-component feature. This one is a bit complicated: component A, the X, goes on the green material; component B, the one with the stripes, goes on the white one. But after the first iteration, because component B carries both materials, it can trigger the generation of both components. So you see how, increasing the iterations, you get a pattern that you really cannot predict at the beginning: you just have to iterate and see what happens. Working with very simple components, you get something extremely complex at the end. And yeah, it's something I'm still experimenting with; it's interesting how just changing the way you assign the materials to component B completely changes the effect you get at the end. Or even more complex: if you use as a component a branching structure like this, with different materials on the end of each branch, those ends can generate different branches at each iteration. In this case the difference is that there is an option that keeps all the faces that don't generate branches: instead of replacing the tessellation like we did before, you are making it grow.
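The iterated fan tessellation can be sketched as a recursive poke: each triangle is replaced by three triangles sharing a lifted centroid, so after n iterations a single triangle yields 3^n faces. A minimal pure-Python version of that idea (the lift amount and function name are my own, illustrative choices):

```python
def poke(tris):
    """Fan-tessellate: replace each triangle with three triangles that
    share a lifted centroid (a crude poke-faces, lifting along +z)."""
    out = []
    for tri in tris:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
        centroid = ((ax + bx + cx) / 3,
                    (ay + by + cy) / 3,
                    (az + bz + cz) / 3 + 0.1)   # small lift toward a peak
        a, b, c = tri
        out += [(a, b, centroid), (b, c, centroid), (c, a, centroid)]
    return out

# repeated application, like the repeat parameter: each triangle
# becomes three, so one triangle gives 3**n faces after n iterations
tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
for _ in range(4):
    tris = poke(tris)
```

Small changes to how each replacement triangle is built (which vertices are lifted, by how much) compound across iterations, which is where the unpredictability described above comes from.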
So, for example, you can make a component that has a kind of flower, I don't know, and combining them with those rules, you can completely change the final result. And if you combine, for example, this feature with the shape keys of a single branch, you get this animated broccoli. Weight Tools: because I really love vertex groups and weight painting, and they are really useful for computational designers, I decided to add some features for using them in my workflow. Some of them are really simple, like the area of the faces, or the curvature, which is actually based on the Dirty Vertex Colors feature. Some others are a bit more complex, like Weight Distance, which doesn't compute the distance from some vertices along a straight line, but follows the curvature of the surface. Why we would want that, we will see later. Or Weight Formula: sometimes you need to apply mathematical formulas to your vertex groups, and with Weight Formula you can do that. You have some variables that you can use, for example the x, y and z coordinates, or the normal at each vertex. You can use other vertex groups as variables, so you can post-process them. Or you can use some keywords in order to insert sliders and play with the numbers, to see how each variable changes the effect. Or you can make the weight harmonic, which is just a harmonic function of the source vertex group: the effect is similar to the Wood texture, but controlled by a vertex group. Or you can convert a vertex group to vertex colors if you want to render it, or if you want to export this information to another software: vertex groups cannot be exported, but if you convert them to vertex colors, you can export and use that information in other software as well.
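The Weight Formula idea amounts to evaluating an expression once per vertex, exposing the coordinates and any existing vertex groups as variables. A minimal sketch of that evaluation loop (a hypothetical helper, not Tissue's actual code; it uses a restricted `eval`, which is fine for a demo but should be sandboxed more carefully in real tooling):

```python
import math

def weight_formula(verts, formula, groups=None):
    """Evaluate a formula string per vertex; x, y, z are the vertex
    coordinates, and any vertex group can be referenced by name."""
    groups = groups or {}
    out = []
    for i, (x, y, z) in enumerate(verts):
        env = {"x": x, "y": y, "z": z,
               "sin": math.sin, "cos": math.cos, "pi": math.pi, "abs": abs}
        # expose other vertex groups as per-vertex variables
        env.update({name: w[i] for name, w in groups.items()})
        w = eval(formula, {"__builtins__": {}}, env)
        out.append(min(max(w, 0.0), 1.0))   # weights live in [0, 1]
    return out
```

A formula like `0.5 + 0.5 * sin(8 * pi * mask)` applied to a source group named `mask` would give the harmonic, wood-texture-like banding mentioned above.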
Or, because we worked with the cloth simulation in order to study the catenary system, I wanted some information about the physical behavior of the cloth, so I tried to read the deformation into weight maps. My favorites are the contour tools: if you have a vertex group, you can generate on the surface all the curves that follow the same value. You define a start value and an end value, and in between you can put all the curves that you want. Or, for example, if you have ever tried to make a displacement pattern with very sharp elements while your topology is aligned in a completely different way, you need to increase the number of subdivisions a lot in order to keep that sharpness in your displacement. Contour Displace reads the vertex group and adds edges following its direction, which allows you to keep a lower number of faces but get more sharpness in your displacement map. And here you see a couple of examples. Or the same thing with a mask: instead of just deleting vertices like the Mask modifier that we have in Blender, you can cut the mesh and remove half of it, getting a continuous, clean cut along the geometry. Or this is an experiment where I tried to implement a reaction-diffusion simulation using vertex groups. Reaction-diffusion is based on Alan Turing's theory about the generation of patterns on animal skin: he understood that there is a chemical reaction behind the patterns you see on fishes, animals and so on. Basically there are two different substances, one called A and the other one B (a lot of fantasy there), which diffuse along the surface and react when they are in contact with each other. So you can just paint and see how the reaction works in real time. In order to make it not super slow, I had to use Numba.
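The two-substance system described here is commonly implemented as the Gray-Scott model. As an illustration of the update rule (not the add-on's actual vertex-group implementation), here is one explicit step on a periodic 1D grid in pure Python; on a mesh, the discrete Laplacian would run over vertex neighborhoods instead, and this inner loop is exactly the part that Numba can accelerate:

```python
def gray_scott_step(A, B, dA=1.0, dB=0.5, f=0.055, k=0.062, dt=1.0):
    """One explicit Gray-Scott reaction-diffusion step on a periodic
    1D grid; A and B are the concentrations of the two substances."""
    n = len(A)
    A2, B2 = A[:], B[:]
    for i in range(n):
        # discrete Laplacian with periodic (wrap-around) boundaries
        lapA = A[i - 1] + A[(i + 1) % n] - 2 * A[i]
        lapB = B[i - 1] + B[(i + 1) % n] - 2 * B[i]
        abb = A[i] * B[i] * B[i]          # reaction: A + 2B -> 3B
        A2[i] = A[i] + (dA * lapA - abb + f * (1 - A[i])) * dt
        B2[i] = B[i] + (dB * lapB + abb - (k + f) * B[i]) * dt
    return A2, B2
```

Varying the feed rate `f` and kill rate `k` is what switches the result between spots, labyrinths and the unstable, ever-changing patterns shown in the demo.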
I don't know if you are familiar with Python and the Numba library; it was the fastest way I was able to implement this within an add-on. Probably hard-coding it in the Blender source code would make much more sense, but this was the easiest way I found at the time. Changing the parameters, you change the resulting pattern, and you can see different possible patterns: some spot patterns, some labyrinthine patterns, some more unstable and unpredictable patterns, like this one that is constantly changing. And you can combine that, of course, with modifiers: for example, here we just add a Displace modifier, and you can start painting in real time and see what happens. Okay, if you are trypophobic, close your eyes, if you are scared of holes and small things, because this is an implementation of reaction-diffusion using just some modifiers: an Edge Split, a Smooth, the Displace and the Wireframe. G-code exporter: this is the last piece. I'm not sure if I'm going to keep it inside Tissue, or maybe I will make it a separate add-on, because it's more technical and related to 3D printing. Indeed, G-code is all the blah blah blah that you usually put in the printer in order to print something: a very technical description of the toolpath, containing all the information about how the printer should move in order to fabricate a certain object. The usual workflow is: I create a model, a watertight model (Suzanne is not watertight, so you have to make her watertight), then slice it with a slicing software like Cura, which generates the toolpath and exports the G-code for you. What I wanted instead is to use Blender just to draw the toolpath that I want, and in one click generate the G-code that I put in the printer.
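The curve-to-G-code step can be sketched directly: each segment of the polyline becomes a `G1` move, with the extrusion amount `E` derived from the segment length and the bead cross-section. A minimal sketch assuming a Marlin-style flavor with relative extrusion (the parameter defaults are illustrative, not the exporter's actual settings):

```python
import math

def polyline_to_gcode(points, layer_width=0.4, layer_height=0.2,
                      filament_d=1.75, feed=1800):
    """Turn a polyline toolpath into G1 moves, computing the extrusion
    E from the travelled distance and the bead cross-section."""
    filament_area = math.pi * (filament_d / 2) ** 2
    lines = ["G21 ; millimetres", "G90 ; absolute XYZ", "M83 ; relative E"]
    # travel to the start of the path without extruding
    x, y, z = points[0]
    lines.append(f"G0 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed}")
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        dist = math.dist((x0, y0, z0), (x1, y1, z1))
        # volume of the deposited bead, converted to filament length
        e = dist * layer_width * layer_height / filament_area
        lines.append(f"G1 X{x1:.3f} Y{y1:.3f} Z{z1:.3f} E{e:.5f}")
    return "\n".join(lines)
```

Any curve drawn in Blender, sampled into points, can be fed through something like this, which is what makes the one-click path-to-printer workflow possible.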
So, why should we do that? Isn't it more complicated to get something nice working directly with curves, rather than just making a 3D model? Yes, of course, but with some technologies it's interesting to actually play with the path. Because if you make something weird, sometimes you get some effect, some error, and if you understand how to control it, you can use it as a feature in your design. So the idea is to avoid 3D modeling the surface of the final object and just work on the path itself. Together with Bruno de Masi, one of my collaborators, we made these vases using this approach to 3D printing. In this case we used Sverchok; you probably know Sverchok, it's an add-on that allows you to create mathematical rules through a composition of nodes. But now we are experimenting with a different workflow using just Tissue. Here you have the usual slicing behavior, with a constant offset along the Z direction; as you can see, it gives you some gaps, some openings, where the surface is horizontal. If instead you use the weighted distance, you can create an offset behavior that follows the shape of your surface. And with a feature that we added, you can combine that and also create this movement of the toolpath, in order to get the behavior we have seen in the vases, for example. We are now experimenting with some architectural modules, making the slicing behavior more adapted to the particular shape that we are making, and we are still printing some of them. This wall kind of looks like tissue; it's a continuous work in progress, many pieces that have to be connected together. The plans for the future are: keep it stable, which is not easy, since from time to time a lot of bugs come out, and I have to fix all of them during the night, as usual.
Make it fast, which is not easy, because some geometries are really complicated and working with Python requires some tricks to make things efficient. Keep it updated: I always have to update according to what changes in Blender, and there are also many features that I still really want to put inside Tissue. And of course, keep it open. Thank you.