So my name is David Benjamin, I'm an assistant professor in the architecture program in G-South, and also director of the advanced studios. I want to welcome you to Architecture and Artificial Intelligence, the first in a series of events that aims to explore how the advance of artificial intelligence throughout our contemporary world may impact the built environment, and also vice versa. Of course, as you probably already know, artificial intelligence is much hyped and overused, and it means many different things to many people. Many accounts of artificial intelligence start with the Turing test in 1950 and go on to chart the rise of machines advancing in games against humans. These kinds of accounts describe the epic battle in 1997, shown here, between the world's best human chess player, Garry Kasparov, and the world's best chess computer, IBM's Deep Blue. In the first game of the match, Deep Blue made a highly unusual play, sacrificing a rook, which seemed to hint at a sophisticated strategy. Kasparov was rattled. The game ended in a draw, but Deep Blue went on to win the match, and the chess world found this devastating. Years later, one of the inventors of Deep Blue revealed that the fateful play had actually been due to a software bug. In the end, the computer won not because of an innovative strategy, but because the human was prone to worry and doubt and self-destruction. Kasparov had assumed that machine intelligence worked like human intelligence, and therefore the unusual move must have been a sign of superiority, but actually the computer had a different intelligence altogether. More recently, another computer, Google DeepMind's AlphaGo, defeated another human, Lee Sedol, in the game of Go, which was once considered a game requiring uniquely human intelligence.
It was thought that Go was impossible for a machine to win due to the nearly infinite number of outcomes and the difficulty of calculating which player is leading at any given moment, that is, the difficulty of creating a metric. The victory of AlphaGo may signal an age of computers being able to solve distinctly human problems. Google's computer can use big data and machine learning, and the potential of these technologies for all kinds of applications is stunning. But these techniques produce results which, in some cases, can be troubling. As with all technologies, machine learning involves assumptions and biases, but the assumptions of machine learning may be even more troubling than other assumptions because sometimes they are hidden even from their own inventors. This concept has been articulated in recent writing by people like Cathy O'Neil and Kate Crawford, who have shown how the biases of these algorithms can lead to things like racial profiling in policing, sexism in job listings, and uneven distribution of resources in urban neighborhoods. And these arguments imply that understanding algorithms requires understanding the humans who create them, the humans who are, in some cases, displaced by them, and also the humans who are affected by their conclusions. Perhaps the battles of chess and Go, and the growth of machine intelligence that they represent, suggest that it is important for all of us, as architects but also as citizens, to become more fluent in algorithms. It's important to understand what's going on under the hood, including the bugs that these algorithms contain, the data they are based on, and the rules that lead to their conclusions. This is crucial not just to be able to use the algorithms effectively, but also to be able to guide, temper, and respond to their biases. In other words, AI is a political issue as well as a technical issue.
So to kind of wrap up here, in the context of these ideas, I think it's relevant in this kind of session for us to address what this means for education and for architecture. One of the provocations that I've been thinking about a lot over the past couple of years comes from the neurologist Robert Burton, who describes a cognitive dissonance that occurs as machines become smarter and smarter. Burton compares human intelligence to artificial intelligence and then goes on to argue that the ultimate value-add of human thought lies in our ability to contemplate the non-quantifiable. Machines cannot and will not be able to tell us the best immigration policies, whether or not to pursue gene therapy, or whether or not gun control is in our best interest. In other words, since machines cannot worry, and since worry and doubt are productive in creating humanistic, fair solutions to the problems of our time, humans will never be replaced by machines. But then Burton goes on to argue that in this context, what we need in education is not necessarily to get better and better at programming machines, but instead to develop the opposite: to cultivate the cognitive skills that won't be replaced by machines, to reinvest in the humanities, and to save the literary novel from extinction. So it is in this context that we've organized today's event. It's meant to be open-ended, to cover a range of work going on in AI from architecture and from other fields that may influence and even redefine our field in the coming years. Some of the questions we may explore include: what kinds of assumptions and bias might accompany machine learning for the physical world, what specific roles and jobs in the design and construction industries might be replaced by automation, and what new forms of literacy and criticality are necessary in architectural education and practice.
We'll have three speakers today, and I will introduce them very briefly now up front; they will each give a short presentation, followed by hopefully a good amount of time for audience questions and for reflection and discussion. The first speaker is Julie Dorsey. She is a Yale professor of computer science and the founder and chief scientist at Mental Canvas. Julie came to Yale in 2002 from MIT, where she held tenured appointments in both the Department of Electrical Engineering and Computer Science and the School of Architecture. Her research interests include material and texture models, sketch-based modeling, and creative applications of AI. She's received several professional awards, including MIT's Edgerton Faculty Achievement Award, a National Science Foundation CAREER Award, and an Alfred P. Sloan Foundation Research Fellowship. I actually changed my mind: I think we'll introduce the panelists one by one, so I'd like you all to welcome Julie. Thank you very much for the introduction, David, and thank you for organizing this session on such a fascinating topic. I am a researcher and professor in the area of computer graphics. Just about all of my research over the years has been motivated or informed by architecture in some way. Today I wanted to talk about two topics: material appearance design and drawing. Both have a very rich and interesting history, but I think they also both have very interesting features in the context of AI, particularly with respect to architecture. The tradition of appearance modeling is a very old one, and one of the reasons I find this topic fascinating is that both scientists and artists have theorized about the appearance of the natural world for a very long time.
For example, Rembrandt and his contemporaries during the 17th century were fascinated by the problem of creating realistic flesh tones, and so they developed these methods in which they put various layers of lacquers and paints and pigments on a canvas, and using that set of layers they could build up and mimic what real skin looks like. On the other end of the spectrum, scientists have also been fascinated with these patterns. For example, the 19th-century classical physicist Lord Rayleigh was really interested in things like why the sky is blue, why butterfly wings and bird feathers look the way they do, and what causes these kinds of iridescent patterns, along with other questions about appearance at different scales, but all from a physical standpoint. Today we use these kinds of insights, both from artistic and design disciplines as well as from physics and related disciplines, to develop computer models that can be used to make very realistic and accurate models of materials. I'm just going to say a little bit about what goes into a material model, and I want to get to the idea of what role artificial intelligence can play in helping to describe material appearance today. When we look at a material, a material model is really composed of three parts: there's a spectral component, the color; there's a directional component, which is whether it's shiny or matte or glossy; and then there are spatial variations, the high-resolution texture. This is just an idea of what these might look like on real objects: on the left we have a perfectly diffuse or matte surface; then we've added in a spectral component, and then a directional component; and then finally you can see a spatial component as the texture is varied across the surface.
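The three-part decomposition just described (spectral color, directional gloss, spatial texture) can be sketched with a toy shading function. This is a hedged illustration using the generic Blinn-Phong specular lobe with invented parameter values, not the specific models from Dorsey's group:

```python
import numpy as np

def shade(base_color, shininess, texture_scale, normal, light_dir, view_dir):
    """Toy material model combining the three components from the talk:
    spectral (base_color), spatial (texture_scale), directional (shininess)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    v = np.asarray(view_dir, float)
    v = v / np.linalg.norm(v)
    h = (l + v) / np.linalg.norm(l + v)               # Blinn-Phong half vector
    diffuse = np.asarray(base_color, float) * texture_scale * max(n @ l, 0.0)
    specular = max(n @ h, 0.0) ** shininess           # low exponent = wide, matte lobe
    return diffuse + specular

# Same red material, viewed slightly off the light axis; only the
# directional component differs:
matte = shade([0.8, 0.1, 0.1], shininess=2, texture_scale=1.0,
              normal=[0, 0, 1], light_dir=[0, 0, 1], view_dir=[0.5, 0, 1])
glossy = shade([0.8, 0.1, 0.1], shininess=200, texture_scale=1.0,
               normal=[0, 0, 1], light_dir=[0, 0, 1], view_dir=[0.5, 0, 1])
```

Raising the shininess exponent narrows the specular lobe, which is the matte-versus-glossy distinction; the texture_scale input is where a spatially varying texture map would plug in.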
Now, unfortunately, in most computer systems today, this is what you're looking at in terms of material design. This is an example from Maya, but we could also look at something like Rhino. What you see here is a huge array of parameters. And it's not just the sheer number of things that you're tuning; many of these parameters don't actually have any sort of physical basis. They're just made up; they're hacks. And that's often how you specify materials when you're trying to design something new. You can also, of course, bring in things off the shelf, but I'm really intrigued by the problem of how we can actually design new appearances. So my students and I have done a lot of work in things like texture modeling and material modeling of various kinds. This is one example of bringing in real-world examples, learning about them, and starting to integrate them into simulations and eventually maybe into new buildings. For example, we could take this exemplar that you see in the upper left, where we want the most dominant texture there; but if we just grab it and synthesize it, we end up with this odd-looking frog instead of the one that we want on the bottom. So how can we recognize the most salient features from images? We've done lots of work from exemplars. For example, in the upper left you can see some terrazzo, where from a single picture we can actually synthesize the statistics of that into a volume of material that you see there. We can also do things like taking arrays of images: we can obtain or create a whole array of exemplars, all driven by the end user. Here's an example in the upper right, where we're actually doing some large-scale editing from a single picture, and now fine-scale editing.
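The phrase "synthesize the statistics" of an exemplar can be given a minimal flavor with histogram matching: remap one image's values so their distribution matches the exemplar's. This is a toy stand-in for the idea, not the volumetric terrazzo technique from the talk, and the arrays here are synthetic:

```python
import numpy as np

def match_histogram(source, exemplar):
    """Remap `source` values so their distribution matches `exemplar`'s.

    A toy form of exemplar-based statistics transfer: sort-based quantile
    matching on flattened intensity values.
    """
    src = np.asarray(source, dtype=float).ravel()
    ex = np.asarray(exemplar, dtype=float).ravel()
    order = np.argsort(src)                    # rank of each source value
    # assign the k-th exemplar quantile to the k-th smallest source value
    quantiles = np.quantile(ex, np.linspace(0.0, 1.0, src.size))
    matched = np.empty_like(src)
    matched[order] = quantiles
    return matched.reshape(np.shape(source))

rng = np.random.default_rng(0)
noise = rng.uniform(0.0, 1.0, (32, 32))        # "blank" field to texture
exemplar = rng.normal(0.6, 0.1, (16, 16))      # stands in for a terrazzo photo
out = match_histogram(noise, exemplar)
```

After the remap, `out` keeps the spatial arrangement of the noise but carries the exemplar's intensity statistics; real texture synthesis matches much richer statistics than a single histogram, but the transfer idea is the same.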
So we're able to recognize a variety of different scales of texture in images automatically, such that we can integrate them into a variety of applications. Moving away from the Rhino or Maya style of specifying materials, I'm also interested in developing new fundamental models that will sustain a range of appearances over time. If you've been to Venice, you know that the really pristine images that computers generate just don't cut it. So this is a long-term project to develop new underlying models, in this case using layers, and also to look at the problem of programming the surface, if you will, over time. Here we have a variety of different surface changes happening over time, and we can say we're going to polish the surface with one operator, apply another operator next, and so on. So we have all of this high-level control, but in a way that's very intuitive and automatically attached to certain parts of the surface. We've done other things with surface patterning, using simulations, doing patterning that involves changes to both shape and appearance, and we've also done a lot of work in capturing the way that appearance is naturally attached to shape, learning that, and then being able to synthesize it on brand-new shapes. You can see the capture on the left, and on the right you can see examples where we've taken that detailed appearance over time and applied it automatically to new shapes. But in terms of this idea of exemplars, there are lots of additional interesting things you can do. More recently we've revisited this work and generated flow patterns: let's say we have some exemplars, real things that you might see outside this building.
We can now analyze those patterns, extract parameters from them, put them into a simulation model, and then generate these kinds of effects attached to geometry and adapted to geometry automatically, based on the underlying model. Here you can see some examples. We've also done some work recently on what I call tactile mesh saliency of wearable objects, asking: how will material vary with use? Where will people touch shapes? How do you collect all this data? We did a bunch of user studies and built a model, using deep learning to estimate or rank different parts of the surface, and we can predict what will happen to a shape, where its wear points might be, for example. And we've continued to build out full systems from physical materials to physical realizations. Just to show you an example: a piece of velvet. We're doing a lot of work at different scales, and here you can see something like a velvet pillow; zooming in, you start looking at some of the underlying structure. We've been building interfaces so that you can actually edit things at multiple scales at the same time, instead of tuning sliders in Maya, with the idea that you can have a lot of control over the underlying structure of the material as well as the final look. This is a forward approach, where you start out with, say, some material and some texture, and you end up with the final look that you see here. But even more interesting, I think, is backwards, where you start out with a given appearance, maybe something you find out in the real world or a set of materials, and you'd like to work backwards to be able to create something like that yourself. So here's an example of doing a detailed analysis where we can actually come up with what the reflectance and texture should be.
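The "backwards" direction, going from an observed appearance to the parameters that produce it, can be illustrated with the simplest possible case: recovering a diffuse albedo from noisy shaded observations by least squares. The actual analysis in the talk recovers full reflectance and texture; this sketch, with invented data, only shows the inverse-fitting idea:

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward model: a diffuse (matte) surface obeys intensity = albedo * cos(theta),
# where theta is the angle between the surface normal and the light direction.
true_albedo = 0.73                                 # hidden parameter to recover
cosines = rng.uniform(0.1, 1.0, 200)               # observed surface orientations
observed = true_albedo * cosines + rng.normal(0.0, 0.01, 200)  # noisy pixel values

# Working backwards: the closed-form least-squares estimate of the albedo.
albedo_est = (cosines @ observed) / (cosines @ cosines)
```

With enough observations the estimate lands very close to the hidden value; the full inverse problems in the talk are the same fit in principle, just with many more parameters and a more complex forward model.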
More recently we've done some work with flock models for very detailed textiles, again using data-driven models connected to procedural models, where we can extract parameters. We can begin to do very detailed editing of a very complicated appearance with detailed structure. So these are just first steps. I think this is a fascinating area, and I think the interfaces of the future for creating and designing materials will be driven to some degree by AI, and it's really up to architects and designers to decide what the right handles are. More important, I think, the power of AI in this particular area is really helping you achieve a particular appearance that might be interesting. And it's not just about things that already exist in the real world: with all the advances in fabrication today, there's potential to conceive entirely new appearances that we've never thought of or dreamed of before, together with the ability to actually fabricate or create them. Next I'm going to talk just briefly about some work in drawing. Again, a long, rich history, but this is what drawing looks like to us today, and I want to talk a little bit about what drawing could be in the future. Drawing, even with the state of CAD and fabrication today, is fundamental to creativity and communication. It's been used from Leonardo's studies to movie storyboards from Disney up to the present, Gehry's conceptual sketches, product designs, and so on. What's really fascinating is that drawing on a computer today is not so different from drawing on paper during the Renaissance. Music, photography, and text have been completely revolutionized by computation, but drawing has largely been left behind. So one of the things I'm very interested in is the space that sits between 2D sketching and 3D modeling.
2D sketching is expressive and fluid, but it contains static views, whereas 3D modeling is rigid, cumbersome, and time-consuming to create, but offers dynamic viewing, and I'm fascinated by what can happen in the space between them. My office is right near Eero Saarinen's hockey arena at Yale, also known as The Whale, and this kind of form is very interesting to me. How can we develop drawing systems that might allow you to explore a form like that without locking into a very detailed geometric representation? These are examples of Henry Moore's idea sketches: very free-flowing; he has talked about sculptures falling out of such sketches. Louis Kahn sketched at a variety of different scales. So my vision is to enhance visual communication with the computer by elevating the way people draw with brand-new capabilities. I'm just going to skip ahead to show you a couple of examples. This is a new type of drawing, one you probably recognize as Grand Central Station, but one that you can actually move through, and it has all this animation built in for free. This is just to give you a flavor of how this kind of authoring system works. You can draw just like you might on a piece of paper. You pick features in the drawing and then add another drawing that hangs off of that original drawing, so it's like you're drawing on transparent canvases in space. Add another sheet in the foreground, and you can build out a scene very quickly without necessarily knowing where you're going to go a priori; you can also have a number of different drawings that may or may not go together in the system at once. The idea behind the system is really that coherence is developed gradually. Here we've just added some water to the scene, but you can see it's on the wrong canvas.
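The "transparent canvases in space" idea has a simple geometric core: each canvas is a plane, and a stroke can be moved to another canvas by re-projecting it along the ray from the viewer's eye, so the drawing looks unchanged from the original viewpoint. The helper below is a hypothetical sketch of that single operation, not Mental Canvas's actual API:

```python
import numpy as np

def reproject(point, eye, plane_point, plane_normal):
    """Re-project a stroke point onto a new canvas plane.

    Keeps the point on the same ray from the viewer's eye (so the drawing
    is unchanged from the original viewpoint) but moves it to the plane
    defined by `plane_point` and `plane_normal` (ray-plane intersection).
    """
    eye = np.asarray(eye, float)
    d = np.asarray(point, float) - eye                 # ray direction
    n = np.asarray(plane_normal, float)
    t = ((np.asarray(plane_point, float) - eye) @ n) / (d @ n)
    return eye + t * d

# Move a stroke point from the z=1 canvas to the z=3 canvas,
# as seen from an eye at the origin:
p = reproject([0.5, 0.2, 1.0], eye=[0, 0, 0],
              plane_point=[0, 0, 3], plane_normal=[0, 0, 1])
```

From the original eye position the point projects to the same screen location before and after; from any other viewpoint, the two canvases now separate in depth, which is what gives the drawings their parallax.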
So one of the things you can do here is select your strokes, place a hinge in the drawing at a selected feature, and, using the bird's-eye view, actually re-project those strokes. They're not being translated; they're actually being re-projected, and now you can see they're in the right place. So you can draw things and then reinterpret your strokes as you go. We've also done work along these lines with in-situ design, that is, placing drawings into real sites, where we can make a very abstract model of a scene very quickly from photographs, elevation data, and a site plan, and then you can create drawings like this in situ. Now, one of the things we're working on right now is how we can do better than this. Right now there's very little there when you draw in a digital drawing system; there's no semantic information. So we've begun to do some work, not in turning sketches or drawings into shapes, but in actually learning, from a large-scale user study, the kinds of strokes that people draw: for example, silhouette strokes, which indicate the boundaries of a surface; hatching strokes, which give shading; and stipple strokes, which suggest shadows from the light source. We've gone on to analyze these things and recognize them, such that you can create a kind of drawing that is 3D but also very malleable, with strokes that suggest an imagined shape without committing to an actual final geometry. So I'm going to conclude there and bring David back up. Moving right along: Natasha is the Director of Learning Technologies at Jacobs and the 2018 Chair of the AIA Technology in Architectural Practice committee. Natasha is at Jacobs, a firm whose work you probably know, and she was also part of the AIA knowledge community focused on the intersection of technology and architecture.
In this capacity she hosted the 2018 Building Connections Congress in Washington, D.C., the conference that looked at the themes of design in the age of AI and machine learning. So remember, this is the AIA; even the traditional AIA is very interested in AI and in looking at what that kind of thing is worth. We'll hear more about that. And finally, while Natasha trained as an architect, she has found her true passion at the intersection of technology and design, and I think that's a statement that resonates with many of our students at the school. So thank you for being here, Natasha. When I got invited to this thing, I did exactly what I typically do: panic. While I am personally really interested in the themes of technology in design and architecture practice, I don't think the profession is quite there yet, and the AIA as well. I do have this in my presentation: at the Building Connections Congress we had about 100 people showing up and talking about these various themes. I think there is always this sort of sense of fear and excitement, and that's an interesting concept when we think about technology and what it means for us.
Similarly, we've been talking about this within the industry, within what we do on a regular basis. Jacobs is primarily an engineering company, but there is a study that says 4 out of 10 engineering companies will be disrupted in the next 10 years, which makes us wonder which side we want to be on. So when we started thinking about that, we decided that we needed to be able to manage what we call an innovation portfolio: we need to be able to drive the idea of technology and how we think about innovation within what we do on a regular basis. We were thinking about an innovation portfolio that focuses on the near term, the medium term, the far term, and transformation. I sit in Buildings, Infrastructure and Advanced Facilities for Jacobs, which is 33,000 people strong, which is a little bit of a challenge when you think about how you try to drive innovation in such a big company, how you bring that innovation forward, and how you think about the things we are talking about right now on such a large scale. We've been doing all sorts of studies, really in a part-time fashion: technology projects, AR, VR, material studies; we've been looking at construction and working with NASA to do that. But those are all part-time studies; we have billable work, and when that comes up, all of this sort of has to wait. So we started thinking about what we needed to do, and we decided we needed to be the change, to change how we thought about it. So for the first time in the history of our company we have a CTIO, a chief technology and innovation officer, at the corporate level; that has never existed in our history before. And from there down, we started thinking about how we would have to focus on what we needed to do. So this is my job right now: I run an incubator program and an emerging ideas program within the company. The incubator program is not quite what you would expect from a technology accelerator
program. The incubator program is very internally focused: when you have 33,000 people to begin with, you can probably find enough people to have a conversation around technologies within your organization. At my first workshop I had about 12 people, everybody from an expert who happens to be a civil engineer, to people who were talking about planning, to interior designers, people like that. The incubator program is really looking at these transformation ideas. We identified about nine separate themes that we thought we needed to focus on, and from those we are currently working our way down to about four or five ideas; we think we have come down to maybe one or two ideas that could potentially transform who we are as a company. I can't talk about them; they are interesting, to say the least. On the other hand, the emerging ideas program is the other part of the program that I run, and when I say emerging, I actually mean ideas emerging from the practice: people who are working on projects with clients day to day and coming up with these ideas, saying, if I could just do this, if I could just take data and use it in three different ways, I could potentially do something with that. So the idea is: can we take those ideas, lift them out of a client situation or a project situation, fund them a little bit, and see where we can take that? We've been doing different ideas from there, but really the goal of that program is to teach us how to think about ideas. We are all incredibly creative people in everything that we do; coming up with ideas is the easy part. What you do with those ideas when you have them is the much, much harder task that we are trying to figure out: how do you think through an idea in an industry situation, how do you build a business case around it, how do you describe how this is going to change what we do? So the program that we are building
is based a little bit on the Kickbox idea, which you may be familiar with: it helps you structure an idea in a way that you can proceed from it, take this big flash of inspiration that you have and work through it. We think of it in three different steps: generating, validating, and implementing. Generating is what we do anyway. For validating, how do you confirm this is a good idea? I built within the company what I call a discipline to create trust: I send an idea out to 40 or 50 people and ask, what do you think, does this make sense, can we proceed with this, do you think this needs to be funded, and how much, and things like that. And then you take it into what is for us, in some ways, the hardest thing: implementation. How do you start implementing this on more than the one project or the one market that we talked about? And then this is the other really, really hard part of what we do: we need to be comfortable with the idea of failure. We need to be able to say that it's okay if this idea doesn't work out; we invested a little bit of money in it, but this isn't the direction we need to go, this isn't quite working out. This is also really, really hard for us, because when you come into a company in this industry you are very much expected to be successful at every step. So making that culture change has been an interesting process for us. We are trying to build a culture of innovators, not just innovation. We want to find who can take an idea and structure it all the way through to a place where we could actually implement it on our projects, and not be afraid of failure. We ask for seed investment, we ask for a proof of concept, a minimum deliverable of some sort, and then we take it from there. And then, if the idea
really does stick, we invest in it. The themes of AI are big there, or at least the idea of taking data and turning it into knowledge, and what that means for us: beyond your data, what does it mean from a technology and knowledge perspective? We hope to take this into what we are currently planning: pilot innovation centers across the world. I should have mentioned that while I sit in our buildings business, we have a primary business in aerospace and technology, which is incredibly focused around the ideas of cybersecurity, IoT, and prediction. So we are trying to merge those sorts of technologies and the expertise that we have in both lines of business and find ways that we can work together. These five separate innovation centers will focus on these five themes, with the incubator and the emerging ideas programs feeding into the ideas we could work on. So while this is all really exciting, I want to take a few seconds to show you things that are not quite as exciting but that are hopefully getting us there. These are actual projects that we are working on. When we speak about AI, and I'm sure we will in our discussion, for us it's the idea of data and moving that into knowledge. And when we think about that, we think about how we want to acquire or generate that data, how we are going to evaluate that data, and then take it into implementation. So these are some of the ways we are thinking of acquiring that data. This is a project that we are doing for Transport for London: we use mobility data points across the city of London to try to track how people are traveling through the city, to be able to help Transport for London with their congestion plans. Then we went into the evaluation of that data, the simulation of the process, as you would expect. This is the other project that we are working on,
which is the Port of San Francisco seawall project. There is a seawall around some of this coastal area, and we have been trying to run simulations across it: how would this perform in various different events? That's still at the simulation level; we are hoping that we can take it into a more EIS-like process, where we can start thinking about how this could potentially work. Just one quick thought: you have two really big levers, one is a 33,000-person group, and the other is the AIA. Yes, sir. The final speaker of the afternoon, before we go into the discussion, is Douwe Osinga, principal engineer at Sidewalk Labs, where he works as the tech lead on generative urban design. Before Sidewalk, Douwe founded Triposo, a mobile travel app, in Berlin; before that he worked for Google in Switzerland and Australia, and he is an author of the Deep Learning Cookbook. It's a great pleasure to be here, thank you so much. Yeah, it's a great pleasure to be here; I hope I can follow the other presentations just now. Where is this? Yeah. I want to talk about five things in 15 minutes, so that's going to be a little short. I'm from Sidewalk Labs; we are working to explore what a city could be like, and one of the ways we do this is using urban generative design. We'll start with the underlying problem. When you're trying to build a district, or a little bit of a city, you have all these interlocking systems that you're trying to build at the same time, and they all have their own requirements, and a lot of them are simulation-based. And simulation currently is expensive, it's slow, and it makes for a situation where you have all these systems that talk to each other, and it creates all this latency, where somebody changes something about, say, the buildings, and that trickles into, obviously, the cost structure. Typically what you get is that these people get together four weeks later and they have to redo their simulations, but all the base assumptions
have changed already, and it all just goes every which way, and it makes it hard to get to a shared set of goals. And even if you get to the shared goals, how do you make sure that you're collectively moving toward those things? That's the problem we're trying to work through: how do we put this all together? It might not surprise you that our answer is generative design, the name of our team, but I think I kind of gave the answer away by randomly clicking through the slides back there. So what we do is we start with a bunch of inputs, and they vary from the existing conditions, like what does the site look like, what's the sort of climate that we have, maybe we throw in some future climate scenarios too, because, you know, climate change; and then we have a bunch of things that we play with, like the type of street grid that we want to apply to this, the type of buildings, how much green space we want to allocate, how it works with transit. So we have this nice big input space that we then use to generate thousands, if not millions, of different potential designs, which we can then explore by going through this huge n-dimensional space where every dot represents a solution. And then, this is where the simulations come in: for each of these things we have these basic objective functions, and we try to combine them into something that is more interesting than just an objective function, something that tells you about the overall quality of the design. And when we generate all these options, we create this situation where you can actually explore the results. So this is our playground city; it doesn't actually exist, it's called Britishville, and it allows you to go through all these different things: you can switch on layers, you can see what the street grid is, what the massing is like, and you can look at what the performance of this is, and it gets you somewhere. So the next thing, and this is where AI, or
So the next thing, and this is where the optimization, the first real bit of machine learning, makes its appearance: what do you do with this, how do you optimize it? Because ultimately you don't want a manual to come out of this process; you want to focus on the important things, not on, you know, where the toilet goes (though maybe that matters too). What we do is we have all these outcomes, and you can sift through them, but what we're really after, and this hopefully goes to the discussion later, is not having the computer tell you, well, this is the answer, go build it. We want to get an idea of what the trade-offs are, what the shape of the solution space is. One of the important things there is this trade-off between exploration and exploitation: do you want to immediately go hill-climbing and find the optimal solution, or do you want to explore the entire space? A technique that is useful there is Bayesian optimization, which I want to quickly run through. Essentially, in this case we simplify things dramatically and assume that the shape of our space is effectively one-dimensional; it's this blue line. We've run two experiments, so all we know are the two red dots, and if we only wanted to optimize we'd say, well, the left one looks better than the right one, let's explore around the left one. But if you run Bayesian optimization on this, it will tell you two things: what it expects the scores to be, but also where there are interesting things to explore. In this case the most uncertainty, and the highest yield of knowledge, is to the left, so if we do another observation there, suddenly the whole uncertainty picture changes, and now the model realizes that really we should explore more in the middle; it sort of balloons into the shape of what we want. This process allows you to continuously balance exploration against exploitation.
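The Bayesian optimization walkthrough above (fit a surrogate model to the few points observed so far, then sample next wherever the expected payoff of new knowledge is highest) can be sketched as follows. The hidden objective function, the RBF kernel, and the length scale are all illustrative assumptions, not anything from the talk:

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, length_scale=0.2):
    """Squared-exponential kernel: nearby points have correlated scores."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """Gaussian-process posterior mean and std at the query points."""
    k_inv = np.linalg.inv(rbf(x_obs, x_obs) + noise * np.eye(len(x_obs)))
    k_star = rbf(x_query, x_obs)
    mean = k_star @ k_inv @ y_obs
    var = 1.0 - np.sum((k_star @ k_inv) * k_star, axis=1)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mean, std, best_seen):
    """Acquisition function trading exploitation against exploration."""
    z = (mean - best_seen) / std
    return (mean - best_seen) * norm.cdf(z) + std * norm.pdf(z)

f = lambda x: np.sin(3 * x) * (1 - x) + x   # the hidden "blue line"
grid = np.linspace(0, 1, 200)
x_obs = np.array([0.15, 0.8])               # the two red dots
y_obs = f(x_obs)

for _ in range(8):                          # the explore/exploit loop
    y_mean = y_obs.mean()                   # crude zero-centering for the GP
    mu, sd = gp_posterior(x_obs, y_obs - y_mean, grid)
    x_next = grid[np.argmax(expected_improvement(mu + y_mean, sd, y_obs.max()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))
```

Real implementations (scikit-learn's `GaussianProcessRegressor`, or dedicated BO libraries) also fit the kernel hyperparameters instead of fixing them by hand.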
You trade finding the highest-scoring solution against exploring where there is stuff you don't know, the knowns and the unknowns and all that, and you keep doing that, and at some point you end up sketching out this whole line and you discover where everything is; in this case there is still some uncertainty on the right, which is not actually going to bring us very much.

Right, moving on to slightly more experimental machine learning: generative adversarial networks. They were all the rage a year or so ago in machine-learning land, so we decided to take them for a spin and see if we could do something with putting walls inside apartments. What we did is we got our hands on a whole bunch of apartment layouts, and we wrote an algorithm, which is not really machine learning, that removed all the internal walls. The idea is that you train a generator that proposes where to put the walls, and a discriminator that gets the fake apartment layout and the real apartment layout and has to guess which one is real. In the beginning both of these networks are very bad, but it's the competition between the two that allows them to get better, and over time the generator starts generating things that are very hard to distinguish, while the discriminator becomes better and better at distinguishing. So this is one example: you have the input, which is basically just the outer walls, and what it actually should be; the display is a little too bright for this picture, but one thing I find fascinating is that our stripping algorithm accidentally removed the windows here, and the network put these double lines indicating windows back where the windows are supposed to be. So it actually recovers even details that we inadvertently stripped out. It does have a certain tendency to put, you know, staircases everywhere, because it has no idea which floor it's on,
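The non-machine-learning preprocessing step described above, stripping the internal walls to produce (input, target) training pairs, is easy to picture on a rasterized floor plan. This is a deliberately tiny sketch; the real pipeline presumably works on drawings with doors, windows, and stairs:

```python
import numpy as np

def strip_interior_walls(plan):
    """Given a binary floor-plan raster (1 = wall), keep only the outer
    boundary walls and erase everything inside. This yields the (input,
    target) pair for a pix2pix-style GAN: the generator sees the stripped
    shell and proposes interior walls; the discriminator compares the
    proposal against the real layout."""
    stripped = np.zeros_like(plan)
    stripped[0, :], stripped[-1, :] = plan[0, :], plan[-1, :]
    stripped[:, 0], stripped[:, -1] = plan[:, 0], plan[:, -1]
    return stripped

# A 6x8 toy apartment: outer shell plus one interior partition wall.
plan = np.zeros((6, 8), dtype=int)
plan[0, :] = plan[-1, :] = plan[:, 0] = plan[:, -1] = 1  # outer walls
plan[1:-1, 4] = 1                                        # interior wall
shell = strip_interior_walls(plan)  # generator input; `plan` is the target
```

The anecdote about the recovered windows makes sense in this framing: anything the stripping step erases, intentionally or not, becomes something the generator is trained to put back.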
of course. And similarly, if you give it something that still has all the furniture inside, it doesn't know anything about furniture; it just, you know, puts the staircase back. So, more examples: basically, when you do this, you get a machine that will just draw endless possible floor plans. Then we tried the same with Google Maps, which is sort of interesting. Google Maps has a custom view mode where you can hide certain details, and that's nice for this approach, because you can take a tile and tell Google Maps to render the same tile with buildings or without buildings, which we can feed into the GAN. The GAN now has to guess where the buildings go, and over time it actually develops a notion of where buildings go, and these are, you know, not too bad. One of the dead giveaways is that this area doesn't have any buildings, but since the GAN was trained to put buildings, it will generally always put buildings.

So, final example. This is another machine-learning technique that is fairly popular because the visual effects are really good. The basic idea is that you have an image of a certain style and a target image, here the Golden Gate Bridge, and the style transfer algorithm redraws the target in the style of the source image. Sometimes it works really well; Van Gogh is always a nice way to go in, because it's a very expressive style, while with subtler textures it's less clear that it works. So one thing we were wondering: can you apply this to cities? The street networks of cities, can you capture them as a form of style? It sort of works. Again, this does not render quite as nicely on this screen, but you can see that for Amsterdam you get these fluid, roundish lines and lots of water features thrown in, while for New York you
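The core trick behind the style transfer algorithm mentioned here is worth seeing: style is represented as correlations between feature channels (Gram matrices), which discard spatial layout, so "Amsterdam-ness" becomes a texture statistic. A minimal sketch, with random arrays standing in for the CNN activations a real Gatys-style implementation would extract from a network like VGG:

```python
import numpy as np

def gram_matrix(features):
    """Style representation used in neural style transfer: correlations
    between feature channels, discarding where things are in the image.
    `features` has shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between Gram matrices; style transfer
    minimizes this while separately matching the content image."""
    return np.mean((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2)

rng = np.random.default_rng(1)
a = rng.normal(size=(8, 16, 16))  # stand-in for one layer's activations
assert style_loss(a, a) == 0.0    # identical style gives zero loss
```

Because the Gram matrix throws away position, the algorithm can impose a city's "texture" on a new street network without copying any particular street, which is exactly why it can also invent unexplained artifacts like the gray bits.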
get the basic grid that you expect, even in the orientation that you expect, and then sometimes it throws in a little Central Park-like thingy, and it has some Broadway-type lines that cross it, that sort of thing. And then finally, for Istanbul, the downtown: very medieval streets that sometimes don't go anywhere, and it captures that nice structure. One thing that is unexplained here is where the different gray bits come from; we don't know. (How are we for time? Two and a half minutes.) All right, so this goes toward one of the questions that I think is really interesting around machine learning and optimization: the trade-off between coming up with something original and optimization that always leads to the same thing. This is part of our generator that proposes where to put parks. It does quite a reasonable job: sometimes it puts a big park in the middle, sometimes it puts parks on the side, sometimes it puts little scattered plots of green everywhere. The problem is that if you make a little change in the input, it's not really predictable what will happen; the parks might just dramatically change, and that means that if you try one of those hill-climbing approaches, you think you might be going in the right direction, but since the parks completely change, you can't really optimize, and that's frustrating. The other thing is the earlier observation that if everyone optimizes for everything, everyone ends up with the same single solution, which is not optimal; there's a paradox there. So one thing we have been playing with, you might recognize this, is to move away from the notion of optimization and go more toward growing and mutating, to be more inspired by biology than by economics, I guess. An experiment we ran is where we took an existing design for a parcel that we're working on and ran a mutator over it.
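A toy version of such a mutator (random per-building tweaks, each variant re-scored with the same objective function) might look like this; the building encoding, the move set, and the objective are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def mutate(buildings):
    """Randomly perturb one building: change floors, footprint, or position.
    `buildings` is an (n, 4) array of [x, y, footprint_m2, floors]."""
    out = buildings.copy()
    i = rng.integers(len(out))
    move = rng.integers(4)
    if move == 0:
        out[i, 3] += 1                        # add a floor
    elif move == 1:
        out[i, 3] = max(1, out[i, 3] - 1)     # delete a floor
    elif move == 2:
        out[i, 2] *= rng.uniform(0.9, 1.1)    # expand/shrink the footprint
    else:
        out[i, :2] += rng.uniform(-5, 5, 2)   # nudge the building around
    return out

def score(buildings):
    """Toy objective: reward floor area, penalize extreme height."""
    area = (buildings[:, 2] * buildings[:, 3]).sum()
    return area - 0.5 * (buildings[:, 3] ** 2).sum()

plan = np.array([[0., 0., 400., 5.],
                 [60., 0., 300., 8.],
                 [0., 60., 500., 3.]])
variants = [mutate(plan) for _ in range(200)]
better = [v for v in variants if score(v) > score(plan)]
```

Because the mutations are local and undirected, the surviving variants tend to improve the score in different ways instead of converging on a single look, which is the diversity-versus-optimization point being made here.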
The mutator basically just takes the buildings and randomly changes them a little bit: it might add a floor, it might delete a floor, it might expand a building, it might move a building around a little. It creates all these different variations, and then you run the same objective functions over them, and you get much more diverse outcomes that are actual improvements over the original plan, but they don't all look the same. You can see that the top two have nice improvements; the bottom one is way more dramatic and comes up with these super-tall buildings that you probably really wouldn't build, but it gives you a much more creative and interesting solution.

This is a fittingly diverse group of presentations to, you know, get us started on what I hope will be a longer-term conversation at the school about some of this. I have a number of questions, and I could probably keep going for quite a while, but I want the audience to know that I will confine myself to a few and then open it up to your questions. I guess the first thing I wanted to raise briefly is education, specifically education for artificial intelligence. One way to frame a very complex question quite simply, in this context, is: in this world that you've shown, what should we be training our future students to do? Should we be training them in math? In statistics? In coding? And/or should we be training them in the kind of qualitative, exploration-based ideas of human cognition and synthesis that are more aligned with the humanities and with literature? None of you is necessarily thinking exactly about training future students at an educational institution, but given all the work you've done, I wonder whether you have thoughts about that.

So I think, first of all, it's still really quite early in the development of this field; as you were
saying, with deep learning new things come out all the time, so keeping an eye on what might be applicable and what might not seems useful. But I expect that over time a lot of these things will just turn into modules that you can plug in and use for your work, and, you know, math is probably overrated if you're not going down that path. As in other fields, there's an abstraction hierarchy: different people work at different levels, and some get right down into the code while others just plug in modules that are controllable. So I think that's part of the conversation.

I can only speak from being an architect and doing what I told you about earlier, and from that perspective I don't think it's the humanities or the math; I think it's the ability to be adaptable. I've worked at this company for ten years and I've hardly done the same job two years in a row. That, rather than the math and coding, is the thing; I'm not quite sure math and coding is where we have to go right now, but the ability to think flexibly is really key.

We have a somewhat new major at Yale called Computing and the Arts, which requires students to take half of their major courses in computer science and half in one branch of the arts: architecture, theater studies, what have you. Many so-called digital arts programs just have you take a bunch of digital courses; the idea here is that you get an in-depth study of the humanities and the arts while at the same time getting an in-depth exposure to computer science. I think in the future the best and most interesting tools that will actually further the field of architecture are going to be made by people who have an in-depth exposure to both; they must have it, otherwise you end up with more of the sort of interfaces that I showed.

But what does an architecture student need to know today? In some of the models that you showed, one of the things I was
thinking about is that they all have strengths and limitations, and if you know something about the underlying models and what's out there, you can actually be critical about the answers you're getting. Because if you're sitting there navigating some curve or some set of characteristics, you really need to be critical about the answer: is this the right answer, or maybe it's not? The architect cannot give up control. And in order to do that, you must have some knowledge of what you're actually looking at, what the underlying models are. I don't necessarily think architects need to be able to program those models, but they do need the controls.

That's interesting. I think it also connects to this school's approach to all kinds of problems, including computation-related problems; if anything, this school is known for having a critical approach to design in general. I want to drill down for a second on the topic of machine learning. Like artificial intelligence, it's an overused term, and it's got such a buzz now, but when you say machine learning it gets a little more focused than when you say artificial intelligence. I'm going to try out on you a hypothesis that I've been developing about machine learning that I think is very relevant to its use in architecture and design. My hypothesis is something like this: machine learning, of course, requires data, and data equals past results. If we're interested in innovation, discovery, exploration, is there some incompatibility there? At first glance we think, oh, maybe machine learning can help us discover something really important that we didn't already know, that was there all along but that we just couldn't see with our human cognition; the algorithm can detect the pattern. That's really appealing for the kind of creative designer who wants to explore new design space. But at the same time, I think maybe there's a flaw there,
because in the floor-plan example, all the algorithm is doing (it's only as good as what you tell it you want) is trying to interpolate past floor plans. It's basically a version of the problem that some critics of AI and machine learning have raised about society and work generally: if you require a judge sentencing convicted people to use past data, you're going to perpetuate stereotypes based on past results and flaws in the data. If we require a designer to make designs based on past floor plans, we're never going to get, I don't know, whatever we decide is the next great architectural innovation; the open plan, say. No machine-learning technique would ever have come up with that. So do you have any thoughts about the potential promise of machine learning for design, but also this version of it that I'm painting, where it could be valuable for some things, like efficiency, while the promise of discovering something great in the data might be overrated?

So many thoughts. I think you make a fair point, especially around simple things like style transfer or these sorts of GANs. But I think everybody's favorite counterexample is AlphaGo, right, where the machine actually does come up with creative strategies that humans have never thought of, or have thought of but rejected as impossible. Then again, that's a very restricted game space, so maybe it's limited to that. As I said earlier, what I personally find really fascinating are embedding spaces, where every point represents a solution. Machine learning helps you project existing examples into that space, but it is also capable of showing you what's at the other points that you haven't visited, so there really is a possibility of discovering something. Think about how the streets in cities behave, what the layouts are: if you have an algorithm
that can reproduce what works in Paris but also what works in Manhattan, what happens if you look at 80% Manhattan and 20% Paris? Or, you know, you throw in a little bit of Amsterdam? That's sort of what I was trying to do in my presentation: if you can create these spaces (and I'm not only talking about machine learning now), you have something to explore; it's not limited to the observations you've already seen.

I think it's the interplay of networks, ones that are not fighting each other but cooperating with each other, and I think about that as a human-machine combination as well. A partnership is one way to look at where I see the most interesting things happening. There are possibilities, particularly in things like strategy games, where systems can actually develop novel concepts as they go. But I really like this idea; for example, there was a recent paper about designing fonts, where the system generates a manifold and you can move around on this mathematical construct, and what you see is a letter being varied in a very complicated way that you might never think of doing yourself. So I think one of the really interesting things here is the interfaces, because the opportunity for invention will not necessarily be seen by the algorithm but by the human. In that font generator, you could generate tons and tons of versions and still never see the complex combinations of parameters that are possible. That was really powerful in your work too, where you see the human interacting with this complex data-driven system at multiple scales.

I haven't seen the paper you're talking about, but I can totally imagine working on the math side and then seeing the font develop, and there's this new hybrid that you're describing, of the human developing a new tool, a new lever, to discover something new.
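The "80% Manhattan, 20% Paris" idea is a claim about embedding spaces: if each city style maps to a latent vector, the points in between are new, unvisited designs. Sketched with made-up vectors (in a real system an encoder would produce them, and a trained decoder would render the blend as a street network):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical latent codes for two city styles, as an encoder might
# produce them; these random vectors are placeholders, not real models.
z_manhattan = rng.normal(size=64)
z_paris     = rng.normal(size=64)

def blend(z_a, z_b, alpha):
    """Linear interpolation in embedding space: alpha = 0.8 gives the
    '80% Manhattan, 20% Paris' point, which a trained decoder could
    then render as a street network."""
    return alpha * z_a + (1 - alpha) * z_b

z_mix = blend(z_manhattan, z_paris, 0.8)
```

For high-dimensional Gaussian-like latents, spherical interpolation (slerp) often stays closer to the region the decoder was trained on than a straight line does, which is one reason GAN papers tend to prefer it.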
Along those lines, one thing we found in my studio, using some of the same generative design systems and looking at data points that each represent a possible design solution: somewhat by chance, we switched from plotting points two-dimensionally, one static graph at a time, where you would see the data points on two example axes, to an online tool that happened to let you see the points move between one graph and the other. We thought this was really cool, and it's cool because the human mind can detect things in that representation that you wouldn't see in two static graphs. By watching the points move, say, from their input-space visualization to their output-space visualization, with the points colored in a certain way, we started to see patterns that were fully quantitative, driven by the data, but that also took advantage of human perception, something about our brain's ability to detect movement. So it's not only color, it's movement, and we probably have that ability because the human mind is trained to spot a tiger coming in our peripheral vision. There's something interesting in this new thing that developed, and we're trying to investigate it more, but human pattern recognition is really interesting alongside machine pattern recognition, and so is combining them: how to interface them. Interface is a great word for it. Let's make sure we have time for questions while we still have some; questions can be broad or specific.

This is a kind of broad question, but okay: within professional architectural practice, how far are these tools from, let's say, replacing what we do now as architects? Or are we really far away? Do you see any of this? I'm happy to
say that, one, the short answer is no, I don't think any of this is replacing us any time soon, and I don't believe it would ever replace us as architects; I speak for myself. But I do think, from what I see, that we are at a tipping point around generative design. That's the piece that is getting really close to where we are, at least in everything that I'm seeing. There may be other machine-learning options down the road, but generative design, as I understand it, is where we are: creating millions of options and picking the best ones. That's what we will be doing, if we're not doing it already.

Before the spreadsheet was a spreadsheet, you had people who did spreadsheets on paper. If you had a business, you would go to your accountant and say, hey, what happens if the interest rate rises to one percent, and he would go to his big paper spreadsheet and fill out all the numbers, and at the end of the review you'd ask, what happens if it's one and a half percent, and he'd go back to his room and do the whole thing again. You don't get very much knowledge out of that. Then computers gave us spreadsheets, and that did not kill the financial industry at all; instead it opened up this huge opportunity, because you can explore these spaces, whether they're financial or architectural, much better if you have the right tools.

I will get to your question; I just want to say that I agree with that vision, and it's what we should advocate for, but we do have to advocate for it. In other words, the flip side of that question is: if 90% of buildings are already built without hiring an architect, so we're already in a problem space, is it going to become 95%? There are these forces that we never talk about in this school, by the way, but they're there for a reason: making buildings is expensive, and a lot of people making buildings want a return on investment
quickly. It's a profit-driven industry; all industries are, but here the stakes are super high. So little by little, as we talk about the most high-end examples of AI (and I'm participating in that too), are we just missing the forces that already account for 90% of buildings and will push it to 97%, while we're slowly squeezed out, and all of a sudden it's 99.9%, the schools close, the whole market is gone?

I actually think it's moving toward us rather than away. For example, when we set up a project now, obviously we're not yet pricing in these tools, but we are always selling the short term: cheaper, faster, we get you to your building as fast as we can, so we take less money because it will cost you less money. So I think these tools will let us work on more of that 90%.

I totally agree, but you're advocating for it, and I think you have to advocate for it. Your business model might be: we can use these tools to do it faster, so we don't have to charge you as much, and therefore some of that 90% will hire an architect, because before it was just too expensive.

We still have to make that case every single time we go through approvals. I would also say that if you can bring in these objective functions and these learned models that prove that what you're doing is better than the baseline, it's not just about making it cheaper and faster; if you can prove that you deliver higher value, it's hard to say no.

Absolutely, totally, and, like you showed in the slides, you can show the client that trade-off, the data on the trade-off between the things that are important. So I think it will require actively advocating for the value of it and saying, look, you're getting so much more for this.

I think one of the places where we have struggled as a practice, and we're trying to change that, especially around this data-driven
environment, is in trying to extend our effect on a building after it is built. We talk about asset management as something entirely different, but if we can draw data throughout the life of the building, then when we say we're going to deliver a certain level of sustainable design, we can spend 15 years proving that it performs as it should, and that helps us make the case.

Which, by the way, might suggest that maybe we need to be teaching trade-offs, and maybe we need to be teaching post-occupancy evaluation.

I think that the work of an architect is a very complex problem, a very large one, and the knowledge embedded in the design decision process is more important than generating a lot of options. Do you see a way of using machine learning to encode this, to bring the knowledge of the design decision process into your machine-learning tools?

Two thoughts. One is that in the ideal case you hope that your algorithm can learn from examples and figure that out. The other thing I've been starting to look at is bringing in a form of learning where you actually have a model that learns from the modifications you make, which ones are good and which ones are bad, so it's not just a point in space but the actual journey: the building and the cut-out and the reshaping and the orientation.

I think there are also possibilities in the stroke analysis that I briefly touched on at the end of my presentation. We can do an analysis in which, you know, ten different people draw silhouette strokes and we recognize that, but we can also look at your strokes in particular and learn a lot about your design process. So there are possibilities for assisting you in your own design process, your own methods, your own style, rather than just importing other examples.

I think there are two threads here: one where AI is automating what is already in the practices of architects, scripts and
digital workflows, and the other, which you both just touched on, is this idea of augmenting design: new design tools that don't just solve existing problems but open up a whole new way of working; drawing in 3D from a sketch is kind of an amazing experience, like the ones we saw. So I wonder what percentage is automation and what percentage is augmentation. I don't really know which way the industry is heading; I know which way I personally would love to see it go. If you bet on automation you would probably be right, but I think that's just because it's a direction that's already there: you can see it, you can comprehend it. And I agree with you; I think there is a whole conversation to be had about augmentation and more experimental design, which is why we're starting with events like this.

Thank you so much, that was very interesting. Yesterday at Columbia we had a conversation, led by a neuroscientist who specializes in AI, called Human Rights for the Future, discussing the ethics of neurotech and AI. This person spoke about the emergence of brain-computer interfaces. Artificial intelligence is very much inspired by neurology, and with the emergence of this knowledge come a number of ethical issues around the brain-computer interface: access to augmentation, but also the feedback from it, which is interference with thoughts, a sort of expansion of the privacy issues we're facing at the moment. And then the idea of neuro-rights, in terms of identity and free will: with the emergence of AI and the mapping of the brain, an understanding that human will is determined, that we don't necessarily have free will as individuals, though maybe we have free will as a species. Anyway, we also talked about equal access to augmentation, to AI and digital tools, et cetera; so there's an emergence of human rights issues around AI. In relation to that, around AI and neurology, I'm interested in talking about architecture as a practice that's very much
rooted in a territory that's at peace. Architecture, in the traditional sense, only happens after the war, once the territory and the society have settled; for us to be constructing buildings pre-requires a stability that we can't take for granted. If we zoom out from that, and consider the fact that architecture also happens without architects, and we try to understand our positioning as people who can go back and forth between the humanities and the sciences, we have a cognitive ability that's a bit unique and that is not necessarily encouraged by the structure of academia. So in relation to that, I'm interested in looking at architecture as a practice rooted in this idea of peaceful territory, one that looks to preserve that peace through interventions that are future-oriented and that foresee possibilities for conflict; in this case, the conflicts that AI and neuroscience, or the brain-computer interface, can bring forward. I'm wondering if that's something you're interested in commenting on, or whether you see a kind of agency our field has, with the emergence of these human rights issues around AI, that makes us suited to tackle them. These are extremely complex issues that definitely require a lot of specific knowledge, and I'm not sure that people trained in human rights or people trained in neurology are necessarily best placed to approach problems that require access to all of these disciplines, whereas as architects we have always been orchestrating these various knowledges together: working with engineers, working with politicians, working with communities, working with the people, and synthesizing all these voices through our interventions. So I wonder if you would like to talk about that, in relation to education or in relation to how we position ourselves
professionally in the face of these massive transformations.

That sounds like quite a talk. I think the key process is consensus, and you touched on that a little. In theory, we have to be able to build consensus across all of these fields. And on the idea of peace-building, and pushing back against the notion that architecture's peace and values aren't practical: for example, one of the things we are doing around reconstruction is working with the Army Corps of Engineers to see whether you can use machines to do the work, and even operate in the danger zone, so you don't have to send your people in; an enhanced way of doing the reconstruction that's needed. That's not exactly your question, but just the idea of what technology and architecture can do together, I think, has to be one of the implications for people in that sense.

I think we're just about running out of time, but here's a half question, touching on what you were saying: machine learning without data, where the assumptions come first, not the data. Any thoughts on the implications of that? Or on models that are not predictive, or predictions that don't depend on a large set of prior observations made about the world, turned into a form of data, and then used to construct the model? It feels like that's the predominant approach, and it carries a lot of bias: who gets to model, who gets to gather the data, the economic scale involved in gathering this kind of data, what kinds of actors can use what kinds of data, what kinds of actors can collect what kinds of data, and also what data even exists in the first place. So, any thoughts on whether there are alternatives to a data-first model in your practice, and how
would that work?

So we do a lot of data collection in our work, and often my students will have an answer in mind, a model in mind, but I try to encourage them to be very open in looking at it, to say: we don't know yet. We spend a lot of time collecting data, maybe starting with some question, but then I think models really can be built on what you learn from that data. There's an art and a science, I think, to asking the right questions of that data to get the right insights, but you don't need to build the model first; you can start collecting information with an open mind, look at things as they emerge, and then derive a model from that.

I was going to mention, apropos of Deep Blue: the name was a nod to Deep Thought, the computer from The Hitchhiker's Guide to the Galaxy, which they switched on without connecting it to any data, and, you know, it got somewhere before anybody switched it off. But I think that assumptions are always based on data, whether that's history or education; when we make assumptions about how a model should behave, they rest on some sort of observation. So the model, the assumptions, and the data go together. On the other question, about how some data is just easier to get and how much that drives development: certain kinds of computation don't require data at all. A hammer doesn't generate data, but that doesn't disqualify it as a tool; there are other ways of thinking about artificial intelligence than data-first ones.

Maybe just one quick final question as I wrap up. I'm wondering if any of you have thoughts about practice or industry versus academia: what part of all of this discussion should be happening in practice, and what should be
happening in academia? What should the roles be? Another way of framing that would be: who's leading in the context of data and architecture, who's leading at the moment? But maybe more broadly: what should the models be, not just where are we now?

I agree, and also, with computation more generally, architects often end up adopting tools that were generated in completely different industries, but ideally architects should be driving that. In schools of architecture it's often practice we're trying to keep up with, but I think there's a real need for longer-term thinking about it.

That's a perfect way to end: with a challenge. Thank you very much.