So my name is Bomani McClendon. I'm a computer science student in university right now. And in the summer of 2016, I worked on the software for an art project called Shrumen Lumen. Shrumen Lumen is an art installation made up of five mushrooms that glow and move. There are about 1,500 LEDs in each one, and the tallest mushroom is over 20 feet tall. The body of the mushrooms is made of translucent, hand-folded plastic, and each mushroom has a pad out in front that you can step on to trigger the cap to extend. Initially, we prepared this for the Burning Man festival in Nevada, but we were also invited to present it at a design festival in Dubai in December. I joined the team in early July, about two months before we had to take it out to Burning Man, and at the time we had no software or electronics. So my main role was to build the software and then work with the electronics lead to do the hardware-software integration. Previously, I didn't think of myself as someone who'd be contributing to an art project. Most of my experience so far was in web development, so it was kind of unexpected for me to jump into a hardware project. But I was motivated to do it by the creative director for the project, whom I met at the company I was interning for at the time. He was an aerospace engineer by training turned product designer who did these large-scale art projects in his free time. So I figured that if he was able to do it, I should give it a shot. But this was a bit scary, because I was setting out to help build a major art installation that needed to last in the desert for over a week, doing all the software and electronics work within two months, outside of our day jobs. And it fell on me, someone with limited experience developing for hardware, to make sure that the software that enabled these things to function was solid. So time was very limited.
And one of the things that I found working with other heavily invested designers and engineers was that everyone wanted to try out a bunch of different things to test concepts rapidly: different interactions with the pads, the lighting, and so on. Because there were so many changes being made, I found it difficult to make solid progress on the software, which was becoming more and more critical as our deadline approached. So one thing I quickly realized was that we needed to make it easy to experiment. To explain how I carried this principle into the project, I'll talk about the system that we actually implemented. Each mushroom has one Raspberry Pi that acts as its controller, and each mushroom includes three subsystems: a weight-sensing pad, a cap extension system, and a lighting system. The weight-sensing pad is comprised of a weight sensor and an LED strip that changes color when someone steps on the pad, and both of these components are connected to pins on the Raspberry Pi. The cap extension system is comprised of a motor controller and a linear actuator, which is essentially just an extending rod. The motor controller provides voltage to the linear actuator when it gets a signal from the Raspberry Pi. So when someone steps on the pad, the linear actuator extends the cap, holds it in place for 60 seconds, and then brings it back down. The lighting system is made up of those 1,500 LEDs I mentioned, in strips that connect to a unit called a PixelPusher, which is essentially an LED device driver that allows us to map images to our matrix of LEDs. All the PixelPushers from each of the mushrooms are connected to one wireless router, and from there, animations are controlled wirelessly by a tablet application, as you can see in the GIFs over here.
So on the software side, abstractly, each of the hardware subsystems communicates with a central server that controls the overall state of each mushroom, and the method for communication is HTTP requests. The reason for this is that it allows the subsystem code to remain really dumb and moves all the logic for the mushroom to one place on the server. In practice, when the Raspberry Pi boots up, it uses Node.js to run a local Express server. In addition, C or Python scripts associated with each of the hardware subsystems are configured to start when the Raspberry Pi boots up as well. These scripts are responsible for interfacing with the hardware by writing signals to, or reading signals from, the Raspberry Pi's GPIO pins. And essentially, these scripts just poll endpoints on the server to determine what state they should be in or, in the case of the sensor, to report data back to the server at consistent intervals. So given that we have this pretty simple process for communicating with our hardware subsystems, we can focus our attention on the Express server, where the logic is handled. There, I chose to use an object-oriented structure leveraging ES6-style JavaScript classes. With this structure, we can define the high-level interactions between the subsystems comfortably. The state of each subsystem is managed in its associated class, and the applications polling the server (the hardware subsystem scripts) get access to that state through getter and setter functions defined in the classes. So let's take a look at the basic structure for managing the color of the pad lights. On the server, we have this class. It has a simple constructor that includes a state variable, a getter function for sharing that state, and a few functions for setting that state, for example, setting the pad to red or to green.
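A minimal sketch of a class like the one just described might look like this. The class name, the color values, and the method names here are my assumptions for illustration, not the project's actual code:

```javascript
// Server-side state holder for one mushroom's pad lights.
// (Names and color values are illustrative assumptions.)
class PadLights {
  constructor() {
    // State variable holding the current pad color.
    this.state = 'green';
  }

  // Getter used by the request handler that the Pi's script polls.
  getState() {
    return this.state;
  }

  // Setters that other classes on the server call to change the state.
  setRed() {
    this.state = 'red';
  }

  setGreen() {
    this.state = 'green';
  }
}
```

The subsystem script on the Pi then just polls the matching endpoint on the Express server at a consistent interval and applies whatever state comes back to the hardware.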
And then we instantiate an object of this class, which other classes on the server can use to change the state with the setter methods. In the request handler, which the pad-light subsystem's script will be hitting, we simply return that object's state as the response. The pad-light subsystem's script gets that response and changes the color of the pad lights accordingly. The reason this helped a lot was that it broke the software into a set of subsystems that were easy to test and easy to reason about. It abstracted the functionality of each of the subsystems so that it was easy to do quick iterations of relatively complex coordination between them. For example, writing logic like "when someone has been on the pad for four consecutive readings, turn the lights on the pad red, extend the linear actuator for 30 seconds, wait for 60 seconds, then retract the linear actuator for 30 seconds and turn the pad green again" became really easy to do with just a few lines of JavaScript code on the server. That was helpful because it allowed us to test a lot of different concepts for our interaction paradigm really fast. And because the lighting system was controlled by a tablet application, it was actually really easy for us to test out a lot of different animations and colors, choose the ones that we liked the most, and then use those at the festivals that we took the piece to. And lastly, this played to my burgeoning strengths as a web developer, because it made it easier for me to move fast, and it gave me the opportunity to potentially include more web-like features in the future. So after many weekends of work and a few tests at our build site in Palo Alto, we were ready to take it to Burning Man. I was working at my internship at that time, so I couldn't actually go with the team to set it up, but I waited patiently to hear how it went, and it turns out that it went well.
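To make that concrete, here is a self-contained sketch of what that coordination logic could look like. The class, method, and endpoint names are hypothetical, the timing values are passed as parameters (defaulting to the 30 s / 60 s / 30 s cycle just described) so the cycle can be exercised quickly, and the Express server itself is reduced to a comment to keep the sketch runnable on its own:

```javascript
// Hypothetical sketch of the server-side coordination between the pad
// lights and the cap's linear actuator. Not the project's actual code.

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

class PadLights {
  constructor() { this.state = 'green'; }
  getState() { return this.state; }
  setRed() { this.state = 'red'; }
  setGreen() { this.state = 'green'; }
}

class CapMotor {
  constructor() { this.state = 'idle'; }
  getState() { return this.state; }
  // Drive the motor controller for the given duration, then go idle.
  async extend(ms) { this.state = 'extending'; await sleep(ms); this.state = 'idle'; }
  async retract(ms) { this.state = 'retracting'; await sleep(ms); this.state = 'idle'; }
}

const padLights = new PadLights();
const capMotor = new CapMotor();
let consecutiveReadings = 0;

// Called each time the weight-sensor script reports a reading.
async function onPadReading(occupied, { extendMs = 30000, holdMs = 60000, retractMs = 30000 } = {}) {
  consecutiveReadings = occupied ? consecutiveReadings + 1 : 0;
  if (consecutiveReadings === 4) {
    padLights.setRed();                 // pad turns red while the cap is up
    await capMotor.extend(extendMs);    // extend the cap
    await sleep(holdMs);                // hold it in place
    await capMotor.retract(retractMs);  // bring it back down
    padLights.setGreen();               // ready for the next person
  }
}

// The request handler the pad-light script polls would then be as simple as:
//   app.get('/padlights', (req, res) => res.send(padLights.getState()));
```

Because each subsystem hides its state behind getters and setters like this, the whole interaction reads as a short sequential script rather than scattered hardware-level bookkeeping.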
The team sent me these super low-res photos as soon as they got it set up, and later on I was able to see higher-resolution photos like this one after the festival ended. When we were invited to come show the project in Dubai, I actually did get to go, and it was super cool to see people interact with the project for the first time in person. I also got the chance to make some live changes since, as we all know, software is never finished. So while many tourists in Dubai might have had an experience that looked something like this, for me it looked a lot more like this. And I was struck by the different kinds of experiences that people had around the art. Everyone from small children to the elderly was able to enjoy it. And when people came to me to talk about the project, it was really cool to talk about it not just as an engineer but also as an artist. So in conclusion, before I joined the Shrumen Lumen project, I would never have imagined myself implementing an art project. Yet by the end of 2016, Shrumen Lumen had been enjoyed by hundreds of people across two showings and counting. Through this experience, what was reaffirmed for me was that software is more than just a tool and that coding is more than just a job skill. Software is really a medium, and creative expressions using it aren't limited to just screens. I'll be around to talk more about the installation if you're interested, but as a final note, I encourage all of you to take your skills in computational thinking, engineering, and mathematics and give making art a shot. Collaborate, innovate, and keep building beautiful things. Thanks.