As you can see, it's worked at least somewhat successfully, since I'm controlling my camera from my computer right now. So, a few months ago, I got my first camera ever, one that wasn't also a phone. And it's a lot of fun: I take more photos, I buy more cameras, and I hopefully take better photos. But it's also a little intimidating, because my camera has tons of settings, and honestly, even now, I still don't know what most of them do. And even when I have an idea of what they mean, I don't really know how to use them.

So I decided to start small. Most cameras have something called scene mode, which is just a bunch of presets for various situations, like a mode for sports, or a mode for food, or a mode for fireworks. And there was one preset called Live Composite, and what it does is blend a series of photos together. That sounded cool, so I tried it out. This is what the result looked like. Sorry, I just have to transfer that back to my computer. Do-de-de-de. All right. So that does not show up well, so just picture something amazing. All right. So yeah, it's obviously amazing. And it's weird and cool. For those of you (which is everyone but me) who can't totally see what that is: it's a picture of a bridge, but really, there are just lights everywhere, and I didn't know what had happened. I figured it would just be a continuous exposure, or maybe it would blend images together, but the effects were way different than I thought.

For instance, if I actually tried to hold the camera steady instead, then the result was... it's a good thing I don't actually have that many slides in this talk. So I might have to cheat on this. These are not showing up that well, so I'll show you the live view and the fake view instead. As you can see, the effects are kind of cool. It's a shot of traffic, but there aren't any cars.
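(Judging from results like the traffic shot, Live Composite seems to keep, for each pixel, the brightest value it has seen across all the frames, what image editors call a "lighten" blend. That's my inference from the photos, not anything from Olympus' documentation, but here's a tiny sketch of the idea:)

```python
# A guess at how Live Composite combines frames: for each pixel, keep the
# brightest value seen so far ("lighten" blending). Frames here are flat
# lists of grayscale values; a real image would be a 2-D array per channel.

def live_composite(frames):
    """Fold frames together, keeping the per-pixel maximum brightness."""
    result = list(frames[0])
    for frame in frames[1:]:
        result = [max(old, new) for old, new in zip(result, frame)]
    return result

# Three "frames" of a dark street: a bright light moves across the scene,
# but the dim background never accumulates, so only the light trail stays.
frames = [
    [200, 10, 10, 30],
    [10, 200, 10, 30],
    [10, 10, 200, 30],
]
print(live_composite(frames))  # [200, 200, 200, 30]
```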
There are just the street lights remaining, and I thought that was neat. So I decided I needed to do some experimenting. My idea was: couldn't I just point my camera at a screen? Then I could directly control both the timing and what was actually on the screen, and work out what was actually happening.

I figured I'd just use PICO-8, which is a fantasy game console that you program in Lua. It's super not powerful, but it runs at exactly 30 frames per second, and I know how to use it, so: good enough for science, maybe. And what resulted from that was (let's see if this shows up any better... it does) pretty much just a white screen. So as it turns out, 30 frames per second is maybe not fast enough. The delay between exposures was less than 33 milliseconds, but even at 100 milliseconds I could see individual dots, which meant pixels weren't being fully exposed, and it wasn't until 200 milliseconds that pixels appeared fully there. So I tried a bunch of experiments, just moving lines across the screen and other basic things, and I felt like I mostly knew what I was doing here. But it would be cool if I could actually program something.

Now, unfortunately, my camera isn't open source, so obviously there's no way I could just control it myself. But as it turns out, there is a camera that Olympus makes that is open. It's a weird little thing, designed to clip to a phone; it's controlled over Wi-Fi, and the API is open. It looks something like that. (I don't own that image. That's Olympus' image, so don't sue me.) They have a manual describing the protocol. It's 120 pages.
It explains in detail how to send commands, how to receive notifications from the camera, and how to get a live view. And the good news is it all works over plain HTTP. Obviously that's really bad for security, because anyone could just connect to the network and hack that camera. But the good news is: I am anyone, and I do want to connect to that camera and hack it. So I figured that if their public spec is this insecure, their private protocol is not going to be any more secure; it's probably just the same thing. And you know what? I was actually right. It is almost exactly the same.

And one of the nicest things they've done (anyone who makes APIs: please do this, please do this, please do this) is that one of the commands just explains all of the commands you can run against the camera, and tells you all of the parameters you need to pass to those commands to get them to work. This is an amazingly nice thing, considering no one is supposed to be writing apps for this camera except Olympus itself, but obviously I'm not complaining.

So it didn't actually take me that long to work out how to take my first photo. I just had to switch the camera to shutter mode, close the shutter, open the shutter, and switch out of the mode. And, as you will not really be able to see, here's the first photo. This is an important lesson about testing that things will actually look right before you give a talk. Sorry to anyone who just got blinded by the flash. So I got my first photo off, and it was exciting. I had done it all manually, and that seemed pretty cool. Unfortunately, as you can see in the photo, I didn't actually get all of my head, just the lower half, but whatever. Also, this is technically the second photo; it's the first one that didn't show that I was in a very messy room, sitting around in only my underwear.
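(That four-step sequence over HTTP might look something like this. The host and every command and parameter name below are illustrative placeholders, not the real Olympus protocol; in practice the real names come out of the camera's own command-listing command.)

```python
# Sketch of the manual take-photo sequence as HTTP GETs, assuming the
# camera answers simple requests at a fixed address on its Wi-Fi network.
# Host, command names, and parameters are all placeholders, not the real
# protocol from the 120-page manual.
from urllib.parse import urlencode

CAMERA = "http://192.168.0.10"  # assumed camera address

def command_url(command, **params):
    """Build the GET URL for one camera command. A real client would
    fetch it with urllib.request.urlopen and check the response."""
    query = urlencode(params)
    return f"{CAMERA}/{command}.cgi" + (f"?{query}" if query else "")

# The four steps from the talk: enter shutter mode, close the shutter,
# open it again, then switch back out of shutter mode.
sequence = [
    command_url("switch_mode", mode="shutter"),
    command_url("shutter", action="close"),
    command_url("shutter", action="open"),
    command_url("switch_mode", mode="standalone"),
]
print(sequence[0])  # http://192.168.0.10/switch_mode.cgi?mode=shutter
```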
But this is the first professional-quality photo, which you can clearly tell from just the lower half of my face. Unfortunately, as soon as I tried to automate it, it would fail. The camera would still be writing to the memory card when I tried to switch out of shutter mode, and it would get mad at me, which seems fair. But I didn't know how to see when the memory card was done being written. The public spec had a way of triggering notifications, but that command just didn't exist on my camera. So it seemed like I was kind of stuck. Except: I did have a remote app that worked with my camera, and it did work, so clearly it was finding that information out somehow.

So I did the easiest, most logical thing: I made a fake server that pretended to be my camera, and pointed the app at it instead. As it turns out, that's pretty easy. I just said 200 OK to, like, everything it sent me, and whenever the app seemed mad about that, I'd just copy whatever my actual camera returned. And so pretty soon, my phone was really sure that my computer was a camera, about which I had kind of mixed feelings. But unfortunately, I still couldn't get it to take photos. When I tried to switch to the photo-taking mode, even with live view turned off, it would just immediately fail. And that seemed odd to me, since I could see all the requests, and they were all succeeding.

So I did the thing I was hoping I would never have to do: learn a new tool just to figure something out. I opened up Wireshark, a tool I don't really know how to use. As it turns out, Wireshark is pretty nice and simple if all you need to do is find a single TCP message (or, I don't even know if "message" is the right word for TCP; what I'm saying is, you don't need to know a lot to use Wireshark). I found a single request that was going to another port. So I said, well, what happens if I just put an echo server on that port? And suddenly, my camera could switch into photo mode. It was exciting.
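(The "pretend to be the camera" server really can be about that simple. Here's a minimal sketch: an HTTP server that answers 200 OK to everything, plus a quick self-check. The real version also had to replay recorded camera responses when a bare 200 wasn't enough.)

```python
# Minimal fake-camera server: answer 200 OK with an empty body to every
# request, no matter the path. The real one also replayed recorded
# responses from the actual camera when the app got mad at a bare 200.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class FakeCamera(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    do_POST = do_GET  # be agreeable about everything

    def log_message(self, *args):  # keep the console quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FakeCamera)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
status = urlopen(f"http://127.0.0.1:{port}/anything/at/all").status
server.shutdown()
print(status)  # 200, no matter what was requested
```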
As a brief aside, I had really hoped that I could get the live view mode working just by streaming a movie with VLC on my computer. Unfortunately, that didn't work. I was hoping that would be the best hack I accomplished during this whole thing. But it didn't matter, because I could see that all I needed to do was listen for notifications. And the notification format they had made up was pretty simple: just an ID, a message length, and a single-element XML document. Maybe not the most efficient way to send messages, but it worked, so whatever. And I'm still in their debt, because they told me what all the commands were.

So now I was in a place where whenever I pressed buttons on my camera, I could get notifications about what the camera was doing. Unfortunately, remote calls didn't seem to trigger notifications, which seemed backwards: if I was the one pressing the button on the camera, I already knew what it was doing. I was frustrated by that, so I decided to move on to other things. I tried to get switching options working, and that was failing. Then I tried reading options, which I had done before, and that was failing too. And I learned something: sometimes the best thing for a project is to just take a long break. I was getting frustrated, and I wasn't being smart about how I was testing things. After that break, I thought, hey, maybe I should just test that this all still works with the remote. And it didn't. So I learned the more important lesson about all projects: why would you assume other people's code works? As soon as I reset all of the settings on my camera, suddenly I could get notifications and I could switch settings, and I was pretty much done. Incidentally, while doing that, I learned there was a reason the camera always switched away from shutter mode when it was done.
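(A parser for that ID-plus-length-plus-XML framing fits in a few lines. The exact field widths and byte order here are assumptions on my part, two big-endian 16-bit integers, not the real spec's layout:)

```python
# Sketch of a parser for the notification framing described above: an ID,
# a message length, then a one-element XML document. The field widths and
# byte order (two big-endian 16-bit ints) are assumptions, not the spec.
import struct
import xml.etree.ElementTree as ET

def parse_notification(packet: bytes):
    """Split one notification packet into (id, element tag, element text)."""
    ident, length = struct.unpack(">HH", packet[:4])
    elem = ET.fromstring(packet[4:4 + length].decode())
    return ident, elem.tag, elem.text

# Build a fake packet and read it back.
body = b"<cardstatus>idle</cardstatus>"
packet = struct.pack(">HH", 42, len(body)) + body
print(parse_notification(packet))  # (42, 'cardstatus', 'idle')
```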
And it's because, for some reason, even if I closed the connection on my end, the socket wouldn't close on the camera's end, so it just would not listen for any new connections. But whatever: they told me what to do, and it worked. So eventually I got to the point where I could take a photo and actually transfer it off of my camera. And as you'll see, my fingers are crossed in the photo, and, well, fingers crossed you'll be able to see it. It was pretty cool, because... oh yeah, it works! Fantastic. Sort of. I actually take better photos than this. I think that's obvious by now. I'm not trying to brag, but I can do a bit better than this.

From there, it was actually fairly trivial to get to a working camera app, which I'm using now, and which works better when you're not taking pictures of photos and then putting them onto a screen. But it was sort of amazing that after all of this work, the actual program I wrote was really short and fairly simple. Getting there was neither short nor simple.

So the lessons from this, I think: underestimating things can be really good. I would not have done all this work if I'd known how hard it was going to be. And if I'm being honest, I might not have done all this work if I hadn't promised to give this talk at BangBangCon. But the thing is, when I actually got things working, I was very excited. My friend Eric can attest that every time I got something new working, I would run over to his desk and be like, "Eric, look what I did! Look what I did!" He was all right with the fact that I wasn't doing work even though I was at work, because BangBangCon was coming up. But also: you don't need to be an expert to do things. I know only rudimentary stuff about every single thing I did to make this, and it still worked.
Also, if you break things into small parts, it's really nice, because there are lots of moments where you get to say: I've got everything working except for the newest thing. So yeah, the code is online; this handwritten link will probably not be readable. Oh, I also should apologize: the reason the photo thing didn't work twice is that my photo printer ran out of ink while I was printing slides, and I actually had to cut some. Luckily, those slides didn't work anyway, so it wasn't really a loss. Thank you.