Check, check, check, one two, check. On behalf of my fellow organizers, I want to welcome you to the inaugural bang bang con West! This is a conference about the joy, excitement, and surprise of computing. It consists almost entirely of ten-minute talks, and as is appropriate for such a conference, I will keep my remarks short as well. Let me share some fun facts about our attendees. The attendee who came from farthest away is one of our speakers, actually: Nick Piesco is here all the way from New Zealand. And the attendee who came from closest, who I won't announce by name because I haven't vetted this with them, actually lives on campus. Incidentally, Julia Evans is the only registered attendee whose last name begins with E. Yeah, so we're so excited to have you all here. There are over 200 of us, including the organizing team, and we're just delighted to be hosting this conference for the first time here in this beautiful space. I have a few bits of logistical information to share. If you need to use the Wi-Fi, you should use UCSC-Guest and then follow the Byzantine instructions. If you need to use the quiet room, we have one of those. This is a space where you can go to work or get away from the crowd. It's across the courtyard, on the other side of this courtyard, and it's room 180 in the Engineering 2 building. There should be a sign on the door. The schedule for the day is on our website (there it is), and it shows what we're doing today in particular.
We have an unconferencing break at 3:20, which Jess is going to talk about after me, and she'll explain what that means. What else, let's see. Oh yeah, importantly, I want to say thanks to all of our speakers. And just as importantly, I want to say thanks to everybody who submitted a talk proposal. We got over 200 talk proposals for this conference, and we were only able to accept 30 of them. So thank you to everyone who submitted a talk proposal, whether it was accepted or not. You're the ones who make this conference happen. Thank you. Just as importantly, I want to thank all of our sponsors. It costs a lot of money to put on this event. My back-of-the-envelope calculation is that if everybody here paid $300 we wouldn't have to have any sponsors, but of course we don't want you to have to pay $300. This is a pay-what-you-want conference, so we make up the difference by having sponsorships. So I want to mention each of our fantastic sponsors. As I mentioned, the Baskin School of Engineering is one of our sponsors, as is MailChimp; they were both at our phenomenal sponsorship level. At the excellent sponsorship level we have Stripe, Airtable, GitHub, and Twilio. And last but certainly not least, at the awesome sponsorship level, we have the Recurse Center. More logistical stuff. There happens to be a... thanks for mentioning that. All right, we're gonna pause and figure that out. Check, check, one two. Hi, Mirabai, can you hear us? [Extended audio testing while the microphone issue is resolved.] All right, organizers, can you please come back.
All right, a few more logistical notes. I've been asked to mention that along the left sides of the aisles there are left-handed seats, so those of you who are left-handed may want to use those. In addition to that, we have an official hashtag (I have mixed feelings about official hashtags, but we have one): it's #bangbangconwest. We have two single-occupancy, all-gender restrooms in the back of this auditorium. We also have gendered men's and women's restrooms across the courtyard in the other building. I think it's time to introduce the organizing team, so let me say a little bit about this team. I've been helping organize bang bang con in New York since 2014; New York is where it was founded. This is the first time it's been here on the West Coast, and we decided to bring in a whole new organizing team to make this conference happen. I've been amazed at these people and what a great job they're doing. All right, we'll do the best we can. I'm gonna go ahead and introduce the organizing team. There are ten of us: we have Jess Retter, Sarah Chikazul, Dev Purandare, Dima Abuadis, Maggie Jo, Gina Lee, Taylor Hodge, Janice Hsu, and Joshua Wise. All of these people have been working their butts off to make this conference happen, and I'm incredibly grateful and proud to be part of this team. All right, with that I'm going to turn it over to Jess, who's going to explain unconferencing. Unconferencing is fairly simple in that it isn't organized: it's an opportunity for self-organization. We have some time set aside for it today, from 3:20 to 4:45. (My handwriting got sweaty and rubbed off because I'm nervous.) But 3:20 to 4:45 today, and there's a sign-up sheet just outside of that back door. And what we'd love to see is for people to just have an idea. There are already some up there; someone's like, let's talk about the favorite bugs that we found in the last year. So what you need to do is put up what you want to do.
And oh, I'm hearing the doubling too. Fun. Put what you want to do and where you should meet. Be specific, because everyone's gonna be meeting around in the same space. So be like, "the front right side of the auditorium" or "this door" or something. But don't actually write "this door"; say which door. And I think some people have already, like I said, written some stuff down about different hikes and other things. It does not have to be technical. It's an opportunity for you to meet birds of a feather and talk about things that aren't on the schedule that you would love to talk about with other wonderful guests. Sorry: it's to 4:15, not 4:45. So please... oh, that's the other thing: we do really want to be conscious about being respectful of the speakers that are coming after unconferencing. So please try very hard to make it back in time, to be in your seats and give them the audience that we would love for them to have for the talk they've been working on. Okay, thank you. All right. One last thing: I am going to talk a little bit about our code of conduct here at BangBangCon. We really want BangBangCon to be a welcoming experience for everyone, no matter your race, gender, age, sexual orientation, your technical background, anything. So it is very important for us; it's kind of a foundational concept. This is the thing we picked up from the original BangBangCon: we take it very seriously to treat others with kindness, and do not engage in jokes or inappropriate comments or anything that would make other people feel unwelcome. The harassment policy is on our website; I might pull it out to read a relevant section of it here. I'm very organized. And please, if you would like to report an incident, you can talk to any of the organizers in our red staff shirts, and we will make sure it is responded to appropriately. And another thing: it is not just comments, obviously.
It is also, you know, inappropriate touching, following people around, inappropriate photographs. The lanyards do indicate whether you are okay with photographs. A green lanyard means you're okay with people taking your picture. A blue lanyard means that you would prefer people ask you before posting anything publicly. And a black lanyard means you do not want anyone to take your photograph. It's okay to change your lanyard color from moment to moment if you want. But if you're taking photos of other attendees, it's important to try to respect their wishes regarding that. And the other thing, in addition to the actual harassment policy: the minimum that will happen if you make a comment that is extremely inappropriate is we'll tell you to not do that again. But for serious violations, people can be asked to leave the conference, because we really care about making everyone feel safe and respected. In addition to the harassment policy, we do have the social rules, which are a thing that we cribbed from the Recurse Center. These are things that will not get you kicked out of the conference; it is just a set of rules about being respectful of other people. And these rules are: no feigning surprise, which is, if somebody says that they have never used the command line, going, "What? You've never used the command line?" Because that's not a helpful comment for anybody, and it maybe makes that person feel like they're not welcome at an event like this. Another one is no well-actuallys. I, as a native well-actuallyer, know that it feels very important that everybody be factually correct about everything. But it is another way of making people feel like you care more about being right in a situation than about actually having a useful conversation. The third rule is no backseat driving.
This is more applicable in a working-type situation, but it can also happen in conversations. If somebody is talking about a different topic nearby, don't just shout your opinion into it. If you would like to join that conversation, go over and join that conversation. Don't try to inject yourself into everything that you overhear. And no subtle -isms. That is things like, "oh, that's so easy, my grandmother could have done it." Those things are not incredibly offensive, but they do encourage a mindset that certain people are not welcome in technology, and we want to discourage that. Now, if you or someone around you engages in one of these behaviors, it's more like: just say, hey, maybe don't do that, or come talk to an organizer and we'll tell them, because again, we want to create a welcoming and friendly environment for everyone. So I hope all of you have a wonderful bang bang con and feel welcome and loved. And are we ready with the technical? We're good to go. Okay, then I am delighted to announce that our first keynote speaker is Lynn Cyrin, who will be talking about the best parts of my favorite things. Is my audio good? I can only hear me. Okay, thank you so much. Rad, amazing audience. Are you ready to hear about the best parts of my favorite things? Well, I love it. Thank you. Okay, so my first favorite thing is this conference talk. It's me sharing things I think are cool. I love these things so much, and I would like for you to love them. You should use them too; in my opinion, they're really cool to use. And actually, more important than using them is: build more things like this. The more things there are in the world that people really, really love, the better we all experience the world. And also, bang bang con keynotes could be 40 minutes, or they could be ten like everyone else's talks; I kind of tried to go for the middle. Also, some of my favorite things are not mentioned here.
If it seems like I'd really, really like something that's not on this list, I totally do. You should come poke me and be like, "hey, you sound like you love Redis." I do. Also, the last thing is: if you really like it, that makes it good. Go, have fun, make stuff. Please love all of your amazing things. And I tried my best to not have any "here's this other thing I don't like" in here; everything in here is things that I really like. All right, so my first section is my favorite ways to read stuff. I find that as a programmer, most of what I'm doing is reading code, like, all the time. I can't read books anymore, because code is just so interesting by contrast; it's completely reformed the way I want to read things. And I realize that I have all of these patterns around how I really like to read code. One of them is: I really like reading code under version control. I figure most people are using version control, but I really want to talk about why I love using this thing. The first thing is that whenever I join a new organization, I walk the git history of every large repo all the way back to the beginning. I time travel through all of your commits. So think about that when you're writing the commit messages. But it's really fun: I'm joining this org in, like, 2019, and I'm like, yeah, it's an established startup, everyone's been working here so long. And I'm looking at commits from, like, 2010, and they're like, "does this even work?" It's so fun and cool to time travel to different parts of your organization, different parts of your code, and see how people were experiencing things. And Git and SVN and Mercurial (I don't know the other ones), they all allow that, and that's really cool. The second thing is that resolving complex merge conflicts is, like, ultimate teamwork. I'm working with my coworker, and they're like, I'm gonna work on these three lines.
And you're gonna work on those four lines over here. And we're just gonna squeeze in, we're gonna put two Jenga towers together, and then they're gonna come together to make, like, a million-dollar product. No, really. I promise. And that works on a regular basis. I'll constantly tell my coworkers, like, I'll edit the lines in the database layer and you get the application layer, and then it just works. Right? And you can have hundreds of people doing this at once. And that's so cool. You can't do that with, like, a Word document. Well, okay, thanks, Google, for allowing us to do that with a word document; but before Google Docs, you couldn't do it with a Word document. The last thing that I really, really love about version control is that multithreading makes me feel like I have superpowers: specifically, the superpower of there being three of me. I can handle three branches at once. I don't know about the rest of you, but constantly, if I make a pull request, I'm like, it's too large. Okay, I'll split it in three. And here's three pull requests, three successive pull requests that all combo into each other. And I'll go to my boss and it's like there are three of me: me one is like, so here's the application layer. Cool, cool, cool. Then me two is like, here's the database, right? And me three is like, here's the front end, right? There's a concrete encoding of having three separate streams of work for the one thing that you're making, and it's so cool that we have software that allows that mental model. A second favorite way to read things is things with semantic versioning, or any versioning with a well-known scheme; semantic versioning is the one that I know.
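As a minimal sketch of how semantic versions can be read mechanically (assuming plain MAJOR.MINOR.PATCH strings with no pre-release tags, and names invented for illustration):

```python
# Read semantic versions (MAJOR.MINOR.PATCH) as numbers, not strings.

def parse_semver(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints so versions
    compare numerically instead of alphabetically."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

releases = ["3.0.0", "2.9.5", "3.0.1", "2.10.0"]

# Plain string sorting would put "2.10.0" before "2.9.5"; int tuples don't.
ordered = sorted(releases, key=parse_semver)
print(ordered)  # ['2.9.5', '2.10.0', '3.0.0', '3.0.1']

# Under semver, a major bump signals a breaking change for consumers.
def is_breaking(old, new):
    return parse_semver(new)[0] > parse_semver(old)[0]

print(is_breaking("2.10.0", "3.0.0"))  # True
```

The numeric-tuple trick is why the patterns described in the talk (a major bump followed by a flurry of patches, and so on) are easy to spot programmatically across a release history.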
I really like semantic versioning for the very specific reason that I like looking at how versions change over time, to see how the thing you're making is stable with respect to its consumers, stable internally, how you're thinking about it. Some of my favorite patterns: when I see two major updates in a row with no patches in between, you broke it, and then you realized that you really broke it. When I see one major update, 300 patches, and then five beta versions: you broke it, and then you wanted to make sure that you did not break the next one. It's going to break for someone, but you took the time to really tease out, like, this is going to break for only 1% of people, and they'll tell us, and it's okay. My favorite is when I see the pattern of one major version, five patches, another major version: that next major version is broken for 50% of people. And then there's the major plus the patch version; that's very specific. Like, the 3.0.1: 3.0 is broken, always. I don't care what you're making. If you semantic version there, you're doing it right; I can tell. And I can tell this over large products and large code bases. There are a few big exceptions: React, I think, increments their major version every year-ish, which might also be semantic versioning, I don't know. But when we get these patterns for describing how things will change over time, we can make better judgments about how our industry is changing. The last bullet is what I already talked about. Okay. The next thing I really like to read with is public versus private APIs. When I wrote this, the idea I had in my head was good, shiny code versus the inner workings and plumbing of your code, the crusty undercurrent. And I like that we have these dichotomies, right?
I like that we have whole repos you can go into and see: here's the cool top part of it, here's the part where you should look. It's really nice, I made it just for you, it's versioned and everything. And here are the internals, where it's like: there's a recursive loop in here, don't care about that, don't care about that, it does what you want, I promise; there's some big math in here, no, you just install stuff, it's okay, I swear. Right? Prior to software engineering, I really didn't have this pattern of: here's our coherent external face, it's really solid; and here's our internal face, where it's like the inside of a star, everything's just churning around and it's messy, but it gets the job done. Really gets the job done. And I find that it creates really nice patterns when I want to look at my specific public entry point. Then, if ever I have some public functionality that I know doesn't work for me, I think, okay, I'm going to walk it up to its internal private functions, just a few steps, until I know how it's working well enough to change it. And that's a pattern I can use in most repos. Like, I'll be like, pip install isn't working. Okay, what's pip install actually doing? Private API, blah, blah, blah, keep going. And that's a really nice pattern that I can abstract across lots of code that I'm reading. Oh yeah, another favorite way I have to read code is infrastructure as code. This one's a recent one for me. And this is different, because the difference between infrastructure as code and, like, the other thing is: my coworker comes to me and they're like, okay, I set up the database. And I'm like, all right, cool, nice database.
Three months later, maybe that coworker leaves or is on vacation, and I need to set up another one of those databases. And I'm like, right, I go to the UI and I'm like, hey, Heroku, give me a database. And Heroku's like, I don't just give you a database, what are you talking about? In this moment, I realize that my coworker telling me "I clicked some buttons to create a database" is probably a bit more nuanced. I don't know about the rest of you, but that's the problem infrastructure as code really, really solves for me. It's the one thing that really helps you tell your coworkers how you're doing stuff, or just other people in your industry how you're doing stuff, in a way that's repeatable and auditable. As opposed to, like... I guess I could take a video of my coworker clicking buttons in the Heroku UI, but I prefer infrastructure as code, just personally. And it's really good for communicating state changes. I've never been able to actually diff any infrastructure that wasn't made with infrastructure as code, infrastructure stood up with visual UIs. I could say that you clicked a different button, but what that means is actually really hard to quantify. And so that's one of my favorite things about infrastructure as code. It's one of these things where there are times when you want to use it and times when you don't. But when you really, really need it, when you have, like, ten other people relying on infrastructure, it is a really cool boost in helping other people understand what it is you're doing. Oh yeah, um, at first this slide was "search," and then I realized that really, for me, it's grep, like grep specifically, in its many incarnations. When I'm looking at other people work, I'm like, yeah, you search the code that does X.
And the moment when I realize that they don't use grep regularly, I'm like, oh my gosh, look, let me tell you: you can search this code base for stuff. You can just put words in, and then you find them in your code base. You would think that being able to find information isn't a hard problem, but I think the companies that are really popular would love to tell you otherwise. Right. And so I spend a lot of time being like, you know what, I can find so much stuff. I'm on the infrastructure team at my current job, and so I spend a lot of time going through hundreds of repos, like, okay, I need to update all of these things, dozens and dozens of times. And this is a godsend. I don't know how I could do anything without the ability to search through code. A lot of my work nowadays is going through and finding one particular piece of information and finding out where it's repeated. And that's basically impossible without being able to go into a bunch of files and grep them really fast, and run some really complex regexes on what is, like, millions of lines of code. So I love this. Go to your command line, grep some stuff. It's not accessible right away, actually; you might have to read the documentation first. But once you read it, you'll be able to find stuff. Super useful for finding stuff. All right, next is my favorite ways to write stuff. Like, I don't even know what this section is about. Let me see. I write lots of things. I write code. I think this is about documentation. It is, right, this one is about documentation. My favorite way to write documentation is automated documentation, because automated documentation tricks me into thinking that I'm not writing documentation. Because I sit around and tell myself, like, oh my gosh, I'm not a good writer, right after I wrote three paragraphs about the code I just wrote. And I'm like, no, surely that's not documentation.
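A minimal sketch of how that accidental documentation gets harvested: prose written as a docstring is exactly what doc generators (pydoc, Sphinx, and friends) pull out and render. The function name here is invented for illustration.

```python
import inspect

def denormalize(rows):
    """Flatten nested rows into a single list.

    The casual paragraphs you write next to your code are what
    documentation tooling picks up and turns into rendered docs,
    whether or not you thought of them as "documentation."
    """
    return [item for row in rows for item in row]

# The same text is available to help() and to any doc generator.
print(inspect.getdoc(denormalize).splitlines()[0])
# Flatten nested rows into a single list.
```

`inspect.getdoc` also cleans up the indentation, which is part of why the generated output looks more polished than what you remember typing.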
And then the auto-documenter pulls that into markdown and someone's reading it, and I'm like, oh, I did write documentation. Amazing. Thank you. Thank you. I call it "tricking programmers into writing more, as a service." I really like it. I really like it. And probably a minor bit here is that it removes just a little tiny bit of copy pasting from your life, which is, you know, kind of nice. I like to do one less action in order to get my job done. But at the same time, you know what I really like doing? Copy pasting. I love copy pasting. I wrote "from the internet" here, but you all know what I mean; I mean from Stack Overflow, specifically. So I like this new pattern of calling package managers "copy pasting from my community." I install your package: I'm copy pasting from you. Thank you, Flask, for letting me copy paste your whole repo. Repeatedly. It's really good. Really good. Copy pasting my coworkers, all the time. We use this in the form of skeletons, skeleton projects, to do copy pasting. There are just lots of things where we're like, the skeleton's denormalization process and blah, blah, blah, bulk updates; really, it's just complex words for the patterns of copy pasting. My favorite thing, actually, is copy pasting from myself. I'll do a thing while I write code where I know it won't work at the current point in time for what I'm working on, but I'm like, it might work in the future. And so at any point in time, I'm sitting on all of these different bits of code. And I tell my coworkers, like, yeah, if ever you need functional tests for this part of the repo that we never tested before, I have some functional tests in my back pocket; copy paste them into this repo. And they're like, where did you get those from? And I was like, I don't know, I was just working on stuff. And I was like, you know what?
In the future, I would want to copy paste some functional tests, you know? Same with copy pasting database migrations, because I have so many copy-pasted database migrations, you don't even know. The hard part about leaving one job and going to the next job is that you lose your database migrations. Copy pasting from yourself even sees hard times out there. My next favorite way to write things is with autocomplete. Autocomplete! I love autocomplete. Autocomplete is like: hey, can I get something with a D? Like, you know what I mean, the thing that stores information? Yes, D, database. Thank you, autocomplete, for completing this very basic concept. I think this is good for my memory. I can just remember the concept of the thing, where I'm like, you know, a... a... asynchronicity, and it just finishes it for me. Thank you. Thank you. It's like an abstraction between "here's the knowledge in my head" and "here's the concept that everyone else understands," and autocomplete jumps me between those two things. Sometimes it makes me faster. Sometimes I spend time yelling at the autocomplete to read my mind better. This is how it is. But overall, I really, really like it. Super cool. It's my favorite part of text editors. There's a slide I didn't make here, because I didn't know what to put on it, which was just, like: you know what I really like? Just, like, text editors. I love to edit text. I just edit text all day. I read text in text editors. I edit it. It's so good. I love them. Write things. All right, cool. Next: my favorite tools to build with. I love these things. I mean, I love all the rest of these things, but these things I also love. Postgres. I love freaking Postgres so much. Thanks, everyone who's ever worked on Postgres. Like, I love y'all.
Postgres is, like... often I'll be in conversation and I'm like, yeah, can you just give me, like, "a postgres," as a replacement phrase for "can you spin up a database?" That's how ubiquitous y'all are right now. So thanks for that. And then also, down the line, I'm like, hey, I want to make this data into, like, graphs, like graph theory or something; or, like, now it's a document store. I don't know why, but I just really want to put documents in my Postgres database. And y'all just do that. That's amazing. I like it. I like that we have this general-purpose database here. And also, the fact that so many people use it means that I've actually never run into a Postgres problem that someone else didn't already have. Which kind of tells you how much I use Postgres, which is not a ton, but it's super useful. Any tool that gets to the point where so many people use it that you can always get help is in a really amazing spot. We have computers doing computer things here. "I understand. I can help with this computer process." Okay, right. Computers are also a tool that I love, just in general. All of this talk is about computers, I realize. Round of applause for computers. Yes, I love those. I love those. Thank you. Thank you. My next favorite things are package managers. I love these automated copy pasters. They're so good. They're so good. I love package managers when they get to the point where I can just... okay, so my story is: I'd used Linux for a few years. I'd never used Mac. I switched to Mac. And I was like, I don't know, like, Mac package manager? And then Homebrew was like, "here I am, the missing package manager." And I'm like, I appreciate that branding. Though since I found it, it's not missing. But thank you. And I'm like, okay, I need to install my app. Right. Okay. I know I want a Postgres. And I have Homebrew here. Homebrew is a package manager.
And I open up a shell command line and I'm like: brew install postgres. And then it works, and I had a whole freaking database on my computer. I just, like, guessed some words online, and I was like, okay, maybe possibly this series of commands will give me an entire database. And then it did. I love that. I love that so much. I love that we have these tiny short series of words, maybe three or four words, that install the entire world on your computer. Like, I can go into my terminal and get some tiny subset of Google's entire infrastructure, just with this tiny package manager. Sometimes it takes a long time. Sometimes your computer's fans will heat up a little bit; that really tells you you're doing some big stuff. You ever machine learned? That's your computer doing work. My other favorite thing, and this is more on the UX side rather than the scale side, is a concept I've seen called npx business cards. You should look that up later; I'm speaking to you right now, but later, at home, look this up. I love npx business cards. It's this thing where... okay, so there are package managers, all of them; they download some code and execute it, right? Well, there are a few package managers now, npm does it, Yarn does it, I think Butler was working on a variant of this, where they will download some code and execute a specific script. That's npx. And then people saw that and they were like, you know what, I can have a CLI business card. Which is really cool. I love the idea that, you know, here's a little bit of my code, and it's just information about me. That's websites also, for the record. But if you think about websites, but for terminals, that's what you have. And that's why I love package managers: because they enable stuff like that. I love web micro frameworks. My story about web micro frameworks is cool guy Bob from the bike store. So I go to Bob from the bike store.
And I'm like, hey, I just want a tire. He's like, what kind of tire? And I'm like, just give me a tire. And he's like, so, like, a tire for a bike? And I'm like, I guess it could be a bike-size tire; it could be, like, a tractor tire. I just want a tire. He's like, okay, here's a general-purpose tire for anything you might want to do with it. Hang it from a tree, I don't know; maybe you're going to make it into a chair, I don't know what you're going to do with it. Here's a tire, right? I like that. Bob from the bike store is the tiny micro framework. I have a whole big thing here about the concept of micro frameworks and I don't actually mention any, so I guess I should just be like: okay, Flask and Sinatra. I go up to the package manager and I'm like, hey, can you give me, like, just a web server? And they're like, what kind of web server? And I'm like, just the teeniest; I just need a web server so I can go back to doing my life. And that's web micro frameworks for you. I like it. I like it for when I have big ideas that I think don't fit into ideas that anyone else has ever had. They probably do, mind you. But when I think I'm being super, super unique, micro frameworks are there to give me just a tire. By contrast, we have macro frameworks. Macro frameworks don't ever describe themselves like this; they just call themselves frameworks. Okay, friends. So web macro frameworks are like Alice the contractor, the, like, plumbing contractor, where you're like: hi, Alice, my bathroom toilet flooded the bottom floor, it flooded the whole bottom floor, so half my house is unlivable. Can you work on that? And Alice is like, so I need to fix, like, half of your entire house. That's basically your whole website, right? Like, I go to pip and I'm like: hey, Django, can you give me, like, most of a website?
I'll just add some bits at the end. Can you just give me, like, 90% of a website? And then I'll come in later and fix it up. And the contractor is like, I am skilled to do that. It's ridiculous that you need so much, but here you go. That's macro frameworks. They're really for when you know what you want, and it fits into a really tight idea of things that other people also want, which is cool. And so they can get you as close to your goal as you can really manage. They can really help you with a lot of stuff. People who work on macro frameworks have spent a lot of time looking at the decisions people make, because you have to make decisions for potentially millions of people. And everyone needs their flooded basement fixed, but in a slightly different way. And that's a lot. Thank you, macro frameworks, for doing this for me. All right, my other favorite thing to build with is error context. This slide went through many forms: at first it was tracebacks, then it was error codes, then it was tracebacks and error codes. Now it's error context. And it's about how much you're describing the thing that just went wrong. I find that when I'm working at, say, the nginx layer — the web server, that layer — it's just: 500. There was an error. And I'm like, thanks, nginx. Very informative. And then you get to the programming-language layer, and they start telling you actually useful things, like: there was an error here, your data was this, the TV was on, you probably buffer overflowed. Actually, that's a bad example, because I've never seen a programming language say that — it's more like, I don't remember it, I just die. Advice for programming language designers. But other errors describe really, really useful state that you can use to fix things. And I like to think of good error context as like a good therapist.
When you need a good therapist, you know it. You just built something, you're about to launch it in two or three days, and you get this flood of 500s — your Slack channel is just lit up with all of these alerts. You're like, oh no, what is going on? You go into the logs, and the error context is like, you didn't copy-paste the one thing from here, and it's just gone. And you're like, oh, thanks. I'm calm now. Thank you, error context. This is very, very calming. Because good error context is like a good therapist. Oh yeah, okay. I love to build with containers. I only started building with them recently. I find it amazing that I can have my computer and also, like, hundreds of other people's computers inside my computer. Well, okay, not hundreds — more like five. Five other computers inside my computer is really cool. I love this idea that I can just repeat other people's setups a whole bunch of times. I don't know how we got to the point of having this amazing technology, but, like, thank you. I'm also really amazed that — I know about the whole idea that you can't run a Docker in a Docker. I get it. But really, that's black-hole territory. You watch yourselves. But it's super cool. I love building with containers. I want to build everything in containers now. It's amazing. All right — see, this is why I was confused about the documentation bit, because this bit is about talking to computers. I've spent a long time talking to computers. I have lots of really flowy words for this. Computer — what's the word when you talk to snakes? Parseltongue. Computer parseltongue, computer whisperer, I don't know — all of these complex words I have. Really, I just don't call myself a software engineer, because that seems reductive. I'm like: I tell the computers my thoughts and my will, and they nail it, like, 80% of the time. It's super good.
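Her "good error context is a good therapist" point can be made concrete. Here's a minimal Python sketch — hypothetical function and key names, not from the talk — contrasting a bare nginx-style 500 with an error that carries its context via exception chaining:

```python
# A bare, nginx-style error: all you learn is that something broke.
def unhelpful(config):
    try:
        return config["copy_paste_me"]
    except KeyError:
        raise RuntimeError("500: there was an error")  # thanks, very informative

# Error context as therapist: say what was missing, what we had, how to fix it.
def helpful(config):
    try:
        return config["copy_paste_me"]
    except KeyError as err:
        raise RuntimeError(
            f"missing key 'copy_paste_me'; you probably forgot to copy-paste "
            f"it into your config (keys present: {sorted(config)})"
        ) from err  # chaining keeps the original traceback attached

try:
    helpful({"other_key": 1})
except RuntimeError as e:
    assert "copy_paste_me" in str(e)
    assert isinstance(e.__cause__, KeyError)  # the original error rides along
```

The `from err` is the therapist part: the chained traceback shows both the human-readable explanation and the underlying `KeyError` that caused it.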
And here are some of my favorite ways to do that. So: I love talking to computers in Python. For the record, I have to say that Python is my literal mom. When I was a tiny baby of, like, one trans year old, Python was like, how about we save you by giving you ways to talk to people. Super good. I love you, Python. Thank you, mom. I also like Python because it's very flexible, which is what the next point is about: all my best terrible programming crimes come from Python. My best programming where people look at it and they're like, why would you? Why would you do that? Why does the programming language let you do that? Why do you have three layers of abstract base classes? Why would you need that? You were just composing a string. Python is definitely the programming language for that. And usually it's very forgiving and gives you nice errors. Like, the infinite loops I got myself into with Python — usually it tells you, you did this thing a million times, and I'm like, oh, sorry. I did tell you to do that, Python. Sorry. And I also like Python because it's really popular. My favorite thing that I don't mention a whole lot is that this one time I was hanging out in a Slack channel with a bunch of, like, Python maintainers or something, and one of them asked a question on Stack Overflow about Python. And I was like, but you're the maintainer! But the idea is that so many people use this language that it's actually very useful to tap into the community and see how everyone feels and how everyone's working. There's so much stuff going on with Python right now that you can usually find someone doing something remotely close to what you're doing. It's super useful. I love it. Next, I like Ruby on Rails. I didn't know whether to put 'on Rails' on here, because I have actually used a lot of Ruby not on Rails. But really, what I recommend is the on-Rails version of Ruby. It's pretty good.
It looks really nice. I spent a lot of my time having, like, screenshots of my code written in Ruby — I would write something in Python or Go and then rewrite it in Ruby just to take a screenshot, because it looks really, really, really nice. I love it. My other favorite thing about Ruby is all of the Ruby on Rails automation of things that just work. Ruby maintainers out there: keep the Ruby magic. I love it. It sells me on your programming language. It looks super cool to read and it works really well. And also, writing in Ruby is the fastest way I've ever built a fully functional website. Rails, the Ruby macro framework, is the most batteries-included thing I've ever used, and I love it so much. I love JavaScript. Specifically, I love JavaScript for learning asynchronous execution. I didn't know anything about the concept of asynchronous execution before I started using JavaScript, and JavaScript beat me over the head with it. Repeatedly. It was like, you can't have synchronous UI. You just can't. I was like, I don't know what that means. And JavaScript made me learn. It's been getting better, actually, over the years — ES5, ES6, all of those have been getting better at helping people understand asynchronous execution. And so if you're really like, hey, how do I make two threads of work execute at the same time? — in my opinion, JavaScript is a really great place to learn that. Also, it's really good if you want to learn how code can execute in a different context than you're used to. When I was writing in Python, I wrote a lot of scripts that were like: it's a CLI script, it's single-threaded, you execute it on the command line, it does a thing, and then it talks to you. It's very in and out. It's very basic. When I wrote in JavaScript, I had to really understand how my code was executing.
Like, the concept of running my JS code on the Node V8 runtime on the command line — I had to wrap my head around that, and around the fact that that was actually identical, or kind of identical, to executing my code in a browser. JavaScript and its execution contexts make you think about code and code execution differently, which is super valuable. On top of that, there's TypeScript. My opinion of TypeScript is that TypeScript is the "you don't know JavaScript very well" language. I went into TypeScript and I was like, yeah, I'm going to use a transpiler for JavaScript, because I know JavaScript now. And TypeScript was like, no, you do not. You do not know what any of these types are. You don't know how things are converted between them. You don't know how the functions are called. You don't know anything. TypeScript taught me all of those things. I love it. This probably extends to any language that's not typed by default where you add a types transpiler on top of it: it will teach you all of these things about the language that you didn't think about when you were working in the untyped version. And I think that's super valuable. Also, TypeScript specifically happens to be my favorite way to learn about type systems. It also happens to be the first way I learned about type systems, so I'm biased because it was my first, but you should definitely tell me about other cool ways to learn about type systems — type systems, not, like, typing words like we do every day. I love Rust. Rust. My gosh, Rust. Thank you, friends, for building the amazing borrow checker and all the memory-management stuff. I go to war against it quite frequently. I gain, like, eight levels just fighting the borrow checker. I apologize to any Rust developers here, but I would recommend learning Rust when you don't have a deadline. You don't have anything you need to get done.
You just want to learn a whole lot, particularly about memory management — that's when Rust is really good for you. You go in with the borrow checker, you go, like, ten rounds, and you come back being like, I understand memory exceptions. Rust is really good for that. And when I came into it, I already knew a bunch of programming, and so I was like, oh no, am I going to enter a community where everyone feels like they know everything? But Rust had a really good community where everyone was like, we all don't know how the borrow checker works. I've seen multiple talks that are just variants of fighting against the borrow checker, which are the Rust version of making you understand memory management. It's cool that it has this pervasive learning design in its community. I like Golang. I like Go because it will make you never show your clients an exception again. Mind you, that's fake — that is not true. You will probably still show an exception, but you will dramatically, probably dramatically, decrease the number of exceptions you show your clients. Also, Go very specifically has the best experience with building binaries that I've ever seen. In every other language I was like, I need to build a binary, this is annoying, I don't want to understand how binaries work, I just want one. And Go is just like, okay, you do the regular build process and here's a binary that everyone can run. And I was like — and that's it? That was it. It was really cool. I loved it. Thank you, Golang, my favorite for the amazing binaries. And now we have the conclusion. In conclusion, some of my favorite things are all of you, for listening to me. Thank you for listening to my talk. Definitely come tell me about your favorite things, particularly if I didn't mention them.
And also, if I did mention them, tell your friends about your favorite things. And have a really great time at the rest of the conference. We're going to take a ten-minute break. If you're speaking in the next session, especially if you haven't done an AV check yet, please come up here and do it. See you in ten minutes. We have seats reserved for you. Okay. Awesome. And so our first speaker for this session is Jennifer Wang, who is dressed awesomely, straight from Harry Potter, and her talk is IMUs FTW!!: building IMU-based gesture recognition. — Excited to be here, and I am so excited to talk to you about IMUs. I will tell you what they are. The backstory behind this — like, why am I dressed up this way? So for Halloween 2018, a few months ago, I was like, I need a Halloween costume. I'm really excited. And you know what I'm really excited about? Magic — because I am a programmer and I can do magic. And Harry Potter is also magic. So maybe I can do a programming thing and a magic thing and a Harry Potter thing. Specifically, when I was a kid, there was this Harry Potter game that I really liked. You would go around in this game with a magic wand, and you would wave your wand at things. You could do these various gestures, and you could make things float, you could stun things, you could unlock things. It was awesome. And I was like, I would really like to have this in real life. So I did it, and it was awesome. And yeah, I really enjoyed it. Just to give a little bit of context, there were a few logistical constraints on this project, because I thought about doing it around early October: I had about two to three weekends, and I wanted to spend less than $200 on it. Those were the constraints I was under. So I was like, okay, well, I can work with this.
So let me go through how I built my gesture recognition wand. A gesture recognition wand has three different components: you have your algorithms, which are software; you have your computer running your algorithms; and you have your sensor, which is hardware. It's these three pieces combined. So first, let's talk about the software aspect — the algorithms. Cool. How does your standard gesture recognition algorithm work? What happens is you get time-series data, so the x-axis on this graph is basically time. And you just have a while loop that runs forever, and it's just eating this data, and you're like, hmm, I see this slice of data — is there a gesture here? Hmm, is there? No, there isn't. Hmm, is there a gesture here? Nope, no gesture. And you just keep going, and then you get here, and it's really exciting: there's a thing here, and you run the same algorithm and you're like, yes, there is a gesture! Awesome. And then, you know, it's a while loop, so you keep going forever. So that's your really high-level algorithm. Awesome — we know what our algorithm is going to be. Next, let's talk a little bit about sensors. There are a lot of different sensors that you can use for a project like this. What I decided to use was an IMU, which stands for Inertial Measurement Unit. It's basically an orientation sensor. It has an accelerometer, a gyroscope, and a magnetometer. And all you really have to know is that it tracks movement — like this rabbit here being tracked via IMU. It can detect shakiness, it can detect your direction of north, it can detect how you're twisting things. It can detect a lot of awesome things. So I'm like, okay, IMUs, this seems awesome. How do I find an IMU? Because I'm doing a hardware project. Fortunately, there's this awesome website, Adafruit. I went in and I typed in IMU, and it came up: here is an IMU you can use.
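The while loop she describes — keep eating time-series data, ask "is there a gesture in this slice?", repeat forever — can be sketched roughly like this. This is a hypothetical energy-threshold detector on made-up samples, not her actual algorithm:

```python
from collections import deque

WINDOW = 8        # samples per slice we inspect
THRESHOLD = 4.0   # hypothetical: total motion energy that counts as "a gesture"

def is_gesture(window):
    """Ask the question from the talk: is there a gesture in this slice?"""
    energy = sum(abs(x) for x in window)
    return energy > THRESHOLD

def detect(stream):
    """Eat time-series samples, yielding a window whenever it looks gesture-y."""
    window = deque(maxlen=WINDOW)   # a sliding slice of the time series
    for sample in stream:           # in real life: while True over live IMU readings
        window.append(sample)
        if len(window) == WINDOW and is_gesture(window):
            yield list(window)      # a gesture! (then keep going — it's a while loop)

# Mostly-quiet signal with one burst of motion in the middle:
samples = [0.1] * 20 + [2.0, -2.0, 2.0, -2.0] + [0.1] * 20
hits = list(detect(samples))
assert hits, "the burst should register as a gesture"
```

The real classifier on IMU data is of course fancier, but the shape is the same: a window slides along the signal, and some predicate runs on every slice forever.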
And I'm like, awesome, this is great. There are libraries for it, the documentation is great. So now I know what sensor I'm going to use. And finally: what computer am I going to use? What am I going to use to connect all of this and make it work? When you're doing these kinds of hardware electronics projects, there are two large categories of computers you can choose from. On one hand, you can use embedded Linux — this is your Raspberry Pi. And on the other hand, you can use microcontrollers, which are kind of Arduino-like systems. And these boards are different. The main difference is that your Raspberry Pi loves Linux, and I love Linux. And specifically, why do I love Linux? Because on the Raspberry Pi I get my Python, I get my scikit-learn — my standard machine learning stuff — I get my TensorFlow. Whereas on microcontrollers, the support for machine learning is a little bit less mature. You're like, hmm, maybe I can use MicroPython; I've seen some hacks for getting TensorFlow to run on microcontrollers. But I have two to three weekends, and I do not have time to figure out how to do snake charming or parseltongue on a microcontroller. So I'm just going to go with a Raspberry Pi. Unfortunately, the standard Raspberry Pi is the size of two of my fists, and this is my wand — it's pretty small. And I'm like, I need a small Raspberry Pi. So I type into Google: small Raspberry Pi. And this Raspberry Pi comes up, and I'm like, awesome, I have a small Raspberry Pi. So cool. Now I know a rough algorithm, I have my small Raspberry Pi, and I have the IMU that I looked up on Adafruit. Now that I have these pieces, I need to put them together. For the first part, I just followed the tutorial and connected my IMU to my Raspberry Pi, because the documentation is great; the communication protocol is UART. And then I'm like, I need to attach this to a wand somehow. Actually, first, I need to have a wand. Do I make it by hand?
Actually, I just went to Amazon and searched for, like, costume wand. This came up, and I'm like, awesome, and I ordered a wand. So I took the electronics and glued them to the wand, and I was like, yes, I'm done — except batteries are not included. I'm like, oh, I need a battery. So I went and bought a battery and connected it to the Raspberry Pi. And then I was like, hmm, well, when I detect the gesture, something needs to happen. Do I want my lights to turn on? Do I want to build this giant fan thing to blow all my pillows around and have them float? I have two to three weekends, so I'm just going to play a sound effect. So I ended up buying a speaker — this is the speaker — and I just connected it over USB audio. Great, I finally have my thing, and this is what it looks like — this is a software engineer trying to do mechanical diagrams. So basically, I have this wand, and then I have the battery, and one of the most amazing things I realized after I ordered the battery was that the battery is the same diameter as the wand. So I just glued them together, and then I hold it, and it's great. And then on top of this I have the Raspberry Pi Zero, and then I have the IMU. And I'm like, I need to attach these together. You know what's good at tying things together? Hair ties. So I tied it together with hair ties, and it was great. And then finally you're like, okay, but my speaker is huge — what do I do with this? Well, fortunately, my sleeves are really big, so I can just thread it up my sleeve, and the kids won't notice — because all I have to do is impress the kids during Halloween. So this is a picture of the completed product. You can see a big picture at the left, and on the right — I was actually like, you know what's more awesome than normal hair ties? Clear hair ties. That's why you can see the nice Raspberry Pi and everything. Cool.
And I was able to have a successful Halloween, which was super fun, and I really enjoyed it. And, you know, in our remaining time, I'd love to talk a little bit more about how you can do this yourself — how you can get into hardware prototyping. For a long time, I really wanted to do hardware prototyping, but I couldn't get into it. And what I realized is that there are certain mindset shifts you have to make in order to jump from software to hardware. One thing I realized is that you need a lot more discipline, and you really need to work on your planning skills. When you're doing software, you're like, yes, hackathon, download all the things! When you're doing a hardware project, you can't be like, order all the things right now — except you can, but you're still going to have to wait for shipping. And that goes into my next point, which is something I didn't realize about hardware projects: you're going to be a lot faster if you just order all the things and then accept that you're only going to use half of them. This is kind of your R&D budget. In software, your R&D budget is time; in hardware, your R&D budget is also money. And it's just a lot more fun when there's no pressure, so once I made this mindset shift, it was a lot easier to get into it. But another thing that really helps with getting into hardware prototyping is having more reference projects and more documentation. So in the spirit of documentation, I actually have all the software for this project on my GitHub — software documentation! And on the hardware side of things, one of the things I dislike about hardware projects is that people are like, let's talk about the big pieces of it, and you'll figure out what the rest means by yourself.
And I was like, I don't like this. So I made this Amazon wish list that has all the cables I used, all the parts I used, even the super glue I used, so you can just go to this link. This link is also on the GitHub, so you don't have to copy it down. The GitHub also has a link to the slide deck. You can just order all the things, and it includes all the cables. Unfortunately, it doesn't include the tools, so you're going to need a soldering iron — it's not on that list, but we can talk about that some other time. And there are so many other things I am excited about that I want to talk to you about. So please find me afterward, or email or tweet me, if you would like to talk about any of these things, or other things, or tools, or basically all the things. Thank you so much. I really enjoyed being here, and I'm looking forward to an awesome !!Con West. — Thank you so much, Jennifer. And she's actually giving out "I love documentation" stickers like the one she talked about. So our next speaker is Alex Rasmussen, who is talking about EarthBound's almost-Turing-complete text system. It has awesome graphics. You know you want it. — You could hear me the entire time, couldn't you? Okay. Can everybody hear me okay? Wonderful. Well, good morning, everybody. I'm Alex, and today I'm going to tell you about EarthBound and its almost-Turing-complete text system. EarthBound is a video game that was released in the States in 1995 for the Super Nintendo, which was Nintendo's second big console generation for the home. EarthBound is a role-playing game, and it is one of the most beloved role-playing games of all time. I could talk for hours about how great this game is, but I'll have to cut myself short and say that, in a lot of ways, despite being charming and whimsical and just a wonderful game, EarthBound looks a lot like a traditional role-playing game of that era.
You control a protagonist whose name is Ness. Ness and his friends go on a quest through the world. They see interesting places and talk to interesting people — you see Ness up there in the upper right, talking to somebody. You also fight monsters — the combat system is down there — and you get money from fighting monsters, which you use to buy better weapons and armor to fight bigger monsters, and that's kind of the core mechanic of the game. So in a lot of ways EarthBound looks like a traditional role-playing game, but there's one part of its internals that's really strange, and that's how it handles text. You'll notice there's a lot of text in these screenshots; you interact with text a lot in this game. In a lot of these sorts of systems, the part that deals with text is a very isolated little thing — it just draws some text on the screen. But EarthBound's text system is different. EarthBound's text system actually forms part of an interpreter that's effectively embedded into the game itself, and this interpreter has registers, it has an instruction set, it has a call stack — remarkably sophisticated for a game that had to fit on, I want to say, an eight-megabyte ROM cartridge. What's even cooler is that people on the internet have been looking at the guts of EarthBound and studying this interpreter and how it works for almost 20 years now — the work on this started in the very early 2000s. So today I'm going to give you a whirlwind tour of how this little interpreter works.
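To make "a text system that is secretly an interpreter" concrete before the tour: here is a purely illustrative Python toy, with invented opcodes and encoding that are NOT EarthBound's, showing the general shape — plain text mixed with variable-length instructions, a register, a jump table picked by that register, and a call stack:

```python
# Toy model of a text system that is secretly an interpreter.
# Opcode names and layout are made up for illustration only; the real
# control codes are documented in the community's control code lexicon.

def run(program):
    out = []
    regs = {"work": 0, "arg": 0, "sec": 0}  # working / argumentary / secondary
    stack = []                              # return addresses -> a call stack
    pc = 0
    while pc < len(program):
        op = program[pc]
        if isinstance(op, str):             # plain text just gets "displayed"
            out.append(op)
            pc += 1
        elif op[0] == "SET_WORK":           # put a value in the working register
            regs["work"] = op[1]
            pc += 1
        elif op[0] == "SWITCH":             # multi-address jump: pick a pointer
            pc = op[1][regs["work"]]        # based on the working register
        elif op[0] == "CALL":               # jump, parse until END, come back
            stack.append(pc + 1)
            pc = op[1]
        elif op[0] == "END":                # return if there's a caller, else halt
            if not stack:
                break
            pc = stack.pop()
        else:
            raise ValueError(f"unknown op {op}")
    return "".join(out)

prog = [
    ("SET_WORK", 1),            # pretend an event flag said "talked before"
    ("SWITCH", [2, 4]),         # 0 -> first-meeting text, 1 -> repeat text
    "Hi, stranger.",
    ("END",),
    "Oh, hi again. ",
    ("CALL", 7),                # shared dialogue subroutine
    ("END",),
    "Nice weather, huh?",
    ("END",),
]
print(run(prog))                # -> Oh, hi again. Nice weather, huh?
```

Everything in `prog` — displayed text, a jump chosen by a register, a subroutine call that returns — mirrors the capabilities the talk describes, just with a made-up encoding.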
EarthBound's register set is very modest — it has three of them. There's a four-byte working register that's usually used for storing return values. There's a four-byte register the documentation calls the argumentary register, which is used to pass arguments — typically, later on, when you see instructions that take a hex byte as input, if you set that byte to zero, a lot of these instructions will just read whatever the contents of the argumentary register are. And there's a two-byte secondary register, mostly used for loops, for counters. There are actually two sets of these three registers — an active set and a storage set — and there are special instructions in the ISA that will swap the active and storage sets with one another. So let's talk about that ISA a little bit. I'm going to denote operations in EarthBound's instruction set in square brackets. These instructions consist of two parts. First, an op code — typically one byte, but it could be two — that basically describes what operator you're dealing with at any given time. And the rest of these little numbers are the operands of the operation. This is a variable-length instruction set, so different operators can have different numbers of arguments. So now let's do a whirlwind tour of a very small subset of these things. The most simple thing is how to control text, right? This is kind of what you would expect: you can make line breaks, you can stop parsing text so that you don't buffer overflow, you can halt the display of text either with or without a little prompt. These are the very, very basics that you would expect from a text system. Here's an example of what that looks like. You'll notice there are two lines here: the first line is terminated by a kind of stop that has a prompt — you can see the little arrow down there when it loops again — and the second one stops parsing and terminates. So that's kind of the most basic thing you would want to do with a
text box. You can also support pretty basic boolean variables. There's a collection of several hundred of these event flags; you can turn an event flag on, or you can turn it off, and then you can insert something into the text parsing routine that will jump to a particular location if a particular event flag is high. This is used for things like speaking to somebody multiple times: the first time the event flag is off, the second time it's on, stuff like that, so you can get different responses depending on how many times you've talked to them. It also supports branches. It supports jumping to a location and continuing to parse until you're told to stop with that stop operation, or jumping to a location, parsing until you stop, then jumping back to where you were and continuing to parse. So in this way, what you have is effectively a call stack, which is pretty cool. There's also a multi-address jump: you can have up to 255 of these pointers, and based on the value of the working register, you effectively pick a pointer in the list and jump to it. There are also more bizarre things. There's an instruction that will increase party member X's HP by Y%, which is useful if a monster hits you and you have to take damage, or if you heal yourself and gain life that way. You can also set a party member's stats to whatever they should be at a particular level, in the same instruction set. And you can display text graphics inside of the battle system — this code produces the smash icon, and that's literally all it does. Now, another thing this system controls is cutscenes. If you want to do a cutscene, you want the non-player characters to move around and do stuff, and the first thing you want to do is make sure the player can't move, so there's a specific operation for that. Then there are a couple of different tables that, due to time, I can't really go into, but those
tables will give you a lookup for what sprite you're talking about, and then there's also a table of movement patterns — like, go this many pixels to the right, then turn and go this many pixels down, and so forth — and you can assign movement patterns to sprites that way. There are also instructions if you want a sprite to just stay in one place and turn to face a direction. And keep in mind, this is all in the same thing that does text and everything else. So then there's the truly quirky part of the instruction set, right — the miscellaneous operations. At some point in the game you can buy a bicycle, which lets you run around the world at a slightly faster speed — and doesn't he just look happy riding that bicycle? There's a specific instruction to summon the bicycle, but you can't do it while wearing pajamas. So there's that. There's also a camera guy who occasionally descends from the sky to take your party's picture and gives you a slide show of those pictures during the end credits; you can summon the camera guy with an instruction, too. And you can teleport: there's a list of registered locations you can teleport to — insert that instruction and you teleport. So this is maybe a tenth of the instructions that we know about, and people are still studying the internals of this interpreter, trying to figure out what the rest of these things do. This link is what they call the control code lexicon — the list of all the control codes we know about; for some of them, it's just, we have no idea what the arguments are. So you might ask why people have been looking at this game for so long, trying to figure out how the inside of it works. The answer is that people have been using this game for a long time as the basis for making their own stuff. There's a website called PK Hack that will give you a bunch of patches — through some means, you can apply a patch to that EarthBound ROM to make it either
change aspects of the game or make it a completely different game. And if you're familiar with the game Undertale: apparently the first game by the guy who wrote Undertale was a mod of EarthBound that he wrote when he was, like, 13, which I think is pretty cool. Also, people have made things to make this a little bit more palatable. There's a language called CCScript that compiles down to these control codes, so you can basically drop things in — there's a whole bunch of little scriptlets people have built that you can copy-paste into your game. There's also a thing called CoilSnake that will let you edit just about everything in EarthBound, from the sprites to the maps, and it also has a CCScript dumper and a CCScript editor. Thank you very much for listening. This is my contact information if you want to get a hold of me — I'd love to talk about this stuff more. Thank you very much. — Thank you so much, Alex. That was amazing. And our next speaker is Breanne Boland: /etc/services and ports and people. Which is really fun, because — did you know that ports have people, and that /etc/services has them too? Now you do. She also has a really awesome background; it's like glove speed dating, and I kind of wonder where all the gloves came from. — Hi, !!Con. I'm going to talk to you about /etc/services and how it is full of ports, but also people. I'm an SRE, but even when I was a full-stack engineer I often ended up in Linux guts and networking, and that was how I came to find that /etc/services has a lot of people in it. I was troubleshooting something with a coworker, and I asked him how he knew so easily that a certain port was used by a certain service. He said /etc/services, so I dug it up and scrolled through the 13,000-plus lines that comprise it, and asked him if he knew what all of these names and email addresses were about. He did not. So I finally got a chance to dig into it. For the next ten-odd minutes, we're going to talk about what /etc/services is, where it comes from, and who controls
it, who these people are that are listed in this file (I wrote to a bunch of them), and then, maybe most importantly, how you can achieve this immortality yourself. To start, we'll go over some of the basics. Any Windows or Unix-based OS has a version of this file: the Windows version is at C:\Windows\System32\drivers\etc\services, and the Unix version I will let you guess. It is overseen by IANA, the Internet Assigned Numbers Authority, and it's been influenced by some work by the IETF across the years. Here's a little snippet of it. The most important thing about it: it is human-readable and machine-usable, and each entry is set up like this. You have a service name, you have the port and protocol pairing, you have a little commented-out alias, and then, the part that really struck me, you have this commented line that has a name, an email address and, sometimes, most wonderfully, a date. Up until August 2011, when you reserved a port you would automatically get both TCP and UDP for that port; after the shift, you now only request the protocols that you need, and the rest are merely reserved but not assigned, just kept for a rainy day in case port real estate dwindles drastically. If you go through the first 500-odd lines of /etc/services, there are a lot of familiar faces. SSH is assigned port 22, but it also has an author, whose name is Tatu Ylönen; his bio includes a lot of pretty normal information for someone who's listed this far up in this file, he designed the SSH protocol and he also authored several articles. Are you familiar with Chipper, at port 17219? I will condense a giant internet rabbit hole for you (I wrote about it on my blog; it was wonderful and it's very long): it was one of two competing e-purse schemes in the Netherlands across the 90s and into the aughts. It did not win (the one called Chipknip did); it was retired in 2015, but it's still in /etc/services just because someone claimed it and it hasn't been unclaimed. Another entry is a professional multi-program transport stream software multiplexer; the internet has some great diagrams to
explain that further to you, and then there's this one, which only 90s kids will understand. So let's dig into ports a little bit more. Ports in their entirety are divided into three ranges, which cuts up the real estate allowed by 0 to 65535, the numbers allowed by a 16-bit number. 0 to 1023 are the system ports; to run a service on one of those, it has to be root, in the interest of servers being able to trust each other, occasionally, sometimes. 1024 to 49151 are user ports, or registered ports, which is a lot of what we're talking about here today. And then 49152 on up are private or dynamic ports, and they are available for your ephemeral needs. /etc/services gets used by these C library routines; it's generally anything that makes use of it, which makes these two commands synonymous: you can give it the port specifically, just, yeah, port 25, man, or you can tell it SMTP, and it refers to /etc/services and it's like, oh yeah, 25, you got it. But the one I suspect people in here have more likely used is netstat; if you provide the flags to give you service names, that's where those names get pulled from. But who are some of the people in this file? I wrote to 288 of them to find out, only about half of which immediately bounced; these addresses are from a while ago. The first response I got, delightfully, was for the service that shares initials with me: Big Brother, at port 1984, described by its author Sean MacGuire as the first web-based systems monitor. He reserved a port in January 1999; after researching IANA's role in it, the process took about 12 days, and he described it as totally painless. Okay, so I assembled my list of people to write to semi-intuitively; it took about 10 days, and I kind of forgot the beginning by the time I got to the end, which is how I got this response from the only person on earth who really has the right to write it: I wrote to Tim Berners-Lee. I am pleased to say he was impeccably polite in his short response; he recommended his book Weaving the Web for more background, and if you like reading about design decisions at an enormous
scale, it is riveting and I really recommend it. A thing I really wanted to know about was whether getting into this file was exciting, like, did you feel legitimate if you did it? Christian Trajox of Digivote, at port 3223, described it, and I also wanted to know if people had fun with it. Barney Wolff of LUPA worked a few different layers of meaning into his assignment; I quote: I picked 1212 because the telephone info number was area code 555-1212, and LUPA, an acronym for Look Up Phone Access, was a pun on my last name. I don't know if my bosses at AT&T or anyone at IANA ever noticed that. I heard from several other people; I thought I'd declare victory with like three responses, and I got closer to 20. People are lovely, and the blog version of this talk includes a lot more commentary than I can fit in here; I hope you check it out. Another thing I wanted to know about was how people knew what to do, whether, if you do this kind of work, you're like, oh, of course, obviously, /etc/services. The short version, as ever: I suggest reading the RFC, but mostly people seem to have just had a general awareness, like one quote: I was the network guy at the company, so you just knew. Interestingly, this is why people restrict themselves to ports 80 and 443: a lot of enterprise security software just blocks all those lesser-known ports, and so if you actually want to get adopted you need to get through. So yeah, RFC 6335: this has most of what you would need to request your own port. The process is less common now, but it is rigorously maintained, down to an automated page that just shows stats on every month's worth of new assignments. As for where your version of it comes from: on most Unixes, the version of /etc/services that you see is a snapshot taken from IANA's official version at the time the OS was released. Incidentally, I found that Macs are a good 10-plus years behind, so if you really want to be on top of it, go to IANA; it's really interesting. But yes, to the most important part: how do you get in there
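The file format and the three port ranges described above can be sketched in a few lines of Python. This is just my illustration: the sample entries below are typical of the file's layout, not copied from any real snapshot, and the helper names are made up.

```python
# Illustrative /etc/services-style entries (format: name, port/protocol,
# optional aliases, then a comment that may carry a person's name).
SAMPLE = """\
ssh             22/tcp              # The Secure Shell (SSH) Protocol
smtp            25/tcp     mail     # Simple Mail Transfer
http            80/tcp     www      # World Wide Web HTTP
"""

def parse_services(text):
    """Map service name -> (port, protocol), skipping comments and blanks."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line:
            continue
        fields = line.split()
        port, proto = fields[1].split("/")
        table[fields[0]] = (int(port), proto)
    return table

def port_range(port):
    """Classify a port into the three IANA ranges the talk describes."""
    if 0 <= port <= 1023:
        return "system"
    if 1024 <= port <= 49151:
        return "user/registered"
    if 49152 <= port <= 65535:
        return "dynamic/private"
    raise ValueError("not a 16-bit port number")

services = parse_services(SAMPLE)
assert services["ssh"] == (22, "tcp")
assert port_range(22) == "system"
assert port_range(1984) == "user/registered"   # Big Brother's port
assert port_range(50000) == "dynamic/private"
```

This is roughly what the C library's getservbyname-style lookups do for you: turn a name like smtp into a port and protocol by consulting the file.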
there are more than 400 ports that are still unassigned. This is your visually not terribly interesting but very effective gateway to get into /etc/services: you just need a service name, the transport protocol of your choice, your contact information and a description. You could have your own port in a couple of weeks, and then you could be hiding inside of your computer many years from now, waiting to startle nosy nerds like me. And people are absolutely doing this; there have been a gob of port assignments already this year. So if you are interested in this, go for it, and if not yet, I urge you to read the RFC just as a general practice, because it's usually really interesting and these things are so lovingly tended. And then, specifically here: it is not too late to be immortal in /etc/services, and I beseech you, be like, hi, I'm immortalized, and we know why. Our final speaker for this session is Michael Albaugh, who has an amazing talk title: it is wheels within wheels, or possibly wheels without wheels. How many people does it take to get people set up on AV? Many. Okay, what did that mean? A little bit of explanation: that's a one-digit slice of Babbage's difference engine number two, and the code that corresponds to it. So I'm Michael Albaugh, and I'm here to talk about emulation, in the sense of a computer emulating some other, usually computer, system. And I'm about the same age as the stored program electronic digital computer, yeah. So I spent a lot of time at the Computer History Museum in Mountain View, mostly as a volunteer, not an artifact. While I was doing that, one day someone asked me: could the analytical engine emulate the difference engine? Now, that wasn't completely out of the blue, because I'd been demonstrating the difference engine, and I'd written an analytical engine emulator for the IBM 1401 the museum has, to celebrate Ada Byron, Ada Lovelace, whatever (read some Regency novels, everyone has five different names), her 200th birthday. So these are very different machines: the difference
engine there is basically a stack of adding machines hooked to a typesetter, and its job was to produce mathematical tables, doing successive values of seventh-order polynomials that approximated whatever function you wanted. The analytical engine was much more complex, more ambitious, and in many ways a precursor to the modern digital computer, although supposedly Konrad Zuse, who actually built a mechanical computer in 1936, had never heard of Babbage. So it was in the hope of running this documented computer program by Ada that I decided to use the 1401 emulator, because I had it laying around. And going a little further: I emulated the difference engine on the analytical engine on an IBM 1401 on an old Mac laptop on a slightly newer Mac laptop, because I had them laying around. Which led me to ask: why do I have that laying around? Because I am a hoarder, including of software, and because software compresses a lot better than motorcycles. And I've been doing emulation for quite a while: for my first relatively large program, after an intro to computer programming class, I decided to emulate the IBM 1401 on an IBM 1130. The 1401 is an interesting machine: it's the third computer I ever met, and it's the elder brother of the first computer I ever met, and it is almost, but not quite, totally unlike any computer you can think of today. It's also interesting because it was cheap, reliable, and had great print quality. By the way, cheap: $2,500 a month rental for the base unit you see there; the totally tricked-out ones we have at the Computer History Museum were probably more like $16,000 a month rental. And to relate those prices to something: you could buy a single-family home in Sunnyvale for about $25,000. Anyway, the 1130 was my first binary, fixed-word-length machine, and having met both of those in close proximity gave me a real appreciation for how different computers can be from each other; there's a lot less heterogeneity among computers these days. Anyway, so that emulation project was never
finished, and I lost that deck long ago, but I've done a lot of other emulators since, so that stack was actually more of a heap, and I'm going to visit three of these in this talk, starting toward the bottom. So one reason we emulate: it's a business case, it's purely business. You just got a new computer, you've got some old software you're depending on, oh my god, what am I going to do? Well, nowadays you just keep the old computer, but back in the day, at $16,000 a month, you didn't do that. Apple did something similar when they moved from PowerPC to Intel: they let you briefly run PowerPC programs on Intel Macs, and a lot of other people did that long before. For instance, when IBM introduced the 360, they said, oh well, we'll let you, for a fee, emulate your 1401 code on it, and that was enough to keep people who might have considered going to some other vendor, which we will not mention, right. And this can be carried too far: I got a panicky phone call in 2000 from a guy, because the latest release of IBM's OS for the 360 successor, a 4381, had dropped IBM 1401 emulation, 28 years after the last 1401. Okay, so there are other reasons to emulate. I assume at least some of you have run a game emulator on your computer or phone or whatever. The other thing is to get insight into old computers and to study their designs, to understand what their users were doing and what their designers were thinking of; a friend of mine called this computational necromancy. And the same for you: pay no attention to the orange slide, it is a rabbit hole too far. Okay, so even if the older machine exists, we use an emulator to prepare stuff for running on it. So instead of chilling with two thirds of the running IBM 1401s in the world, I can code in the comfort of my own basement, using a keyboard that has a delete key. And that's time travel in comfort, but we can also travel forward, to emulate a system that doesn't exist yet, or will never exist, right. So Lynn Conway and Carver Mead wrote this book, Intro
to VLSI Systems, and inspired a lot of people in the world, including a bunch of us at Atari, to design several chips, and we dreamed up a 32-bit RISC processor called the Atari Simplified Architecture Processor. That one got us to seek professional help in building it. So I wrote an ASAP emulator to evaluate design tradeoffs and retarget GCC, and Jim Coker, the logic designer in Mississippi, also wrote a different emulator, and we traded test cases. Some of the discrepancies were bugs in either the test cases or an emulator; some of them were real ambiguities in the spec. And it helps, you know: it takes time and money to spin a chip, and if you have to ask how much, you can't afford it. We did have a few mistakes in the first silicon, but surprisingly few, and I was able to lie to GCC to get around them, and it worked out really well. To give you an example of how important that can be: when I was trying to get Ada's program running on my emulation, I had some issues, and I asked Tim Robinson, our local expert, about it. He says, oh yeah, Ada ran across that, and pointed me at some documentation about it. And it occurred to me that if they'd had an emulator, she could have shown Babbage what she felt was wrong and how to fix it, and instead of just blowing her off they might have fixed it. Unfortunately there's this chicken-and-egg situation: it's a lot easier to build a computer when you already have one, right. So, just to shout out to some of the people: Sydney Padua was a big source of information to me (everybody should go buy her Thrilling Adventures of Lovelace and Babbage), and Tim Robinson, a master of mechanical computing, was our expert. And that's it, thanks everyone. And that's the first round of speakers done! Yeah, so we'll be having lunch right now; there are various options for dietary restrictions, so just check those out, and we'll be back here at about 1:25 p.m.
for our next round of speakers. If you are speaking in the next block, please do come down and do a mic check if you haven't already. Thank you. If you guys wouldn't mind, sorry, if you folks wouldn't mind making your way back in from lunch, we're going to go ahead and get started soon. We have a few bits of administration to take care of. The first thing is that if you haven't had a chance to sign in yet, maybe you came a bit late and went right into the talks, or you just missed where you sign in, we'd really love for you to find one of the organizers and actually get officially signed in; we're trying to keep an accurate count of attendance and make sure everyone gets shirts if they ordered them, so make sure to find someone in a red bang bang con shirt. Another thing is that we are starting a lost and found by these chairs over here. I had to move my backpack; I was like, that's not lost. Much like Homebrew, it has been found. But that is where you can start putting stuff that you find, and if you've lost something, you can look over there and see if it appears. Alright, so before we get started on this section, we wanted to take a moment to thank someone that's been really pivotable, that's not a word, she has been pivotal in one of the parts of bang bang con that I think really helps with the accessibility, whether you have a hearing disability or a visual disability or anything: the White Coat Captioning is amazing. I don't know how she gets all the technical terms right, like, so spot on, even when we're talking about such esoteric parts of tech, but Mirabai Knight has been doing an amazing job. It's actual work, and she does such a fantastic job; she's been doing this for bang bang con for quite some years now, and if you have a conference and you would like to bring accessibility of this fashion to that conference, we highly, highly recommend White Coat Captioning. Alright, so we're going to move on to our first speaker, and I have to ask: how do you let others know what's
going on when you're 8,000 feet up in the air in a plane without an engine? So we're going to start things low key after lunch. Alright, hi everyone! Quick poll to start: who knows who that guy on the right is? Nobody? Sorry? Anyway, that is Captain Chesley Sullenberger, who landed an airliner on the Hudson without engines, so technically that's a glider. That's not the type of glider I want to talk to you about, though; this is. If you squint, you can see me in the cockpit of that; it's pretty fun. So these things are called sailplanes, and the first question I always get is, how do you not instantly crash without an engine? If you've ever thrown a paper plane, it's basically the same thing: you continuously exchange altitude for the speed that you need to fly. But then again, my personal record is flying for 10 hours and 52 minutes, in a glider, without an engine, without landing in between; how does that work? I found a great clip on YouTube that explains this: essentially you can see the stuff that's heating up the air, and then the plane just circles in that heated-up air, and the air rises, and the glider with it. In reality it looks more like this: what we use is the sun, basically; it warms up the ground, that heats up the air, the air rises, and we circle inside of that, and you can see several other planes in that so-called thermal, climbing. This video is from a gliding competition from last year. The way gliding competitions work is, in the morning every pilot gets a task: start at the starting line, go to turn point A, turn point B, turn point C, and then to the finish line. These tasks are usually 200, 300 miles; you use several hours for that, and then basically the pilot who finishes that task fastest wins the competition day. We have about one or two weeks of competitions, so in the end all the scores add up, and the winner is the one with the highest score. Gliding competitions have one major problem, though: for spectators on the ground you don't really see a lot, because
you see the planes taking off and land, but if they fly away 200 miles, you don't really see them. So my personal goal was to find solutions to this. The early solution was using radio calls, but that doesn't work if you do it like every second, so you do it like every half an hour, to give an idea where everyone is; but it's not really that great, and the precision is also not that good. There's this other thing called short messages, but it's like you're texting all the time; it's just not good, don't do it. With smartphones it got a lot better: there are several apps now that can transmit your location data to servers on the ground. You can use special apps for gliding, you can use WhatsApp, you can use Google Maps, whatever, but they have several issues. One of them is that at high altitude you just don't have great reception, and things just don't arrive at the ground; and the other thing is that it quickly drains your battery, so that's not that good either. There is another system, though, and that's using a device called FLARM, which is short, I think, for flight alarm. It's essentially a collision avoidance system: the way it works is, it transmits your GPS location to all nearby devices, and by nearby I mean two or three miles, roughly, and that's enough to give collision avoidance data to all the other airplanes. It gets more interesting when we use ground stations, though, because with proper antennas on the ground we can reach ranges up to 30 to 40 miles, so you can see all the planes that are flying around your airport, for example. And it gets even more interesting if we link those up to the internet, because now we can actually see gliders flying on the other side of the country, on a map in the browser, which is really awesome. But it has one major problem (by the way, this thing is called OGN, short for Open Glider Network): these servers only speak TCP, and in a browser, well, browsers do speak TCP, but you don't get access to a raw TCP socket, so that's not
good. So we had to build something that translates TCP messages into something that we can use in the browser; in this case we decided to use WebSockets, because it's the easiest. What we built is called the OGN Web Gateway. In the end it looks something like this; I brought a screenshot. I would show it live to you, but the problem is the flight season in Europe hasn't started yet, and time zones also make it difficult, so the map right now is pretty empty, but this is roughly how it works: you can see all the planes flying around there. There is an experimental 3D version too, but I'm not going to show that for now. We built it in Rust, actually, and we used a framework called Actix; it describes itself as the most fun web framework. I mean, it's Rust, and we've already heard about the borrow checker, it can be frustrating, but it worked amazingly well. The way we've modeled it is, we have the supervisor actor, which is basically the main thing that does everything, and that starts up three separate actors. We have the OGN actor, which is the one that connects to the OGN server network and receives messages from all the planes that are flying around; we have the gateway actor in the middle, which is responsible for basically the business logic, forwarding to all the other WebSockets; and then we have the HTTP actor, which is basically the server that accepts connections. As I said, the OGN actor receives messages from the OGN servers and forwards them to the gateway actor, and the HTTP actor basically starts up a new actor for every connection that it receives, and those actors are stateful, which for WebSocket connections is quite helpful. And then these connection actors register themselves with the gateway, so the gateway knows: if I am receiving an OGN message from the OGN actor, I want to forward it to those connection actors. And it's bidirectional, because the WebSocket clients can actually filter what kind of updates they want, so for specific aircraft, for a
certain geographical window, etc. So that is the live API. We have also built a history API that you can query to get the location data of the last 24 hours for several aircraft. We first built that using Postgres with the TimescaleDB extension, because I thought, well, this is a typical time series payload. It turned out to be quite problematic: on the busiest day last year we had about 20 million records in there for the last 24 hours, and at some point we started reaching query times of 20 seconds; that was not so good, so I had to rewrite it. I talked to a few friends, and they recommended using Redis with a time bucket approach; it took a few evenings and then I got it working, and, well, it was a big success, we are now at 20 milliseconds, so that was good. The other part of the project is the OGN web viewer, basically the front end to the back end that is the web gateway. It's available at ogn.cloud if you want to try it out; as I said, the map is probably pretty empty right now, but it works pretty well. It's written in Ember, and it has another cool feature, and that is live scoring, which you can see at the bottom. I think this is probably the first project that tried that: we are now able, with just that collision avoidance data, to predict what the scores of the competition will be while the planes are still flying. And this was really great: I was helping a friend on the ground and working on the system in parallel, and it was quite amazing to see everything basically work out. We were able to get the proper scores; we were able to predict up to, I think, the second decimal point of the speed in the end, so that was pretty cool. Yeah, so that's all I have, thank you everyone, and if you want to know more about gliding or the things we've built, let me know. Alright, that was really fantastic! I actually have a pilot's license, and so that was like super exciting, except I would never be in one without an engine, so that's like living vicariously. So next up we have
Misty DeMeo, and she'll be speaking on let's translate old video games: how do they even work? Alright, thank you! So my name is Misty DeMeo; I work on a website that you might have heard of, and on a Mac package manager you might use, and that is not why I'm here today. I'm here today because I play too many old video games. Some of my favorite video games are an old series of Japanese role-playing games called Lunar. There were three of them released on the Sega Saturn between 1996 and 1998; two of those three were re-released on the PlayStation a few years later, and those PlayStation versions were released in English. But the problem for me is, I'm a Sega fangirl: I want to play them on the Saturn, and I really want to play that third one, which never came out in English and never came out for anything else. So I decided I'm going to translate all three of these games into English so I can play them, which is maybe a little aggressive, but I'm getting there. So let's start by finding some text. But first I want to give a few acknowledgments: the stuff that I'm presenting here today has been my own research, but I've been working with a few other people on this project, and I wanted to make sure to acknowledge them for all of the work that they've done on this too. So let's find some text. Unfortunately, as you probably already know, in coding everything's Unicode, right? It's 1996: everything is not Unicode. Unicode wasn't really in common use in Japan until the early 2010s, so we are not looking at Unicode here; we're looking at a few different things. There are two common encodings that we could be looking at here: Shift JIS and EUC-JP are the most common ones. Shift JIS is used on DOS, Windows and Mac; EUC-JP is used on Unix systems. There's also a third one called JIS X 0201. Those first two encodings are multibyte, like Unicode is, which means that a single character can be one byte or two bytes; JIS X 0201 is only a single byte, but that means it can't represent all of the
characters used in Japanese. Day-to-day Japanese has several thousand characters, and JIS X 0201 can represent up to 256. Also, some people decided to just throw caution to the wind and make up their own encodings, so there's that as well. My usual approach when I'm looking at something like this is just to take a look at a line of dialogue and divide it down into something small to see if I can find it. This is a screenshot from Magic School Lunar, which was released in 1997. That first word there, which means something like "this year", is three characters, and I'm going to take that in Unicode and figure out what it is in the encodings that we're looking at. Right off the bat I know that it cannot be JIS X 0201, because it includes characters that don't exist in that encoding, so it's a pretty good guess it's not going to be that. In each of Shift JIS and EUC-JP it's going to be six bytes, and those bytes are completely different from each other. To search for it I use a special version of grep called binary grep: it lets you specify hex data to search for instead of text, and then it lets you find that inside of files. So if we're lucky we might find a hit for one of these, and luckily, this time we did: this game uses Shift JIS, a mostly standard version of Shift JIS. The bit I've selected there is the word that we picked out, if you read Shift JIS in hex, which I'm hoping you don't; the rest of what follows is actually the rest of that dialogue box. So in this game it was actually really easy to find, and the game also happened to contain an uppercase English font as a part of this, so we can try replacing some text just to see what happens. Now, I said it was almost Shift JIS: they decided to simplify things by deciding it's no longer a variable-width encoding, it's always two bytes. Luckily, every character that exists in the one-byte region of Shift JIS also has a version of itself in the two-byte region, so you don't actually lose the ability to represent any particular
characters by saying you're not going to use one byte. This makes the text rendering routine a little bit easier: it means that instead of having to read one character and then figure out if you have to read another byte to get the rest of the character, you can always read two bytes as a part of your text rendering loop, every time. So that one was actually pretty easy; let's try something just a little bit harder. This is from one of the other games, called Lunar: Silver Star Story, which was released in 1996. The first word here is Aresu, or Alex, which is the main character's name. This word can actually be represented in all three of the encodings that we're looking at here, but if we look at the rest of this dialogue, we can see that it does contain some characters that aren't in JIS X 0201, so let's rule that out again and say that it's probably going to be Shift JIS or EUC-JP, and let's try binary grepping for it. And we get nothing, no hits. So now we have to try and find something a little bit harder here; encodings are fake, actually. At this point we could either be looking at compressed text, or we could be looking at a custom encoding of some kind. Because I don't hate my life, I'm going to assume it's a custom encoding and not compression, because that's easier to figure out. So let's take a look at the font. This is a 16 by 16 font, which means every letter in the game's font is 16 pixels by 16 pixels, with four colors, and we've got here a grid showing the first few dozen characters in this font. If we take a look right near the end of that first line and the beginning of the second line, we have the three characters that I pulled out earlier, in order. The order of the stuff that we have here is not based on any actual real Japanese encoding that exists, but it was probably generated out of the game's script files in some way, because the order of a lot of the early characters in this font is actually based on the order in which some of the
characters in the game are used in some of the script files. So that gives a hint that we might be looking at something where, instead of a font or instead of an encoding, we're looking at something a little bit simpler. These are at indices 15, 16 and 17, or hex bytes F, 10 and 11; or, if we assume that each of these is going to be two bytes, the same way that we saw the text was earlier, it'll be the six bytes that we've got down there. If we try searching for that, we find it! So this game is not using a real encoding; instead, the script is actually written using raw indexes into the font table, which is a little hard to work with, but at least we found it. So again I tried replacing some of the text with other things I found in the font, just as an experiment with the indexes in the game's font, to see: does the letter at that index show up if I replace this letter here? And thankfully it did actually work. But why would you do that? It seems a bit messy. It can actually be a little more efficient to read it this way than something real: instead of reading two bytes, looking it up in a table, then fetching the tile, you can skip that middle step. If you're mapping between a letter and something in your font, you just have a direct one-to-one mapping, and since a lot of times back then people were trying to optimize every single instruction they possibly could out of a pretty slow CPU, it might actually have felt worthwhile to eliminate one lookup for every character. So thank you, thanks for listening to me blather about weird old video games. This is my Twitter and my GitHub; I talk about old weird games on my Twitter, and the tools that I've written for this project are on my GitHub, and if you come chat with me afterwards, I will talk to you about old video games, this is a promise. Alright, thank you! That felt like a fantastic understatement: they're using raw indexes right into the font table, it made it a little more difficult. Alright, so coming up next we have Eric Fischer, and he will
be speaking on if/then/else had to be invented. So we're going to learn, I think, the origins of something we're all probably using multiple times every day when we code, but not really giving much thought to. Well said! Thank you. Can you hear me? Hi, I'm Eric, and I'm here to talk about what seems like kind of an absurd idea: that if/then/else had to be invented. If/then/else is how we talk about conditions in programming languages: if something is true, then do a thing, else do a different thing. That's just English, right? Except that it isn't: I can't use else as a conjunction in normal speech, only in programming languages. So where did this else come from? It's too microscopic a detail to have made it into any books on the history of programming languages, and it doesn't seem to have come from any of these sort of pre-computer sources or writings: in algorithms as they were written before there were computers, you find if yes, and if no, and if however, but not else. The first computer to be able to perform different instructions depending on the result of a previous calculation seems to have been the ENIAC, in 1946. Haskell Curry and Willa Wyatt wrote this report describing a program to invert a function; they used the name discrimination for the facility of making a decision based on which of two numbers was larger. The ENIAC didn't have an instruction called discriminate, though: it was programmed by wires and dials on plugboards, and the control panel for the instruction that made a calculation for a decision was connected by physical wires to the instructions that could follow it. But soon computers began to have enough memory that programs could be stored in memory rather than wired together: instead of a physical sequence of instructions there was a numerical sequence, and a few special instructions could cause the computer to jump to a different point in the sequence. Here, for example, is the conditional jump instruction from one of the first commercially produced computers, in 1949
It checked whether the last number calculated was negative, and if so it diverted the flow of control to some specified location in memory; otherwise it let control continue to the next instruction. This was meant for a specific usage pattern: if you wanted to do a task, say, 10 times, your program counted how many times it had done it so far and then subtracted 10; if the result was negative, the task wasn't done yet, so it jumped back to do another round. This idea was carried forward into the first higher-level programming languages, like Halcombe Laning and Neal Zierler's language for Whirlwind. Their conditional jump instruction worked the same way, except that the numbered steps of the program were algebraic expressions, not single machine instructions. The first really popular programming language, FORTRAN, generalized this idea by specifying jumps to three locations at once, depending on whether a calculation was negative, zero, or positive, and gave it the name "if". The three-way if was arguably more powerful than just checking for negative, but was probably more confusing, because it meant that every decision caused a discontinuity in the control flow, instead of programmers only having to think about the normal case that continued on and the unusual case that jumped. FLOW-MATIC, which was Grace Murray Hopper's predecessor to COBOL, made the three-way if a little easier to think about by talking about comparing two numbers rather than about the signs of numbers, and it also introduced the name "otherwise" for the case where the comparison wasn't what you were looking for, which is sort of heading in the direction of else. All of these programming languages were associated with one particular computer from one particular manufacturer, but in 1958, American and German computing organizations began a joint project to develop a standard machine-independent language, one where it would be natural for people to talk about programs, rather than natural to implement on some
particular machine. Each of these groups brought a draft proposal to their joint conference. The German authors made two big conceptual leaps in their reformulation of the if statement. The first was to let the conditional be controlled by any Boolean expression, rather than giving special priority to the less than/equal/greater than form. The second big leap was that instead of causing an abrupt jump in the control flow, their if statement only caused the flow to be temporarily diverted: at the end of the condition, whether the condition had been true or false, the program would resume at the "always" statement at the end of the block, and the only difference would be whether or not the subsidiary statements had been performed. The example on the screen shows two ifs as part of a single block, and you might think that the second one was intended to work as an else if does now, but they actually expected all the conditions to be evaluated even if one of them had already been found to be true, so if the statements controlled by the first changed the value that the second comparison depended upon, both blocks of statements might execute. Where the German proposal did have something like else was in a second, entirely different conditional form called "case". Unlike case as we now know it, their case was another way of writing Boolean expressions, but with the conditions set apart in a separate block from the statements they controlled; it sounded like they thought this form would be easier for more complicated comparisons. And here is the first appearance of the word "else", for the case where none of the other cases apply. Why did they call it else?
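The non-short-circuiting block of ifs described above can be sketched in modern Python (rather than the Algol 58 draft's own notation): with two plain ifs in sequence, both conditions are evaluated in turn, so a first branch that changes the value the second test depends on can make both blocks run, while an else-if chain skips the later test.

```python
# Two plain ifs in sequence, as in the German Algol 58 draft's block of
# ifs: every condition is evaluated, even after one has been true.
def sequential_ifs(x):
    ran = []
    if x < 10:
        ran.append("first")
        x = x + 20          # changes the value the next test depends on
    if x >= 10:
        ran.append("second")
    return ran

# The modern else-if form: later tests are skipped once a branch runs.
def if_elif(x):
    ran = []
    if x < 10:
        ran.append("first")
        x = x + 20
    elif x >= 10:           # never evaluated if the first branch ran
        ran.append("second")
    return ran

sequential_ifs(5)  # -> ['first', 'second']  (both blocks execute)
if_elif(5)         # -> ['first']            (second test short-circuited)
```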
They don't say. What they do say is that this document was originally written in German and hastily translated into English, so I think that a carefully chosen German word was probably translated as an archaic English word and then never revisited; unfortunately, we don't have the original German text to consult. The American proposal also had the idea of controlling statements by Boolean expressions. It didn't even have an if keyword: instead, an expression was followed by an arrow, and the expression controlled the statements that followed the arrow. The notation doesn't make it obvious, but unlike the German block of ifs, in the American form, if one expression in a block evaluated to true, it did short-circuit and skip the following expressions, like else if does now. The two organizations held a joint meeting and merged the proposals into a single document that they called the International Algebraic Language; before long, the language would be renamed to Algol. The if statement in the combined document looks a lot like the German proposal, but it eliminates the "always" statement to end the block. Instead, each if stands alone, and if you want multiple statements to be controlled by the same expression, you can use begin and end to group them together. There's nothing like else in this formulation of if; however, Algol 58 also had a second conditional form called "if either". In this form you could say "or if" to do the same thing that we now do with else if, but there's nothing equivalent to else by itself without another if. The story gets really muddy the next year, when several papers about Algol were presented at a conference. One of them was the paper where John Backus, who had previously led the Fortran project, introduced the idea of formal grammars for programming languages. Algol, in his description of it, doesn't have the "if either" conditional form; instead it has a keyword called "converge", which makes the top-level ifs inside its block behave like else ifs. It's not clear
whether this was meant as an idea for a better "if either", or whether it was an earlier idea that had actually already been rejected. Either way, "if either" would soon be replaced. In 1957, John McCarthy at MIT had written a proposal for the project that became Lisp, another extremely old programming language that still survives today. His proposal introduced the idea of the conditional expression, as opposed to the conditional statement. The important distinction is that an expression must always have a value, while a statement can simply not be performed if the condition that controls it isn't true. So his if function always ends with a clause called "otherwise" that gives the value of the expression when none of the other conditions evaluated to true. And finally, McCarthy's conditional expressions inspired Klaus Samelson to clean up and unify Algol's two separate conditional models at the end of 1959. He eliminated the entire "if either" form, leaving only the plain if, but added a clause called "else" that would be performed when the expression controlling the corresponding if had been false. You could chain conditions together with else if, just like the previous "if either" had allowed with "or if", but you could also use else by itself, for a final set of statements that would run if none of the conditions had been true. If/then/else was the single conditional form that then appeared in the Algol 60 report the next year, and it is the form that almost all subsequent programming languages have followed. Although it seems clear to us now, it was hard for people in 1960 to think about, and the report spends a page and a half explaining how it works, including this arrow diagram. As I mentioned before, the use of else as a conjunction sounds strange. Christopher Strachey's CPL programming language is the grandparent of C and therefore the ancestor of most current programming languages, and he refused to use else, calling it "ignorantly incorrect English". He thought we should write test
... then ... or instead, which doesn't sound very natural and didn't catch on. The MAD programming language also went its own way. It was known for its extremely long keywords, and in addition to using "whenever" instead of if, it used "otherwise" instead of else and "or whenever" instead of else if. It is worth noting the indentation in the MAD manual's example: it took years before most other programming languages adopted this style of indenting conditions, even though it now seems unimaginable to do it any other way. But even while CPL and MAD shunned the word else, they kept the form and only changed the vocabulary. Language designers still search for the perfect way to write a for loop, but the quest for the perfect conditional form seems to have ended in 1959. Thank you, Klaus Samelson, for giving us a tool to think with and a word to puzzle over, and thank you all for listening.

And next up we have Katharine Berry, and she'll be speaking about keeping abandoned watches ticking, so if we do end up with any watches in the lost and found over there, we will know what to do with them, or else we would not.

It's the callback! Hey there, I'm going to talk about keeping Pebble devices alive many years after the original company died. So first off, what was Pebble? Pebble made smartwatches, similar to Apple's Apple Watch or Google's Wear OS, but a few years earlier. They were originally funded on Kickstarter, gaining a more hacker-focused community. It also happens that I once worked at this company, which is perhaps related to how I ended up here. So what happened to Pebble? Well, it ran out of money and had to shut down. Fitbit bought the remnants of the company and promised to keep things going for a while, but it was pretty clear that everything would eventually fall apart. But because Pebble had gained something of a hacker focus, and because it had a different philosophy about what a smartwatch should be, it had amassed a somewhat impressive community of users and developers, and they weren't willing to just let it
die. Developers began congregating in what had once been known as the Pebble developers Discord server and schemed to keep it alive. There was reverse engineering of the Pebble app store, mirroring of all of Pebble's websites, and plots underway to reinvent the web services, and even to build custom firmware that could still be updated and run on the watches; some people went as far as working on new hardware, and some of them are here. So a year went by, the original shutdown date came and went, and Fitbit silently kept Pebble's services running, because it was more effort to not do that. As things kept running, people slowly lost interest in most parts of this rebirth project, and then Fitbit announced that Pebble was finally going to go down on July 1st of that year, and people should buy Fitbits instead. Unsurprisingly, a lot of people were just as thrilled by that option in 2018 as they had been in 2017, but unfortunately, most of the work had never actually gotten done. So what would we lose if we just let everything disappear? As it turns out, Pebble relied on a whole host of web services, which meant a lot of stuff was set to break when the servers turned off. Pebble had an app store, which you also needed if you wanted to keep having the apps you already had once your watch reset; there were language packs to switch languages, dictation to talk to your watch, a web app, timeline pins, and so forth. Not all of this would necessarily actually break, or at least not in ways that were immediately noticeable, because Fitbit updated the apps once more to enable a limited degree of offline functionality, but in practice that really only got you as far as being able to turn the watches on, and a lot of other things were still going to break. This is where I quite accidentally became an active member of the story, instead of just having been watching for most of the time. One of the services that Pebble ran was an open-source web-based IDE called CloudPebble, which I had originally built before they hired me.
Using the old official CloudPebble account, which for some reason I still controlled, I tweeted an offer to keep CloudPebble running, which wouldn't be terribly hard in theory, because it was an open-source program that I had written in the first place. However, this was broadly misinterpreted as meaning all of Pebble's cloud services, and I must admit, in retrospect, I can see why. But I was loath to disappoint everyone, and so a few days later, alongside some still-active Rebblers like Joshua Wise and ishotjr, whose name I had to ask about pronouncing, we announced Rebble Web Services, and also, for legal reasons, the Rebble Alliance LLC, a real company that actually exists. But even if we were to re-implement everything, how could we make anything use it? In the absence of anything else, the apps would still keep trying to talk to the server that no longer exists. Conveniently, as a debugging tool, the Pebble apps used what we called the boot server to look up the URL of every service they would ever hit, and you can point the app anywhere you like. Even more conveniently, the apps had a custom URL scheme, and if you hit that, they would switch over to use your boot server of choice. We originally built this so we could switch to the staging environment, but it turns out to be super useful in general, and also a critical security flaw. So how do we go about building a bunch of APIs to make all these apps happy? We reverse engineered them, and spent a lot of time staring at mitmproxy, poking at the app to try and figure out what it was doing, because I had mostly forgotten in the two years since I had last touched any of this. And of course, once we've built things, we have to run them somewhere. We don't really know how many users we'll have, we don't know how we're going to pay for it, and we are a bunch of clueless software engineers who don't really know how to run reliable services. So what are we going to do? We'll fall back on the buzzword of the week. So Rebble Web Services for the most part runs
on AWS Lambda, serves its files from S3, and has no servers that I have to think about, which is pretty great for me. Another exciting question we had while trying to build these things was to figure out what our scale might be. I figured there probably aren't that many active Pebble users around, and they'll only use us if they happen to hear about us from Reddit or something; I think about 1,000, tops. Seems easy enough. Well, then there were some news articles, and some more news articles, and even more, and this one, a tech site, said 100,000 users. And this is how many we actually have today: having built our infrastructure to serve 1,000 users, we actually have 100,000 users. But it was okay: our serverless servers were spinning up, so we had a whole bunch of concurrent non-instances, and everything was good. This worked so well, in fact, that I didn't even realize it had happened until I checked about two months later. How do we pay for all of it? We're running a whole bunch of non-servers and have to keep databases up somewhere, and that's before we even start thinking about API costs. We want to ensure this keeps running for as long as possible, as long as anyone wants to use it, which means we don't want to be dependent on, for instance, paying out of our own pocket, or making everyone pay. So, step one: make a budget, part of which you can see here. I figured roughly what we would cost per user across all our services, with a whole bunch of variables for changing assumptions, most of which are numbers I plucked out of the air with no real justification; all of them, of course, assumed orders of magnitude fewer users than we actually had. But servers turn out not to matter: the real costs would be API access. Assuming 100,000 users, that would be $220,000 a year to Google for dictation and $520,000 a year to IBM for weather. Paying three quarters of a million dollars a year out of pocket is not sustainable, to say the least; I don't have that much money. But it does perhaps suggest a strategy
for a sustainable solution: we'll make you pay if you want to use the APIs, and only if you want to use the APIs. Using our budget and making more wild assumptions, I figured that if I charged users $3 a month for weather and dictation, the minority of users who paid for it would cover their own costs and also subsidize the other users. It turns out I was right, so that was good. Actually requiring that you have a subscription is awkward, because Pebble didn't tell us who you were when it looked up weather or when you sent dictation, because it didn't care, but we do. We can, however, control the URLs for those services, so I guess we'll just stuff the authentication into the URLs. But this is a pain too, because the boot server doesn't have authentication either, so it's time for crazy hacks. What we're instead going to do is stuff authentication into the boot URL we give the app: since we need to make you click a magic link on your phone anyway, we make you log in first and stuff a magic link in there. So now you have your authentication token in your boot URL, which then sticks it into other URLs, some of which actually have to go in DNS, because we couldn't change the path, which is extra fun and also super secure. But it all works. So there's plenty more I could say, but I don't have the time to say it, so find me later if you'd like to talk more about this. But I'd like to wrap up by mentioning just a few of the people who continue to make this work, whose names I will mostly not try to pronounce. Between them and many others, they are handling tasks like administration, firmware and even hardware development, interface design work, supporting the many users we unexpectedly had, and even implementing new services, like one we got a PR for two weeks ago that I hadn't actually built yet. So that's all I have, and I forgot to give a website, so bear with this.

Awesome, that's fantastic. Also, I will never look at APIs the same way again; that's quite a bit more expensive than I would have guessed if I were making my fake budgets. But
cool, so we've reached the end of this little section of speakers, and there's actually going to be a coffee and snack break. Coffee is going to be out where the food was previously, over on the registration desk; we are going to have pistachios that were processed in a peanut-free facility, and tangerines. And be back in here for the next section at, shout it out if you know it, otherwise I'm going to say, 2:35. If you are one of the speakers in the next section, if you could come forward before break and do a tech test, that would be fantastic.

... released a year later, and that was the first fully computer-animated film, and so you can say that everything that you see in Toy Story is a triangle. Just so we're all on the same page, when I talk about computer graphics, I just mean images or videos that are drawn or created by computers, and rendering is just the verb we use to describe the process of computers drawing. Because I don't have a computer science background, I had to learn a lot on the job when I started working on map rendering, and one of the first things I learned was that everything that appears on the screen when the map shows up is, at its most basic form, a triangle. In fact, with the exception of points and single-pixel-wide lines, triangles are the only thing that most modern graphics hardware can draw. Now, of course, when you play a video game, explore an interactive map, or watch a beautifully animated Pixar film, you don't see triangles, and that's because there have been a few decades of research by very smart people into different techniques for how to use triangles, math, and physics to render rich, dynamic, and even photorealistic scenes with computers. In fact, there are dozens of subspecialties in computer graphics that bridge the studies of mechanics, optics, light, color, design, art, computational geometry, electrical engineering, and more, so if any of those things are interesting to you, I highly recommend learning a little bit more about
graphics; you might like it. Okay, so why do computers render mostly triangles? I didn't really know the answer to this question myself for a while after I started working as a graphics engineer professionally, and it turns out triangles became a standard in the graphics industry because of the need for drawing to be very fast, and because of some convenient geometric properties triangles have that make drawing them a lot easier. I wish that I had done the quality of archival research that Eric Fischer did, to actually figure out when triangles were decided upon, but I do not know; I did some light googling, but that's it. Okay, so before we think about what makes triangles a good shape for computers to draw, it's helpful to understand what actually needs to happen for a computer to render an image. The software on your computer needs to tell the display which color to paint each and every pixel it controls. Usually there is a special buffer of RAM, sometimes on a graphics card like a GPU, sometimes just in your regular memory, called a frame buffer, that contains a full frame's worth of pixel data and is used to drive the display, monitor, projector, whatever is being used as the graphical output of your computer. A frame buffer is really just a memory buffer; you can think of it as an array that has red, green, and blue color values for each pixel in the display. Some frame buffers have alpha, or opacity, values too, but that's not sent to the display; it's pre-multiplied with the red, green, and blue values to give you the final color that you see. Now, in order to provide a smooth user experience, graphical applications like a browser or Illustrator, things like that, and video games, need to render 60 frames per second, so however a computer wants to populate the frame buffer, it's going to have to do it very, very quickly. To figure out what color should be assigned to each pixel, the graphics engine needs to project everything being drawn into two-dimensional screen
coordinates, and evaluate all of the pixels in the frame buffer as either inside or outside of the objects being drawn, and finally assign a color to all pixels based on the features they fall within and the programmer's instructions; this part of the rendering pipeline is called rasterization. Okay, back to triangles. What makes triangles a great fit for these kinds of operations the computer has to execute to make images appear on your screen, and why did the architects of modern computer graphics choose triangles instead of some other shape, like quadrilaterals, circles, or arbitrary polygons? Well, from a mathematical standpoint, triangles have a bunch of cool properties that make them particularly convenient to draw. You probably know that a triangle is a polygon with three vertices; it's the simplest polygon there is, and three vertices is actually also the minimum number of vertices you need to cover any two-dimensional area. The interior of a triangle is also extremely well defined mathematically, and a triangle is always convex, which are both properties that make algorithms for determining whether a point is inside or outside of a triangle extremely fast and deterministic. Also, it turns out that all polygons and polyhedrons (a polyhedron is a three-dimensional, polygon-like, flat-sided shape) can be exactly represented by a finite set of triangles, and similarly, arbitrary surfaces in three-dimensional space, like this teapot, can be approximated by a triangle mesh. So triangles can help us represent virtually any two- or three-dimensional geometry that we want to draw. Computer hardware and computer software could be designed to draw other shapes, but the math for determining whether a point is inside or outside of a triangle is much more quickly calculated and easily optimized than point-in-polygon algorithms for arbitrary polygons, and choosing one type of geometric primitive ultimately allows every part of the rendering pipeline and software-hardware system to be intensively
optimized for a single operation, which, as you all probably know, makes things a lot faster and easier. So in computer graphics contexts where speed is not as important, like animated films, which are rendered on massive networks of computers called render farms, where frames take much longer than 16 milliseconds to render, those kinds of use cases often use quad meshes instead of triangle meshes to represent their characters and objects. But they're animating the quads in three-dimensional space before they get drawn by the GPU: because they have more flexibility and don't need as fast performance, they can use quads and accept the tradeoff in speed, but ultimately GPUs are still designed to draw triangles, so the quads are triangulated at the last second before they're drawn by the computer. Another property of triangles that is important for rendering is the fact that interpolation between vertices over the surface is well defined, and it's defined by a special coordinate system called barycentric coordinates. Barycentric coordinates are a way to describe any point on a triangle's surface as a weighted combination of the three vertices, which linearly interpolates the values at the vertices over the triangle's interior. Being able to interpolate in a well-defined way is useful for computer graphics because it allows you to specify vertex-specific attributes, which we denote by an underscore there, that can then be interpolated across the surface of the triangle for each interior pixel to determine the final color value. The most common attributes are colors, texture coordinates, which I don't have time to talk about, and normal vectors. So here's a triangle with three different colors assigned to the different vertices, and the resulting shading from interpolating across the surface with barycentric coordinates. Within the interior of the triangle, the barycentric coordinates always add up to one, so you can think of, like, alpha, the coefficient on vertex one, beta on vertex two,
gamma on vertex three. And interestingly, this is one of the cool geometric properties of triangles: at the intersection point of the three medians of a triangle, which are the lines that connect each vertex with the midpoint of the opposite segment, the barycentric coordinates are all equal. That's the center of mass, if the triangle is a physical object, and so the color at that pixel is a third of each of the colors, summed all together. The math involved in describing and proving the barycentric coordinate system is very cool, but I don't have time, and I'm also not qualified, to go into more detail on that. The last triangle property I want to talk about is that all vertices of a triangle are guaranteed to be coplanar, and one of the reasons this is important is because 3D computer graphics uses simulated lighting to enhance rendering, so that objects rendered on a two-dimensional screen appear to be three-dimensional. Without light and shadows, three-dimensional objects look flat, like this. How much a surface is illuminated or shadowed depends on its orientation with respect to the light source: the brightness of a point depends on the angle between the light source and the normal vector of the surface, which is the perpendicular vector. Triangles are planar, so the surface tangent is constant, and the normal vector will be constant over the surface as well, so that's important. And when you render a triangle with a lighting component that is based on the surface normal of the triangle, you get this kind of faceted look, which isn't actually desirable if you're trying to visualize smooth surfaces. But it's still important, because if you take that in combination with the ability to interpolate smoothly over the surface of a triangle, you can assign an attribute to each vertex of a triangle as the vertex normal, and interpolate that over the surface of the triangle to create an illusion of a smooth surface with your shading code. So, to recap: triangles are great, and some reasons why include that they can
define or approximate all of the shapes and surfaces we want, they have a well-defined interior, their vertices are guaranteed to be coplanar, you can linearly interpolate across their surface, and, most importantly, they allow you to optimize for a single shape. Um, why is this not... thank you. And if you'd like to learn more, three.js is really cool to play around with, there are some cool tutorials, and there's lots of resources online for learning computer graphics. It's not that hard, you can do it, and it's really fun, so thank you.

Next we have Jerry, and he will talk about fast code by removing all branches.

Awesome, okay, that sounds like it's working. I'm already mirrored, so let's not worry about this; nothing can go wrong there. Okay, hi, I'm Jerry, and I want to actually confess to a terrible code crime, and that's what this talk is basically about. The crime I want to talk about was committed in 2005, when I was working on migrating some Delphi serialization code to .NET 1, and the constraint on that was to try and make it at least as fast as the Delphi code, because we had a real-time system, basically the electricity grid on the eastern seaboard of Australia, dependent on this working. Now, the first problem with serialization is that reflection needs to be fast enough to actually get the job done. Basic reflection in .NET is pretty slow, 1,000 times slower than just accessing the property directly. Don't worry about reading all of the text; I've highlighted the most interesting bits, and I'll share the slides after the talk if I can. So basically, to get to reasonable performance for the reflection itself, we had to get to about a factor-10 difference or so. For the first approach, I had to basically create code-generated fast reflection myself, because there were no libraries available off the shelf yet that could do that. So I took a fairly basic approach: I basically created, for each type that I wanted to do reflection on, classes generated at runtime via code gen, which are basically big switch statements that can return the values of properties. This was
close, but not quite good enough yet: it was only 20 times slower than direct access, but I needed to get to about 10 to be fast enough. The slowest component in this switch statement is the cast, that cast to SomeObject there, so ideally I want to get rid of that cast. Unfortunately, that's not valid C# code, but when you're doing code gen, you're emitting the cast explicitly yourself, and I thought: what if I just don't? Turns out that works brilliantly, and it's exactly the performance I needed to get this going. The downside is, if you throw the wrong kind of object type into this, you actually do crash the .NET VM itself, so: sharp tool, be very careful. Now, the next thing, having solved the reflection problem, is how to make serialization fast. Where are all of the performance problems? Basically, serialization is just looping over the properties on types and figuring out what to do with them. There are a lot of complex decisions involved: figuring out whether you need to skip the property, figuring out what the name is that you need to serialize the property out as, the property types, and this is only a few of the decisions; there were way more in the actual code than I can show here on the slide. Quick aside: .NET has got this thing called delegates; think of them basically as strongly typed function pointers, function pointers with parameter types included. What I did in my first step is, rather than directly making those decisions and writing out the values, I created a level of abstraction. I created a processor concept, which takes an input object and writes to a binary writer output, and I created a factory method that makes all of the decisions and returns a delegate that does the actual writing. So this doesn't do the actual serialization yet, but it's the first step towards it. Now you could use that to get the same effect as before: you loop through all the properties, you create the processors for them, and you invoke them. Obviously, that is not a performance win yet; this is
just as slow as where we started. So the interesting thing is that you can split these two parts of the operation: you can first do all of the property processor creations and throw them all in a list, just throw all of the delegates in the list, and then for the execution, you just loop through the list and call all of the methods, and you don't have to do both at the same time. You can create another processor for the whole type, which does that looping through all of the processors and executing them, and create a cache for them against the type. So now, when I use that bottom line there, however many times I want, wherever I want to serialize the type out to my output stream, only the first time do all of the decisions get made; everything after that is just straight method calls, one after the other, no decisions involved. A bit more abstraction now, because not all serializations are equal: binary serialization is fairly straightforward, but when you're doing XML you also need to worry about converting things to strings, so there's an extra level of abstraction here. I basically introduced the getter concept, more delegates, basically: get a property from this object. Then you can create factory methods for that as well, for integers, for booleans, whatever. Then you create a second delegate concept for transforms, transforming values from one type to another, which you can get through factory methods as well. There are a lot of factories involved and a lot of lookups involved, but none of that matters if you're doing the lookups only once, so that's not actually a performance concern. And when you then want to do XML serialization, you basically extend the getter concept and create a string getter factory. If the property that you want to serialize is already a string, good: just return the same thing as before, a straight getter on the reflection, and all is done. If that's not the case, there's a bit of magic here, which I don't have time to talk through all of, but you can use some slow reflection tricks to create these delegates
Think of T as the actual native type of the property here; it doesn't matter, it's only temporarily relevant while we get the getter. Then it transforms from that actual type to string, and I mesh them together, wrap the transform around the getter, and hey, I've got a string getter. The good thing is that for XML serialization everything then looks like a string; it doesn't matter, everything is a string. An interesting side effect, when you start thinking of serialization as a composition of getters, transforms and processors, is: why should we use only the hard-coded ones that I provided myself? So at that point I also introduced an interface on the serialization concept to inject transforms, for example, so you can set them globally, by type, by property, whatever. And the cool thing is that it doesn't cost you anything: these decisions get made once, and everything is already a transform invocation anyway, so who cares that you're adding this flexibility in? The only cost you pay is for the actual logic that you want to execute to do the transform, in whatever form you want, which is the cost you can't avoid anyway. Now, there are obviously a few more complex properties that I also want to talk about. If you have a property that's an object or a collection, you can do the same kind of thing: we create property processors for those object properties, but the factories work a little bit differently. Starting with the object processor: if you have an object property, like Nested (it could be anything there in the example), it's fairly straightforward. You return a processor again, and what the processor does is get the object property, create a type processor for the type of the item that it found, and invoke that. Remember that those are all cached, so it doesn't matter; it happens only once, and after that it's just method invocations all the way down. But what's even cooler is the sealed concept that .NET has, which is the equivalent of making a type uninheritable.
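As a rough illustration of the getter-plus-transform composition just described, here is a Python stand-in (not the actual .NET delegates; the Order class and its field names are made up for the example):

```python
from operator import attrgetter

def make_getter(name):
    # "Getter" delegate: fetch one property from an object.
    return attrgetter(name)

def make_transform(target_type):
    # "Transform" delegate: convert a value to another type.
    # In Python, types like str are already callables that convert.
    return target_type

def make_string_getter(name, prop_type):
    getter = make_getter(name)
    if prop_type is str:
        return getter             # already a string: plain getter, done
    to_str = make_transform(str)  # otherwise wrap the transform around it
    return lambda obj: to_str(getter(obj))

class Order:
    def __init__(self):
        self.total = 42
        self.customer = "Ada"

g = make_string_getter("total", int)
print(g(Order()))  # "42"
```

The composition is decided once when the string getter is built; calling it afterwards involves no type checks.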
At that point, what I know about the Nested property there is that it's always going to be a Sum object; it can't be anything else. So I can exploit that in my serialization: I can do the type processor lookup once and return a delegate that just invokes it on whatever is in the property, because I know it will always be a Sum object. Now I'm not even doing the type lookup anymore. Even more interestingly, you can do the same for collections. If I have a collection of something that is of a fixed type, then I can get that type processor outside of the loop as well and just run it over all of the elements in the collection with wild abandon. At that point I basically don't have any decisions left: when I do my serialization, every decision gets made exactly once for every property where necessary, and the rest of the execution is just as fast as it can be. Now, I did lie a little bit at the start: this isn't quite the same as the Delphi serialization that we started from. For obvious reasons, Delphi has runtime reflection, value properties, object properties, collection properties; but for .NET, what I actually built on top of that was the ability to do property skipping and property renames, because we wanted to change the object model during this transition as well, plus type transformations as you saw before, and a whole bunch of other niggly things besides. Despite all of that, the .NET serialization was actually twice as fast as the Delphi one that we started with, which had hand-tuned assembly in it, so I felt pretty good about that. The principle, I think, is more broadly applicable than just serialization. The idea is that you're looking for complex conditionals, or really wild trees of conditionals, in your logic, in your loops, and you want to look for conditionals that depend on inputs that are fairly stable over the execution. Well, they have to be completely stable over the execution, but that can be for the duration of program execution, maybe a TCP session, or a significantly lengthy algorithm.
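The sealed-type trick of hoisting the element-processor lookup out of the collection loop might look like this as a Python sketch (all names here are illustrative, not from the talk's code):

```python
_type_cache = {}

def processor_for(cls):
    # Stand-in for the cached per-type processor lookup.
    if cls not in _type_cache:
        _type_cache[cls] = lambda obj, out: out.append(f"{cls.__name__}:{obj.value}")
    return _type_cache[cls]

class Item:
    def __init__(self, value):
        self.value = value

def make_collection_processor(element_type):
    # The element type is fixed ("sealed"), so look the processor up once...
    element_proc = processor_for(element_type)
    def process(items, out):
        for item in items:  # ...and run it with no per-item decisions
            element_proc(item, out)
    return process

proc = make_collection_processor(Item)
out = []
proc([Item(1), Item(2)], out)
print(out)  # ['Item:1', 'Item:2']
```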
What you then do is split those decisions into a create-processor kind of thing, with the rest in an execute-processor kind of thing; you cache all of those create-processor steps, and you make all of the decisions exactly once. Now, a final thought: why didn't we do more code gen, given that we already had code gen to make the reflection fast enough? It turns out that code gen is very hard to get right; you can very easily break your system in very non-trivial and hard-to-debug ways. Besides which, we had already beaten the Delphi performance, so I wasn't really incentivized to look further. And on top of that, at the time all of this was happening, this serialization was outperforming disk IO and network IO, so there was literally no way to get more data out there anyway. That's pretty much 10 minutes. If you want to know anything more, just find me here or hit me up on that handle.

So our next speaker is Max, and they will be talking about making blackout poetry with computers.

Okay, so hi there, I'm Max, and I'm going to be talking about making blackout poetry with computers. Before I talk about that, I should probably talk about what blackout poetry actually is. For those of you who might not be familiar, blackout poetry looks like this, or this is at least one way that it looks; this is newspaper blackout poetry specifically. This was created by Austin Kleon, and Austin Kleon has a blog and a book about newspaper blackout poetry that are, I think, specifically credited with popularizing blackout poetry to many of the audiences that are familiar with it today. You can see here that basically what he's done is he's taken these pages of newspapers, and he's gone in and blacked out, scribbled over, most of the text, and he's left behind just a few words. So you can read these out for yourselves. I like them;
they're often kind of short and funny and punchy; this is the style that he prefers. I'm going to read out a couple here: "after a few years in the world, every person on the planet is an archeological artifact". And then we've got one that describes my state of mind last night, when I was too excited about this talk to really sleep: "24 tabs open and no idea of the time". This one really speaks to my soul, and you can get a sense of why I like this so much. There's a history to blackout poetry as well, and one of the things that I really like about it is that it can be read as a form of creative subversion of a source text. On the left we've got a page from A Humument by Tom Phillips; that might be pronounced "a humument", I'm not sure; it's "a human document". What he's done here is he's taken a second-hand book, and for each of its hundreds of pages he's basically scribbled over it, removed most of the words, drawn in his own custom illustrations over a lot of it, picked these weird colors to scribble over the source text: "so dreadful that photograph, posed I dare say, of the dark work at Smyrna, and what is this, the works at Bucharest going silently on". I have no idea what that means or what it has to do with the source text, probably nothing, but it's got this interesting reinterpretation of the source text, where it just has some words in common and that's it; there's no other real significance to it. And then on the right-hand side we've got something that I generated with the tool I'm talking about today, from a neural network, machine learning, blog post: "I believe these times this neural light is a facade". A well-placed warning regarding certain emerging technologies, I think. And what you can see here is that even though the author's intent was maybe even to talk up these neural network technologies, you
can take blackout poetry and use it as a way to subvert that meaning, to do something that the author would never intend with those words. And one more thing that I've encouraged people to do is: when someone powerful does a bad post, do a bad poetry to it with the magic of computers. So if someone in venture capital, for instance, has made a bad blog post, or if the New York Times has posted a bad opinion piece, you can make some really great blackout poetry that reveals the hidden biases in it, that uses the words against the author, stuff like that. Another thing I really like about blackout poetry is that it's a subtractive form of procedural generation, if you implement it on a computer. What this means is that a lot of the time, when we look at procedural generation, we think of it as a cumulative process, where you're building something out, or taking a small seed and expanding it. You see this in text generation for Twitter bots; you see this in level generation for Minecraft or other kinds of games. But there are other forms of creativity that are more subtractive, or transformative of the input, and one of these is sculpture. So you can think of blackout poetry as being a form of sculpture with marble, but for text. And on the right you've got something that is called The Days Left Forebodings and Water. Liza Daly implemented this for National Novel Generation Month, which is exactly what it sounds like; she wrote a blackout poetry generator that she ran on something by Mary Wollstonecraft Shelley, and it produced pages like this. This is the algorithm that I'm adapting in my own work. So, getting around to that: implementing a blackout poetry generator. This is something that I've already done; you can go look at this web page, and it has the browser bookmarklet that you can download and press the button to turn a web
page into blackout poetry. One thing you could do is black out random words; something called The Deletionist already exists and does this, but that's not exactly the structural stuff that I enjoy about the examples I showed you already. So instead we're going to do something that's more natural-language-processing heavy. We take a source input, in this case a sentence from my blog, "space and science fiction frequently plays the role of the final frontier", and we go through it and say, for each of these words, what part of speech is it? There are a lot of libraries that do this; I use something called pos-js, part-of-speech JavaScript, but there are any number of tools that let you do this. Then we can take that, expand it out to a whole bunch of copies, and do some fuzzy sequence matching on those copies. For each of these I'm creating a matcher. A matcher is a little object that has a pattern of parts of speech that it's looking for: the one up top is looking for a noun followed by an adverb followed by a verb followed by a determiner followed by an adjective followed by a noun, the one below that is looking for a noun followed by a verb followed by a determiner followed by a noun, and so on and so forth. Crucially, between these words they can also have other words that they're just ignoring, and those are the words that get blacked out in the output if that matcher wins. So what we do is we go in and say to each of these matchers: here's the first word, do you want to match this word? It's a noun, "space", so all of them could say yes; randomly, two of them flip a coin and say no, but the rest of them say yes, so they keep that word. Then we move on to the next word and do the same thing, over and over again, until we're at the end of the text, and we get five distinct readings of this text.
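A toy version of this matcher idea might look like the following Python sketch. The tiny word-to-tag table is hand-written for the example (a real system would use a tagging library like pos-js), and the matching here is deterministic and greedy rather than coin-flipping:

```python
# Each matcher wants a sequence of part-of-speech tags and "blacks out"
# (skips) every word in between. The tag table below is invented for this
# example; it is not the output of a real tagger.
TAGS = {"space": "NN", "and": "CC", "science": "NN", "fiction": "NN",
        "frequently": "RB", "plays": "VBZ", "the": "DT",
        "role": "NN", "of": "IN", "final": "JJ", "frontier": "NN"}

def match(pattern, words):
    """Greedy fuzzy match: keep a word if it carries the next wanted tag,
    otherwise black it out. Returns the kept words, or None on failure."""
    kept, want = [], list(pattern)
    for w in words:
        if want and TAGS.get(w) == want[0]:
            kept.append(w)
            want.pop(0)
    return kept if not want else None

words = ("space and science fiction frequently plays "
         "the role of the final frontier").split()
print(match(["NN", "VBZ", "DT", "NN"], words))
# ['space', 'plays', 'the', 'role']
```

A matcher that runs out of text before finishing its pattern returns None, which corresponds to the "failed to match anything" outputs the talk throws away.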
In this example: "space frequently plays the final frontier", "science plays the role", "fiction plays the final front", or "fiction plays the frontier", or "space frequently plays all of what you're interesting". And then this fourth one here has failed to match anything: it got "space", it got "plays", it got "the", and then hit the end of the text and can't match anything more, so at that point you're left with a bad output that you can just throw out. Okay, so that's cool. There's also the grammar bookkeeping that the matchers have to keep track of, because it turns out that some verbs and some nouns are incompatible with one another: you need to worry about the count of your noun versus the count of your verb; the pronoun "I" needs to be handled specially, because you can't say "I is", things like that; and with the articles you care about what the first letter of the following word is going to be. Okay, so that's all well and good; generation is fine, and we've produced some really cool stuff with it. We can produce stuff like this, which is one of my favorite outputs ever: "the mission is the infinite variability that poetry can speak". I think this is really, personally, beautiful; I love this a lot. But it's also a beautiful lie, and the reason it's a lie is that these are two separate paragraphs, and the computer has no idea that these two paragraphs are related at all. The generator looks at each paragraph and tries to turn it into a separate piece of blackout poetry. So we've actually got two sentences here: "the mission is the infinite variability", which is okay, I guess, and "that poetry can speak", which I think is not actually even grammatical. The really great thing here is that because these two things are juxtaposed, even though the computer has no idea they're juxtaposed, it produces a better work of poetry overall than the computer itself can actually do. So what I really want, and what I really found
myself doing more, was applying a human filter to the stuff that was coming out of this generator, being like: okay, it's produced this stuff for me, and I'm going to use it as raw material for my own poetry creation. So here, these are all from a single source text; there are six distinct entries from a single source text. What I did was rerun the generator on this text over and over again. You can see on the left that these are all instances of what the generator produces when run on the same paragraph: "a virtual reality is a language", "a world is a language", "a world is a way". These are all variations of the same underlying text, but you have to press the button over and over again, randomize it over and over again, to get them. So what the tool is not doing for me is giving me the ability to explore the space in the way that I find myself doing more and more. What I really want to move towards in the future is empowering users to talk back to the source text, giving users more of a creative choice, and letting them do things like this to Reddit comments. So what I'm actually doing now is implementing a mixed-initiative tool, and that will be at that same URL eventually, when I get it working, which I wanted to do for this talk but which is not happening. What I really also want to do is enable users to create by reacting. The creative process is made of decisions, right? There are a whole bunch of decisions; some of them are consequential, some of them are totally inconsequential, and it's really easy to get hung up on the inconsequential ones. So what you really want to do, and what I really want to do here specifically, is narrow down the decision space of the user to a set of options they can pick between, where each of those options is a good option. So let's start with something like this. We've got some text here; this is from Kate
Compton's blog. We black out everything on it, and we have these three lavender things that are highlighted. When you hover over one of these, it's a possible word that you can pick for the first word of your poem, and you can click on one to lock it in. So right here we have "the bell"; we like "the bell" for some reason, so we click to lock it in, we get three more, and we can keep on doing this indefinitely until we get to the end and we have produced a poem. Basically, we're repeating this loop until we are a poet, and at any stage you only have to worry about three distinct possibilities; you don't have to worry about the whole poem, and you can also always go back to a previous step if you want to. So basically what I want to do is use this generative process to give players, or users, a way to explore the possibility space of what they could create, to explore the decision space without being overwhelmed by the size of it. To implement this, I'm going to be exhaustively searching poem space; my old algorithm is very probabilistic and can miss a lot of possibilities. This is some JavaScript console output from when I was killing my browser by making it loop and search the whole space on one of my blog posts. And then I'm also going to start using something called ASP, answer set programming, which is basically a form of constraint logic programming, to more naturally encode some of the rules that we're looking at for matching sentences within these larger texts. In this one we have the example "the big moose", which is utter gibberish, but it works for this example. You can see that each of these words has a part of speech, each has a position in the sentence, and each has a count, or compatibility with other words. So we write rules, basically logic rules, looking for a way to express the constraints in such a way that we can find subsets
of larger texts that look like the kind of thing we're trying to find. Okay, so, takeaways from all of this. Number one: blackout poetry is cool. Number two: subtractive procedural generation is cool. Turning generators into mixed-initiative creative tools is cool. Enabling users to create by reacting is cool. This thing that I've created already is cool, and there'll be a better thing there in the future, but we're not worried about that right now. And thank you.

That was cool. The last speaker of this block is Michael, and Michael will talk about generating fractals with SQL queries.

Can you hear me? Cool. So, as part of my day job I lead the database team at Heap, where I have to optimize a petabyte-scale Postgres cluster, and in order to do my job effectively I need to be super familiar with the ins and outs of SQL and all the different kinds of tricks you can do. So I'm going to show you how you can use some of the techniques I'm familiar with to generate fractals with SQL. Just a brief overview of SQL first. In SQL you operate on lists of rows, which you can also call tables. This table represents a list of books. In a table you have some set of fields; for books, every book has a title, author and cost, and you can see for each book the corresponding values of those fields. The most basic operation you perform on a table is a select statement, which is "give me these fields from the table". In this case we're asking for the title and author from the books table, and we get the title and author of every book. Now, an operation I'm going to be using a ton throughout this presentation, because it makes SQL queries a lot more legible, is a CTE, which is a way of defining variables in SQL. In this case we're defining Dan Brown books to be the result of running the query that gets all books authored by Dan Brown, and then you can use Dan Brown books as
if it were an actual table later in the query. So below, we select the number of Dan Brown books by using Dan Brown books, which refers to the results of the query from before. Now that we've gone through a few basics of SQL, let's take a look at how we might go about generating a fractal with it. The Sierpinski triangle is a very well-known fractal, and one really interesting way that you can generate it is with bitwise AND: if you take a grid and you mark all the points where the bitwise AND of the row number and column number is zero, this gives you the Sierpinski triangle, surprisingly; that's what this example here shows. In order to write a SQL query that does this, we can use a three-step process: first write a SQL query that produces one row for each cell of the grid, then go over that and mark all the cells with a bitwise AND of zero, and then concatenate all the strings together to form the final fractal. We can generate the cells using this query here, which uses two different Postgres features, generate_series and cross join. generate_series gives you a list of numbers, so we call it once to get the row numbers and once to get the column numbers, and then a cross join takes the results of two queries and generates every pair of values between them. In this case we're using the cross join to get every pair of row number and column number, which gives us our grid. For marking the points we can use the feature case when, which is like if/else in any other programming language: when the bitwise AND of the row number and column number is not zero, we mark the point with a space, and otherwise we mark it with two asterisks. You can see below that we have marked all the cells with a bitwise AND of zero with two asterisks. Then we can use this last little query to combine all the results together. It works in two steps: first combining all the cells in each row together using the string_agg function.
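The same three-step recipe (generate cells, mark them with the bitwise AND test, concatenate) is easy to mirror outside SQL; here is a Python sketch, with itertools.product standing in for the cross join and string joins standing in for string_agg:

```python
from itertools import product

def sierpinski(n):
    # Step 1: one entry per grid cell (like generate_series + cross join).
    cells = product(range(n), range(n))
    # Step 2: mark cells where row & col == 0 (bitwise AND), else blank.
    marked = {(r, c): "**" if r & c == 0 else "  " for r, c in cells}
    # Step 3: concatenate cells into rows, rows into one string (string_agg).
    return "\n".join("".join(marked[(r, c)] for c in range(n))
                     for r in range(n))

print(sierpinski(8))
```

Running this on an 8 by 8 grid already shows the triangle's self-similar holes, just like the 13-line SQL version described above.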
Then string_agg is used again to combine all the rows into a single string, which is our result. And if you run this query, a 13-line SQL query, not too long, you get the Sierpinski triangle. So the next question you may ask is: is there any way we can reuse the same idea to generate other fractals? If you take a look at the query we had before, there's this little part here where we use bitwise AND to determine whether or not the point should be part of the fractal, and if you think about it, we can actually replace this with an arbitrary predicate. For a given fractal, if we have some way of determining whether or not each point should be part of the fractal, we can put that predicate here, and we'll have a query that generates the given fractal. A well-known fractal that has this property is the Mandelbrot set: you can test whether or not a point is part of the Mandelbrot set by repeatedly running this formula, and if the formula does not diverge to infinity, the point is part of the Mandelbrot set. In order to express this definition in SQL, we need some way to express iteration, which you can do with a Postgres feature known as recursive CTEs. For a recursive CTE, you specify an initial query and a recursive query. Postgres will first run the initial query, then take the results of that and feed them into the recursive query; it will then take the results of the recursive query and feed those back into the recursive query, and keep doing that until the recursive query returns no results. As an example, we can use this to generate the numbers 1 through 5: the initial query is select 1, which gives us the row 1, and then the recursive query is select i + 1 as long as i is less than 5. You take the result from the initial query, 1, and feed it into the recursive query, which gets 2, and then 3, and then 4, and then 5; and since 5 isn't less than 5, the query then returns no results, and you get the numbers 1 through 5.
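The escape-time membership test can be sketched like this in Python, with an ordinary loop playing the role of the recursive CTE (the iteration count and bailout radius below are common defaults, not necessarily the values used in the talk):

```python
def in_mandelbrot(x, y, max_iter=100, bailout=2.0):
    # Iterate z -> z^2 + c; if |z| stays bounded, treat c as a member.
    c = complex(x, y)
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            return False
    return True

# Render a coarse grid the same way the SQL query marks cells.
rows = []
for r in range(20):
    y = 1.0 - r * 0.1
    rows.append("".join("*" if in_mandelbrot(-2 + col * 0.1, y) else " "
                        for col in range(30)))
print("\n".join(rows))
```

Swapping `in_mandelbrot` for a different recurrence (the Julia set's, for instance) changes the picture without touching the grid-and-concatenate machinery, which is the talk's point about replacing the predicate.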
We can use this idea to test whether or not a point is part of the Mandelbrot set: we repeatedly run that formula from before many times and then check whether or not the point diverges. We then modify our query from before, replacing the bitwise AND condition with a test, for each point, of whether or not it diverges to infinity; and if you now run this query, you get the Mandelbrot set. Now, the really neat thing here is that there's actually a whole family of fractals that use the same general approach, known as escape-time fractals, which are all based on running a recurrence, a formula, over each point and using that to determine whether or not the point is part of the fractal. One example is this fractal from the Julia set; it looks something like this, and it uses a very similar formula to what we just used. We change our SQL query like this, and we get the fractal. There are a whole bunch of different fractals that you can generate using the same approach; I don't have time to go through them, but this one relatively simple SQL query can actually generate a whole bunch of different fractals. In addition to the escape-time fractals, there are whole other kinds of fractals too, so let's see if we can generate another class. One interesting class is self-similar fractals, which are described by taking an initial pattern and repeatedly applying some rule to it to generate the next version of the pattern. In this case we have the Hilbert curve, which is a path that initially starts in the bottom left, goes to the top left, the top right, and then the bottom right, and you generate the next iteration of the Hilbert curve by replacing each of the vertices with a U-shaped curve. So for the second iteration it still goes from the bottom left to the top left to the top right to the bottom right, but you now have a U where each of the vertices was before, and you repeatedly do this and it gives you the next iteration of
the Hilbert curve. In order to generate this with SQL, we need some way to represent it, and there's actually a really interesting way to represent self-similar fractals, known as L-systems, which are a way of describing a self-similar fractal as a string. You start with an initial string that describes the initial iteration of the fractal, and you have some rules for modifying that string to produce the next iteration. In the case of the Hilbert curve, the initial string is A, and to generate the next iteration you replace all the occurrences of A with the corresponding string and all the occurrences of B with the corresponding string. If you run through this the first few times, you get this sequence of strings, and the string actually describes a path that produces the Hilbert curve: if you interpret all the F's as moving forward, all the pluses as turning to the right, and all the minuses as turning to the left, this path gives you the path of the Hilbert curve. So now, really crazy idea: let's try to write a SQL query that processes L-systems. If we were able to write such a SQL query, we'd be able to feed in any L-system we want and get the corresponding fractal. We can do this in a two-step process: first write a SQL query that runs the L-system, and then a second query that converts the L-system string into the fractal. Writing a query to run the L-system is surprisingly straightforward: it's a recursive CTE where the initial query is iteration 0 of the L-system, and for the recursive query we run the string replacement. In this case we're able to use a Postgres function to replace all the A's with the corresponding characters and all the B's with the corresponding characters, and as you can see, this generates the iterations of the L-system. Now, converting the string into the fractal is a lot more difficult. We can do it in a three-step process: first converting the string into a set of line segments by following the path.
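The L-system rewriting step can be sketched in Python. The Hilbert-curve rules below are the commonly published ones (the talk's slide isn't visible in the transcript, so treat them as an assumption), and each pass rewrites every symbol in parallel, which is the usual L-system semantics:

```python
# Hilbert-curve L-system, as commonly published: variables A and B,
# constants F (forward), + (turn one way), - (turn the other).
RULES = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}

def expand(axiom, rules, iterations):
    # One pass corresponds to one recursive-CTE step: every symbol
    # in the string is rewritten simultaneously.
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(expand("A", RULES, 1))  # +BF-AFA-FB+
print(expand("A", RULES, 2))
```

Feeding in a different rule table (the dragon curve's, say) produces that fractal's path string instead, which is exactly the "plug in any L-system" idea.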
Then we convert the line segments into marked points, and then concatenate all those points together, similar to what we did before. Here's a CTE that calculates the line segments. I don't have time to go through it, but the rough idea is that it uses a recursive CTE to iterate through the string; it maintains a current line segment and current direction, and based on what character it sees, it updates that state: if it sees an F, it takes a step in the current direction and emits a new line segment, and if it sees a plus or a minus, it changes the current direction to turn to the right or to the left. Then, for marking the points, we can use this query here. It works by checking whether the point is an endpoint of some line segment; if so, it marks it with a star. If the point is a midpoint of some line segment, it marks it with a bar or a dash, based on whether it's a vertical or horizontal line segment; otherwise it marks it with a space. We use the same query from before to concatenate all of this together and form the final fractal. So this is what the full query looks like: it's a 50-line SQL query, so depending on how you think about it, that's either a very large SQL query or a very small program, and when you run it, you get the Hilbert curve. Now, as I mentioned before, we can not only get the Hilbert curve; we can feed in any L-system we want and get the corresponding fractal. The dragon curve is a very well-known self-similar fractal; if you've read Jurassic Park, it's at the beginning of each of the sections. We can just change the replacement rules, and we get the fractal. And the same thing here: here's another fractal, and again it has its own L-system, which we can just plug in to produce the corresponding fractal with SQL. So, using just a few simple SQL tricks, we're able to write queries that we can change just slightly to produce tons of different fractals.

We have unconferencing now, and what Janice did was she took a
picture of the unconferencing sign-up sheet and typed it up, and we're going to project that onto this screen in a bit. We would like everyone to be back by 4:10, so that gives us about 40 minutes for unconferencing. And if there are any unconferencing sessions that didn't have a location written on the sheet, please go to the unconferencing sign-up sheet that was over there outside, so you can find your people. Another thing: there were some ideas that were outside of the conference hours, such as, I think, a rock-climbing one for today; for those, I would say consult the sheet at the end of the conference to see where to go or who to contact. So thank you. Speakers in the next block, please come here for the AV check, and the conference organizers, please come to the podium.

If everybody could get settled, we're starting our next session. Welcome back to bang bang con; this is the last session of the day. I hope you had a great time unconferencing. We have three great talks lined up for this last session, and we will start with Ori, who is going to talk about microwaves.

Yeah, so, I'm here to talk about microwaves, specifically smart microwaves. When I say smart microwaves, I'm sure many of you think internet of things; if you're charitable, you think of internet of things that way, and if you're less charitable, you may think of it that way. I'm more in the second camp: I'm not a big fan of adding complexity to devices gratuitously. So why am I working on this? Well, because I think we're actually doing something kind of interesting: our devices can actually direct heat inside the microwave. If you've ever tried to cook things in a microwave, you probably haven't tried raw chicken, because it comes out terribly. This is an example; you can't see it, but the yellow wires are temperature probes, and the lower one is reading around 95 degrees while the upper ones are reading around 30 degrees. Not food safe. Do not eat. In our device, we can
actually cook a piece of chicken fairly evenly and consistently, keeping it juicy and fairly tasty, and we can do that every time. So how does that work? Oh, and also, we can cook two different foods to two different temperatures. This graph, I forgot the important part: this graph is actually showing a cooking run of salmon and asparagus. The salmon is the set of blue lines, and we wanted it to be targeting a minimum of around 65 degrees and a maximum of 70 degrees. The asparagus is the red lines; as you can see, it's less tightly grouped. That's because asparagus has lots of thin parts, spiky parts, and so on, so by nature it's going to be harder to get the control we want, but we did a pretty good job here. So how does this work? Well, before I go into that, I'm going to quickly review how a microwave actually works. In a microwave you've got a few major parts: you've got the magnetron, which actually makes the microwaves that make things get hot; you've got the waveguide, which takes the microwaves and moves them over to the antenna; and the antenna emits the microwaves into the cavity. Once they get into the cavity, they bounce around and create a standing wave pattern, which creates stable hot and cold spots, and that's why you get uneven cooking in a microwave. Our device is basically the same thing. We've got some competitors, and they're trying to do this with solid-state microwave technology, but we're not, because it's expensive: it's around a dollar a watt, so for a regular microwave you'd be looking at at least $1500 just for the emission, not even counting any of the other parts. What we did was build this top module board, where each one of these paddle things is an antenna, and the antennas can create different standing wave patterns in the cavity. So that looks like this. I'll give you a quick overview of what you're looking at before I play the video. What you're looking at is a fiberglass board with a bunch of LEDs on it, I'm sure you couldn't tell, and each LED has a
little antenna attached the antenna picks up the microwave radiation and lights up the LED if there's power on it this isn't a great visualization for two reasons first off when you take the board out the interaction with the microwaves means that the hot spots completely move around so the hot spot that shows up with the board is not the hot spot that you get without the board and the second one is that the LEDs are somewhat non-linear which means that the LED brightness doesn't map very well to the temperature that you get but it still makes a really cool demo and gives you an idea of what sort of thing we do so here's the video as you can see we get fairly divergent with hot spots and cold spots and what can we do with this we want to start doing function fitting so if you think back to calculus class you may remember Fourier series and these are not exactly what we use but they're the inspiration so if you add up you can create any arbitrary function using a linear basis set of other functions in this case sine waves a single sine wave you get well a sine wave something that looks kind of like your goal which in this example is a square wave add up 15 of them you get something that's looking pretty close add up an infinite amount and you get that shape well we don't actually have a sine wave generator so how do we deal with that well the reason that this works is because the basis functions that you use for a Fourier series Chebyshev polynomials Taylor series or any of these other similar things form a linearly independent basis set we can kind of break things up into and that's because in a linearly independent basis set in a vector space well we can kind of break this food up into a set of chunks and treat that as a vector so we can start to kind of say well we've got these function things can we kind of massage them filter them down to a basis set or a basis ish set so we have them we pick the most useful ones and normally we'd have more than three but I've 
only got so much space on the slides. So we take the average rise in temperature versus rise in time and shove those into vectors. Now that we've got them in vectors, we can add them up and operate on them like regular vectors. So: we have a target temperature, and we have a set of rise-in-temperature-versus-time vectors. How do we solve for which ones we should use, and for how long? All we do is toss it into the non-negative least squares algorithm. Non-negative least squares essentially takes the expression Ax - y and finds the non-negative value of x for which that expression's magnitude is smallest. In this equation, A represents the set of all resultant vectors (the dT/dt vectors we were talking about), x is the amount of time we spend in each configuration, and y is our target temperature.

Here's an example with the previous values. The first column is the first vector from above, the second column is the second vector, the third column is the third vector, and we want to get the temperature up to 65, 65, 65 in the first three squares and 85 in the other two. We toss it into the non-negative least squares solver, and it says: spend 30 seconds in this configuration, 75 seconds in this configuration, and 27 and a half in this configuration. Because these three vectors are numbers I just kind of pulled out of thin air, they aren't exactly great; the actual error if you run through this is something like 20 degrees off our target, but for three vectors that I made up, it's not too bad. In reality, it turns out that with many more vectors (I don't know if they want me to tell you how many, but many more) and across actual foods, I think we usually get down to about 5 degrees off target in any realistic situation. Obviously, if you have a piece of really cold water and a cup you want to boil right beside it, we can't do that; there's physics that we have to deal with.
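The solve just described can be sketched in a few lines. This is my own toy illustration, not the speaker's code, and the matrix entries are invented the same way the talk's were: each column of A is one antenna configuration's heating rate per food zone, and we solve for how many seconds to dwell in each configuration. The numbers here are chosen so plain least squares already yields non-negative times; with real, noisy data you would use a true NNLS solver such as `scipy.optimize.nnls`.

```python
import numpy as np

# Rows = food zones, columns = antenna configurations.
# A[i, j] = heating rate of zone i in configuration j (degrees C per second).
# These values are invented for illustration, like the ones in the talk.
A = np.array([
    [0.5, 0.1, 0.0],
    [0.1, 0.4, 0.1],
    [0.0, 0.1, 0.6],
])

# y = desired temperature rise for each zone (degrees C).
y = np.array([30.0, 40.0, 30.0])

# Solve A @ x ~= y for x = seconds spent in each configuration.
# (In general you'd use scipy.optimize.nnls to enforce x >= 0; this
# toy system happens to have a non-negative exact solution.)
x, *_ = np.linalg.lstsq(A, y, rcond=None)

print("seconds per configuration:", np.round(x, 1))
print("achieved rise per zone:", np.round(A @ x, 1))
```

With more configurations than zones the system is overdetermined, which is exactly the situation where the least-squares machinery earns its keep.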
But if you want to get food to different realistic temperatures, we can do a pretty good job.

Now, that's the easy part. The hard part is: how do we actually replay these vectors? How do we know we can use them, and how do we get them? It turns out that food changes the behavior. If you take a piece of chicken, put it in, then swap in a steak, the steak has a different shape, it interacts with the waves differently, you get a different standing wave pattern, and... oh shit, it doesn't work anymore.

The first thought people have is: can't you just simulate it? No. First off, to simulate it you would need a very accurate physical model of the food, and I don't think we want to hire someone to create an accurate 3D model of your piece of chicken before you can cook it. It also turns out that the simulation itself is slow: simulating a single configuration takes something like 10 minutes, so doing enough for a full cooking run would probably take, oh, all afternoon, and then you could cook it, and it would cook really quickly. Not practical.

The next thing people ask is: can you throw machine learning at it? The answer is also no. Machine learning likes problems that are smooth, in a way: where you can take a step and get closer to the solution. The configurations don't work that way. What happens is, as you move a paddle, you'll get the same configuration with the hot spot here, and then suddenly there's an inflection point and the hot spot teleports, or the standing wave pattern completely shifts. You don't get a smooth transition. There's no underlying structure that we know of for the neural net to pick up; I mean, there is one, but we don't know how to make neural nets pick it up, and it's not clear that would be faster than simulating. So, open research topic: if anyone knows how to do it, I'm listening.

What we actually do is just reuse the configs. You cooked something last time and we got these heating patterns? Well, cool: if you put the same, or similar enough, food in the same location, we'll get the same heating pattern. And it turns out there's actually a pretty big window where food is "same enough" to replay, so you don't have to get an accurate model of each chicken breast; you just need to know it's close enough. In practice it seems to work reasonably well if you have a constrained enough set of locations for the food.

So in the end, here's the result. This is another video where we're going to heat the different cups to different temperatures. First the red one starts boiling, then we take a temperature measurement. I don't know if you can read it, but we've got roughly a 25-degree (25 Celsius) difference between the two cups. Put in cold water, swap it out, same thing, and now the green one boils first. Sure, this is easy mode, but it's still a good demo. Cool. And I realized, as an afterthought, that some of you might wonder what this device looks like, so I threw in a picture; last-minute slide change. That's it.

Our next talk is by Nick Piesco, who is going to tell you about this one simple 600-year-old trick to make your website more accessible: use the technology!

I should have put another exclamation mark. This talk title already had an exclamation mark, so I felt like I was cheating a little bit when it came to the CFP. My name is Nick, I'm a front-end developer at Xero in beautiful windy Wellington, New Zealand. I'm Nick Piesco on all of your favorite social media, and maybe some you don't like; I don't know, most of you. So check that out.

This is an accessibility talk, so I feel I'm contractually obligated to start with this hopefully rhetorical question: why is accessibility important? We all agree empathy and inclusivity are good things? Yeah? Cool, all right, sweet: I don't have enough time to explain why. Next slide.

So that we have a shared set of accessibility goals, the community developed the Web Content Accessibility Guidelines, and they're organized around four principles. Your UI has to be: perceivable, by the user's available senses; operable, by the user's chosen input method; understandable, and that goes for both the content and how to use the thing; and robust, usable by a variety of rendering engines and assistive technologies. We're going to focus on one little tiny part of the first one, perceivable: specifically, color contrast.

At level AA, kind of mid-range accessibility, you want to hit a contrast ratio of at least 4.5 to 1, except for large text, where you have to hit a ratio of 3 to 1. (By the way, this slide is text on an image. That's not accessible; don't do this in your product.)

This is important because not only are there users with the usual inherited color vision deficiencies, but as people age, their visual acuity isn't quite what it used to be, and worldwide the fastest-growing segment of the population is age 60 and over. So between the people who have trouble distinguishing colors and the people who just need a little help seeing better, we can spitball that about 10% of the world's population, or of the users of your product, can benefit from better color contrast.

Now, when we talk about color contrast, we really mean luminance contrast, brightness contrast. That's for a couple of reasons. First of all, there are a bunch of different kinds of color vision deficiencies, so if you work around one of them, that doesn't necessarily mean that somebody with another one is going to benefit. And also, it's just easier to see the difference between light and dark than it is to see the difference between colors: the cells in your eyes that detect color need a lot more light to work than the ones that detect the difference between light and dark do.

This is a 10-minute talk; we don't have time to talk about the science of vision, but it's in the longer version of the talk. Specifically, the science of armory. Yeah, these things: coats of arms.
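The 4.5:1 and 3:1 thresholds mentioned above come from WCAG's relative-luminance formula, which is straightforward to compute yourself. This small sketch is my own, not from the talk: it linearizes sRGB channels per the WCAG 2.x definition and returns the contrast ratio between two colors.

```python
def _relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def linearize(channel):
        c = channel / 255.0
        # Undo sRGB gamma encoding (WCAG's published piecewise formula).
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    # Green contributes most to perceived brightness, blue the least.
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two sRGB colors: 1.0 (none) to 21.0 (max)."""
    lighter, darker = sorted(
        (_relative_luminance(fg), _relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))  # 21.0
# WCAG AA asks for >= 4.5 for normal text, >= 3.0 for large text.
```

The ratio is symmetric, so it doesn't matter which color you call foreground and which background.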
I'm going to focus on the English heraldic tradition, for two reasons: first, that's the one I know the most about, and second, those are the rules I'm subject to as somebody who lives in New Zealand, which is a Commonwealth country that doesn't have its own heraldic authority. (You may have heard this; it's true, it's a thing. I also had to cut that out of the short version.)

You may have heard this referred to as a "crest", like the place at the renaissance fair that prints out your family crest. That's not technically true: the crest is just the bit that sits on top of the helmet. For these purposes we're going to focus on the shield, the most important part of the arms.

Now, arms have two main qualities: they're associated with a person, and they're easy to identify at a distance. People have been decorating things since time immemorial. Ancient Greek soldiers decorated the shields they carried into battle; here's what a few of them looked like, and some of them look kind of heraldic, but they're not personally identifiable. It's more like a t-shirt or a pair of earrings: just something you saw that looked cool, not something that identifies you as you. We kind of continue that decoration theme, but we don't really see that one-to-one relationship until the early 1200s, thanks to these closed-face helmets. Think about it: you've got a bunch of people lined up in front of you in coats of armor and you can't see their faces, so if they can use a different shield every day, how are you really going to know who's who? A little bit important.

So, heralds (hence the term "heraldry") were the people in charge of organizing tournaments, so it was only natural that they'd be the ones to keep track of who bears what arms. We start seeing the first rolls of arms, the lists of who bears what arms, around 1240. This is the Dering Roll, one of the earliest ones, from around 1270 or 1280.

Now, at this time people could just kind of start using any arms they wanted, which is only really sustainable to a point, right? If we want to avoid impersonations and namespace collisions, we need a single source of truth to keep track. So we have the offices of the kings of arms, created starting in the late 1200s. In 1484, King Richard III organized them into the College of Arms, the body that's responsible for all things heraldic in the UK, and the same body performs the same function today. The College of Arms got a boss in 1673: the Earl Marshal, the highest-ranking hereditary non-royal office in the United Kingdom. No arms are granted without his authority. This guy with the fabulous colors is Henry Howard, the 6th Duke of Norfolk, the first Earl Marshal to oversee the College of Arms, and this is Edward Fitzalan-Howard, the 18th Duke of Norfolk, who oversees the College of Arms today. If you apply for arms, you need to get this guy's permission. I'm showing you his picture to prove a point: the symbolism and artwork have evolved over the past several hundred years, but the rules remain the same.

So check out these arms. These could conceivably be from four, five, six hundred years ago; they were actually granted in 2017, to a gentleman named Robert Pitcher from South Yorkshire. And as you may have noticed, heralds, much like people in tech, are notorious punsters: look, you have three pitchers on the arms. Arms like these are called canting arms; they show up among the earliest arms, and it's a tradition that continues today.

Every coat of arms, from the first ones we started writing about in the 1200s down to the ones that come out today, is described by the same language: blazon. It's a specialized language used to describe the composition and arrangement of symbols on a heraldic device. In the web world we have something similar: HTML and CSS, specialized languages used to describe the composition and arrangement of stuff inside a web browser.

So when you're granted arms, you're only granted the blazon, the description of your arms. Your letters patent, the thing you get, has a really nice illustration of your arms, but anything that accurately depicts the blazon is technically correct. You can think of this like rendering engines: there's a spec they're meant to comply with, usually, but they're not always the same, and they're not necessarily wrong, they're just different.

So anyway, blazon. It has its origins in the 13th century, so it sounds decidedly like Middle English with a lot of French mixed in. 700 years ago we didn't have the wall of pigments at Home Depot, so we have a limited color palette to work with. Tinctures fall into three general buckets. We have our colors, which are paint colors; our metals, which were frequently actual metal, gilt and silver, or, due to budgetary restrictions, white and yellow paint; and furs, which, yes, were frequently literally furs, or stylized if you're painting them.

These are the arms of Grosvenor, early arms: "Azure, a bend Or". Azure: a blue background. A bend: a stripe going from the upper left to the lower right, if you're looking at it. Or: yellow, or gold. And this language scales up, too. These are the arms of Winston Churchill: we've got quarters on quarters, we've got inescutcheons, cantons. So even though Winston Churchill lived in the 19th and 20th centuries, if you gave this blazon to a heraldic artist from the 15th or 16th century and asked him to paint it, you'd get something pretty similar.

So once again, to recap: arms are associated with a person and easy to identify at a distance. Now, in tech we have our computery way of measuring how easy it is to see something: the contrast ratio. The same thing developed in heraldry, and it's one of the most important rules, if not the most important rule. The one simple 600-year-old trick that makes your website more accessible: it's called the rule of tincture. It first comes up somewhere between about 1410 and 1450, depending on who you ask, but the best-known version, which of course I'm sure you're all familiar with, comes from the Welsh scholar Humphrey Llwyd in 1568: metal should not be put on metal, nor color on color. So: no metal on metal, no color on color.

Let's go back to our tinctures here. Colors, your paint colors, are a little bit darker; your metals are brighter, lighter. So what do you get by not putting a color on a color or a metal on a metal? Bingo: accessible color contrast.

All right, so let's check it out. Remember, 4.5 to 1 is the contrast ratio we're shooting for here. We have argent on azure: we hit 5.32 to 1. Pretty nice, and it's a metal on a color. Let's put a color on a metal: gules on Or, 4.1 to 1, AA for large text, not bad. Now let's put a color on a color: sable on purpure, 1.8 to 1. Doesn't work, right? Even if you squint, you can kind of make out the two on the left, but maybe not so much the one on the right.

So let's talk about something more webby. We have this counter moving down a gradient, and hopefully this thing will load eventually... hey, there we go. When it's on the darker side, which is clearly a color, it uses white text so you can read it, and when it goes to the right-hand side, the background is a little bit lighter, so for the text to be legible it has to be dark: a color.

Now, England does take the rule a little bit more seriously than the continent. It's broken sometimes, but not very often: scholars have studied tens of thousands of arms and found that fewer than 2% of them violate it. For something that really wasn't codified until we were a couple hundred years into it, that's about as hard and fast a rule as you can kind of put together, right? And even if we do break the rule sometimes, we can always get better. When it comes to accessibility and inclusive design, incremental improvements are still improvements; we just need to set a high standard for ourselves and keep working toward it, so we can make the web a better place for everybody. Thanks!
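The rule of tincture is simple enough to state as code. This sketch is my own illustration, not anything from the talk: the sRGB values assigned to each tincture are hypothetical (real arms vary by artist, so these hex choices are mine), but the metal/color classification is the standard one, and the rule itself is just "the two buckets must differ".

```python
# Hypothetical sRGB values for a few heraldic tinctures; artists vary,
# so these hex choices are my own illustration, not a standard.
TINCTURES = {
    "argent":  ("metal", (0xFF, 0xFF, 0xFF)),  # white / silver
    "or":      ("metal", (0xFF, 0xD7, 0x00)),  # yellow / gold
    "azure":   ("color", (0x00, 0x38, 0x8C)),  # blue
    "gules":   ("color", (0xC8, 0x10, 0x2E)),  # red
    "sable":   ("color", (0x00, 0x00, 0x00)),  # black
    "purpure": ("color", (0x66, 0x02, 0x3C)),  # purple
}

def rule_of_tincture_ok(fg: str, bg: str) -> bool:
    """Llwyd's rule: no metal on metal, nor color on color."""
    return TINCTURES[fg][0] != TINCTURES[bg][0]

print(rule_of_tincture_ok("argent", "azure"))   # metal on color: True
print(rule_of_tincture_ok("sable", "purpure"))  # color on color: False
```

Pairing this with a WCAG contrast computation on the RGB values reproduces the talk's point: the pairs the rule allows tend to clear the contrast thresholds, and the pairs it forbids tend not to.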
I have great speaker notes. So, for the final talk of today, we have Arshia Mufti, who is going to talk about how DNS doesn't work.

Hi, can everyone hear me? Cool. So my name is Arshia, and I like to do a lot of things on the internet, some of which I will talk about today. I like reading Harry Potter slash fiction, I like tweeting, I like looking at Rust documentation, and all of this is possible because of the world's largest, and my personal favorite, distributed system: DNS, which stands for Domain Name System, for those of you who didn't know. So this talk is going to be about DNS.

Before I talk about the design and structure of DNS, I want to talk about Elizabeth Feinler, who in the 70s and 80s managed something called the Network Information Center. The NIC was basically a reference for the state of the internet as it existed at the time, which was basically a bunch of universities and government projects. Often people wanted to know who was on the internet and what the addresses of machines on the internet were, and that's what the NIC provided. But soon email came along, everyone was getting on the internet and trying to send messages to one another, and there was this growing need to let people know, if they wanted to reach a certain domain, how to get the address of that domain. Before, the way it worked was that people actually published handbooks and mailed them out to whoever needed that information, but as email came along and the internet grew, Elizabeth was like, hey, let's automate this, because we're all engineers here.

So this is the automation they did. There was a file called hosts.txt, and it was copied over the network onto every machine that was on the internet. It basically contained a list of every domain, like CMU-11, ETAC, FNWC, and mapped each one to an address. This eventually grew and scaled up and became better and more fault tolerant and more resilient, to become the domain name system that we know and use today.

So this is how DNS works: my laptop is like, "hey DNS, I need the address of tumblr.com", and DNS is like, "here is a long 32-bit number" (possibly an even bigger, not-32-bit number, if we move over to IPv6 quickly enough), "this is the address of tumblr.com", and the slash fic loading begins. Side note: the addresses in the hosts.txt example are really weird; they're not IP addresses. That's because this was pre-TCP/IP: they used a protocol called NCP (I don't know what NCP stands for), and the address space of this protocol was only 256, because I guess they didn't think they needed more addresses than that.

So, a couple of things about DNS. The first is that it's relatively fast: if my laptop needs to first look up the address of something and then go to that address, you want that first step to be as fast as possible. The second is that it's pretty reliable: if you can't connect to a web server or load a website, it's probably not because there's a DNS error, but more because the web server itself has a problem. And the third is that it's pretty correct: if my laptop wants the address of tumblr.com, it's probably going to get the address of tumblr.com, and not, like, twitter.com or facebook.com.

And this is pretty interesting, because shit is messed up behind the scenes: machines break, data is missing or far away or not available, connections drop all the time. So basically what I'm trying to say is that distributed systems like DNS are lying to us all the time, because they're quietly breaking and failing and crying, but at the same time, all of this is possible. So DNS seems to work and not work at the same time, and this is what my talk is going to be about.

Say I go to Waterloo (still a student), and say I want to go to cs.uwaterloo.ca to look something up. The other thing that makes this work is that addresses are stored in a hierarchy. At the top we have something called the root servers, which store the addresses of all the TLDs: .ca, .com, .gov, .eu, et cetera. A level below that, there are the TLD servers, which store the addresses of all the domains that end with that TLD, and so on.

There used to be 13 root servers in the world, labeled A to M (because programmers), and 10 of the 13 were in the US (because politics). This was kind of good at the time, because if you had 13 machines and a couple of them went down, that was kind of okay, right? You could still reach some of them. But this isn't good enough now, because having the weight of the whole internet rest on 13 servers is just bad. So in reality, root DNS servers are now anycast, which means that instead of one IP address mapping to one machine, they map to multiple machines. This feature of DNS is super essential because it adds redundancy, and that brings a couple of advantages. The first is that it makes DNS resilient: even if tens or hundreds of the thousands of servers that are up go down, that's still fine, because DNS queries can still be resolved and we can use Twitter, which is what matters here. The second is that it makes DNS fast, because having hundreds or thousands of machines instead of 13 means the average request has to travel less far to resolve a query.

Now, speed is really, really important to DNS, and anycast does provide that, but using anycast still means you have to talk over the network. So there's one major strategy that we use to add speed to a system: caches. Love caches. As we saw before, DNS queries start with the root name server, go on to the TLD name server, and then all the way down (oops), but each result that the DNS server gets is actually cached, at every level. The root name servers' addresses don't change very often, so they're stored for a very long time; the .com TLD servers' addresses change every so often, so they're stored for a long time.
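That per-level caching is governed by each record's TTL (time to live). Here's a tiny sketch of the idea (my own illustration, with made-up names, a documentation-range placeholder address, and arbitrary TTLs, not anything from the talk): a resolver-style cache serves an answer until its TTL expires, then falls back to a fresh upstream lookup.

```python
import time

class DnsCache:
    """Toy TTL cache in the spirit of a DNS resolver's record cache."""

    def __init__(self):
        self._records = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl_seconds):
        # Store the record with an absolute expiry time.
        self._records[name] = (address, time.monotonic() + ttl_seconds)

    def get(self, name):
        """Return the cached address, or None if missing or expired."""
        entry = self._records.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if time.monotonic() >= expiry:
            # Expired: a real resolver would now re-query upstream.
            del self._records[name]
            return None
        return address

cache = DnsCache()
# A record cached for 3600 seconds, i.e. a one-hour TTL.
# 192.0.2.1 is from the reserved documentation range, not a real host.
cache.put("example.com", "192.0.2.1", ttl_seconds=3600)
print(cache.get("example.com"))
```

Stable records like the root servers' addresses get long TTLs; individual sites pick shorter ones so their address changes propagate quickly.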
cats.com might change more frequently, so its address is stored for a shorter time. Here's a fun little example: I used a tool called dig to resolve cats.com. This is the IP address from the local name server I was connected to, and the 3600 up there in the DNS record means that that name server had cached the record for cats.com for 3600 seconds, or one hour.

Now, speed is a really fundamental part of DNS. You might think that DNS would just be, like, slow if we took caching out, but it would legit break if every cell phone and server and internet-connected toaster issued queries to the root name servers to resolve DNS queries. Caches are really, really integral to DNS being what it is. If caches didn't exist, first, there would just be too much traffic, so servers would get overwhelmed and queries would time out; second, applications would be way slower, since applications first have to talk over the network to get the address of the server, and then use that address to actually load what they want.

Now, all of this work is being done by the resolver of whatever network you're connected to. Why is there a coffee... oh right, yeah, I'm in a cafe, that's why I'm drinking coffee. So the point I want to make is that as we resolve DNS queries, what our computers are doing is trusting the resolver of whatever network we're connected to to tell us the truth. I trust that when I ask my Chrome browser, or whatever, to load cats.com or tumblr.com for me, that's exactly what I'm going to get. So besides caches, and besides the redundancy provided by anycast, the third, unspoken component of what is really making DNS work in this scenario is trust. And trust is really not something we can afford to give out, because servers lie all the time: lying DNS servers can intercept requests and responses, steal information, spread viruses, and so on, especially if you're on an unencrypted connection, which DNS by design allows. And people have
taken advantage of this in the past. In March of 2014, people were not tweeting great things about the Turkish government, so the government of Turkey got mad and blocked Twitter in all of the country, and they did this at the DNS level: they configured the ISPs in Turkey to direct twitter.com to, like, a Turkish website instead of actually Twitter. In this case, people were able to get around it because they pointed their machines at Google's public DNS server, whose address they spray-painted all over the city, which I thought was pretty great.

So to end this talk, I want to share some thoughts I have about the design of DNS. DNS came around because everyone was getting on this hot newfangled technology called email and wanted to use the internet, so the concern that the Network Information Center and Elizabeth Feinler had was scale, not security. But trust without security has not worked out so well for us, because malicious actors have used it to create DDoS attacks and man-in-the-middle attacks and to poison DNS caches, and this has led to serious incidents in the past. So I'm really interested in the solutions people are creating and proposing to directly tackle this.

A really simple, or rather a very common and effective, solution is TLS. TLS is what's used whenever you browse with HTTPS instead of HTTP, and it does two things: (a) it encrypts messages between servers, so even if information is stolen it can't be decrypted, and (b) it requires servers to prove who they are, using math and cryptography and fancy stuff; the point is, you get a guarantee of who you're talking to. The second is this really interesting thing that I learned about and read a lot about on the Cloudflare blog, called DNSSEC. DNSSEC is a set of security extensions to DNS: it sits on top of DNS and adds cryptographic signatures at each step of the DNS lookup, to authenticate every DNS record sent back and forth as queries get resolved. It was introduced in 2005, and the people who have been trying to get everyone to adopt it have been dealing with a lot of pushback, because you need to provide backward compatibility, and it's extremely hard to deploy given how widespread and big DNS is and the scale at which it operates. But I'm really interested to follow the journey of DNSSEC and see how it's being adopted, and I really recommend checking out Cloudflare; they've written some really good blog posts on exactly what DNSSEC does. And that's it.

That was amazing, thank you! Thank you, that was amazing, and thank you to all of our speakers today. Can we give them all a big round of applause? And I have two more quick thank-yous. I'd like to thank our sponsors again: our phenomenal sponsor UC Santa Cruz, thank you for the space; our excellent sponsors Stripe, GitHub, Twilio, and Airtable; and our awesome sponsor, the Recurse Center. I'd also like to give a quick shout-out and thank-you to Cindy and Shelby from Confreaks for making our live streaming and recording possible. Thank you. Cool.

I have a few logistical notes, and then we will be leaving here for today. For folks who signed up for dinner groups with strangers, which was a not-mandatory part of the unconferencing, please meet near the tree outside, just outside the door near the registration table. For folks who are interested in unplanned dinner groups and may not have signed up, please meet in the courtyard and facilitate amongst yourselves. The shuttle buses are leaving at 5:05 p.m.; that's pushed back by five minutes from the 5:00 p.m. on the website, so you have an additional bit of time to get out there. They leave from Baskin circle, which is also where they dropped you off this morning if you took them, and they're going to the Hampton Inn as well as to downtown.

On your way out of here, take a look under your seat, take a look under the seat to your right, take a look under the seat to your left, and pick up the trash and trash it in a trash can on your way out. Excellent. While you're doing that, take a look around your neck: maybe you have this lanyard on. Remember to bring it back tomorrow. We're actually very short on the inserts, so if a few of you forget, that's fine; if everybody forgets, that would be less fine. So please remember to bring that back.

And a very serious reminder that the code of conduct for bang bang con West applies at all times at all events around this conference. If you're hanging out in physical space with people that you've met at this conference, our code of conduct applies. If you are looking to make a code of conduct violation report, or if you would simply like to talk to an organizer about a code of conduct violation, you can use the contact form on our website at bangbangcon.com; there's a link for conduct.

Breakfast is tomorrow morning starting at 9 a.m.; it will be here, and the keynote starts at 9:30 a.m. So excited to have you back tomorrow. Thank you, everybody!