I'm Luke Imhoff. I write the IntelliJ Elixir plugin for JetBrains IDEs. I also run the Meetup here in town. And I'm going to show you how BEAM files work. So for a long time, IntelliJ couldn't do completion for Enum, because most people install with Homebrew and you only get .beam files. A lot of people said to use the Alchemist server and just have it do the work, but I didn't want that, because I wanted to be able to use the index and cache built into JetBrains IDEs. So instead, what I decided to do was just start decompiling everything. And so now I can decompile stuff, and it turns out that the BEAM format is not that complex. It's actually based on the Interchange File Format that Electronic Arts created for the Amiga in 1985. So if you ever became a programmer like me because you wanted to get into video games: we are all video game programmers now. The format is exactly as IFF is described, except that it has a special marker, "FOR1" with a "BEAM" form type, instead of plain "FORM", to say what's going on. It's a very nice, very compact binary format. It has sections; it's like a TLV format. All the names for functions are in an atom table, and everything is indexed, so every atom and every function name is just an index into that table. There is some tricky stuff about the actual format, though. Macros are stored as functions with an all-caps "MACRO-" prefix on the name. And there are four formats we have to support when decompiling: infix operators, because you need a left and a right; prefix operators, because sometimes they don't work as infix, and the reason for this is the prefix forms of plus and minus in the actual decompiled version of Kernel. So if you look at Kernel.+, we can actually jump in and see all the source files. We can look in Dict. There's also a hidden function in all our modules called __info__ where you can reflect on the module. I don't think anyone really uses that directly, but that's how we get module info, so you can see things like using. You can go in and look at cool stuff in Kernel.
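The chunked layout just described can be sketched with binary pattern matching. This is a minimal illustrative reader, not the plugin's actual decompiler:

```elixir
# A .beam file is IFF-style: "FOR1" <size::32> "BEAM", followed by chunks
# of <name::4 bytes> <size::32> <data>, each padded to a 4-byte boundary.
defmodule BeamChunks do
  # Returns [{chunk_name, data_size}, ...] for a compiled .beam file.
  def list(path) do
    {:ok, <<"FOR1", _size::32, "BEAM", chunks::binary>>} = File.read(path)
    walk(chunks, [])
  end

  defp walk(<<>>, acc), do: Enum.reverse(acc)

  defp walk(<<name::binary-size(4), size::32, rest::binary>>, acc) do
    # Chunk data is padded so the next chunk starts on a 4-byte boundary.
    padded = size + rem(4 - rem(size, 4), 4)
    <<_data::binary-size(padded), tail::binary>> = rest
    walk(tail, [{name, size} | acc])
  end
end
```

Running it on any compiled module (for example the path returned by `:code.which(Enum)`) lists chunk names such as the atom table the talk mentions.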
But there are weird ones, like `!`, that have to be treated specially, and special infix operators like `@`. There's also weirdness where things just aren't parsed right, so you have to surround them in parentheses to make the parse work correctly. And this is what it looks like in the real code; it's not just how I'm decompiling it. And then there's also a little weirdness where, in the code in special forms, some of these just don't work as macro names, so you end up having to inject the name of the function with unquote. So if you ever want to make a function or a macro that doesn't have a valid name for the parser, you can just use unquote to shove it in there and get a function with a weird name. And then you can call it with apply, because you can make the name an atom and make it callable that way. That's it.

All right. So the good news, when you're using Elixir and Phoenix, is that our good friend Bruce Williams, of Ruby fame, designed a library called Absinthe that allows you to stand up a GraphQL server pretty easily. So, when it comes to client-server, I think in computer science we have a few cadavers: how many people here remember SOAP and CORBA, and have used SOAP and CORBA? Yeah. So you felt the pain. So I think there's been a bit of a change from how we did Web stuff 10 or 15 years ago to now, when we need to power more advanced interfaces, in terms of how much data we have to move in and out of a client or an API. GraphQL is really a specification. It doesn't say that you have to use a database as a backend. The data could come from a service, from third-party data, a legacy database, or other databases for that matter. And it doesn't specify the protocol, either. You don't have to use HTTP.
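Going back to the unquote trick from the BEAM talk above, here is a minimal sketch. The function name is invented for illustration:

```elixir
# Injecting a function name the parser would reject, using an unquote
# fragment, then calling it with apply/3.
defmodule Weird do
  name = :"a name with spaces!"

  # You couldn't write `def a name with spaces!(x)` directly;
  # unquote lets you shove the atom in as the definition's name.
  def unquote(name)(x), do: x * 2
end

# Ordinary call syntax won't parse this name, but apply/3 takes any atom:
IO.inspect(apply(Weird, :"a name with spaces!", [21]))
```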
It could come from a WebSocket, an RPC connection, or anything like that. So the cool thing here is, with Absinthe, all you have to do in your mix file is add three packages: absinthe, absinthe_plug, and absinthe_ecto. The Ecto one basically lets you do database-type stuff and resolve some of the issues you get with GraphQL schemas, where you can traverse pretty deeply nested data, like the N+1 problem, where you'd otherwise hit the same database table over and over for the same information. Additionally, they have a plug that lets you hook in at the router level and define the routes: GraphiQL, which has been called the killer app of the GraphQL community, a really nice IDE that lets you explore your schema and your data, and then the GraphQL endpoint that you send your queries to. So once you have those set up, you can go on to defining your types. This may feel a bit redundant, because if you're working with a database, you've already defined your Ecto schema, as previous talks discussed, and here you kind of have to do the same thing. So here I'm modeling tennis events for the Australian Open. I hope we have tennis fans here. Basically, the event has an ID, a name, the year of the event, and then a series of draws that has matches and players and rankings. So here you have the ability to describe the data.
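A hedged sketch of what those type definitions might look like in Absinthe's schema DSL. The module, field, and type names here are guesses based on the description, not the speaker's actual code:

```elixir
defmodule MyApp.Schema do
  use Absinthe.Schema

  # The tennis event described above: id, name, year, and its draws.
  object :event do
    field :id, :id
    field :name, :string
    field :year, :integer
    field :draws, list_of(:draw)
  end

  object :draw do
    # Simplified; the real matches/players/rankings types are not shown.
    field :matches, list_of(:string)
  end

  query do
    field :events, list_of(:event) do
      resolve fn _args, _resolution ->
        # The data could come from Ecto, a service, or anywhere else.
        {:ok, []}
      end
    end
  end
end
```

The two routes mentioned above would then be wired up in the Phoenix router with something like `forward "/graphiql", Absinthe.Plug.GraphiQL, schema: MyApp.Schema` and `forward "/api", Absinthe.Plug, schema: MyApp.Schema` (paths illustrative).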
So I think many times we get to a point where we define the API, but then the documentation of the API falls apart, because we go to something like Swagger or Apiary to get that information, and eventually the actual API is revving much faster than the docs. And then newcomers to the project, the people who need to call those APIs or build UIs on top of them, get disrupted because they don't know what the schema looks like. The cool thing here, I'll show you, hopefully. My thing is all screwy. Of course, it worked perfectly before. So nervous. All right, let me just show you this really cool thing. So basically here I've got a schema defined. And I'm too nervous to get to the point where we type. But this is the IDE you're looking at, where basically we can introspect the schema, so I'm trying to, okay. I won't show the thing. So anyway, please take a look at it. You have to think differently, right? We've been conditioned for the last ten years to think in terms of resources and CRUD. And we have passed that point in some of the applications we're writing, because we have to span across several resources or navigate at different levels. It's not surprising to me that Facebook, the creators of React and Relay, where GraphQL came from, also designed Cassandra as a database, which much of the time is actually used as a graph-style back end to store the data. So you can traverse things based on relationships instead of just resources and their associations. So I think that's a really interesting paradigm that I wish everybody would take a look at and see what they think. I've been using it for the last couple of months or so. Absinthe, to be honest, could use a little more TLC. There's some stuff where the lack of ordered maps in the framework kind of hurts it. But anyway, thank you so much. I'll be here all week if you want to talk to me about this.
Actually, I have slides and I had a GitHub repo. I can send that over. Thank you very much.

All right, so this is a real quick historical tour. Who recognizes that? Okay, hands up, who's actually used it? More than I thought, yeah, a bunch of really old folks here. So please, when we're finished, let the old people leave first. Okay, just to be nice. So this is paper tape. It is tape that has eight data holes plus one sprocket hole, and they're in columns on the tape, all right? So the way it's read is that, in this particular case, you start on this side, the left-hand side, and that's the low-order bit, so that's one. The next one is the next bit, so that's two, then four, eight, 16, all right? So you've got eight bits going across. So how many different characters can you represent on a paper tape column? All right, two to the eighth, it's 256, all right, I'll make it easy for you. Except, being mechanical and being relatively old, it means that there were errors encountered, so they typically used the topmost bit for parity. And so you'd actually get seven bits for your data and one bit for a parity bit, all right? So far, so good. Now, you lose a couple of combinations. So for example, if you make a mistake when you're typing paper tape out and you're 18 feet into the tape, you really don't want to go back and redo the whole thing. So if you punch all the holes out, then that's not read as a character. Which is why the character 255, or actually even 127, counts as a delete character, right? Because that's how you delete on paper tape, which is wonderful. And from paper tape, we get the ASCII code sequence. So we have 32 characters which are control characters, and then whatever's left, 90-something, as your data characters, all right? And so we have ASCII, which is lowercase, uppercase, plus punctuation.
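The parity scheme just described is easy to sketch. Here, even parity: the eighth (top) bit is set so the total number of punched holes is even. A small illustrative example, not any particular historical encoding:

```elixir
defmodule Tape do
  import Bitwise

  # Encode a 7-bit character for the tape: count its 1-bits and, if the
  # count is odd, punch the top bit too, so the total is always even.
  def punch(char) when char in 0..127 do
    ones = for(<<bit::1 <- <<char>>>>, do: bit) |> Enum.sum()
    if rem(ones, 2) == 1, do: char ||| 0x80, else: char
  end
end
```

Note that 127 (all seven data holes punched) comes out as 255 under this scheme, which lines up nicely with the all-holes-punched delete character above.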
How old do you think Unicode is? Okay, how many people think Unicode is at least... well, that's going to be hard to do. Forget it. Unicode is older than at least half of you, all right? Unicode has been around since 1990. So since 1990-ish, I mean, a bit later on we actually got the standard settled, we've had more than 96 printable characters available to us. And yet we don't use them, unless we happen to be enlightened and live in the rest of the world. But here we don't use them, and that is a ridiculous shame. So I am here today to tell you to start using damn Unicode when you type, all right? And stop with this bullshit ASCII crap, all right? So, the first thing you need to do is learn your keyboard, all right? Now, most of you are on Macs, and if you're on a Mac, it's really, really easy, all right? So let me, this is TextEdit. If you don't know your keyboard, bring up the Keyboard Viewer. And I know it's small, I don't think I can actually make it any bigger, but don't worry about that. And that shows you what's going to happen when you hit a key, right? So if I hit the A key, I get an A. But if you hold the Option key down, you'll notice the little Keyboard Viewer changes, and it shows you all of the various characters that will get typed if you hit a key. So for example, if I were to hit, I don't know, S, then I get one of those German double-S things, whatever they're called, yeah? Cool. You'll notice some of the keys have orange on them. That means they're combining keys. So for example, if I want an acute accent, you'll notice that's over the E. So if I want an accented O instead, I can just type that, and then type O, and I get an accented O. All right. So if you're typing "naïve" from now on, you have no excuse, and no excuse to write "Jose" instead of "José" either. Right. Now, how often do we use these extra characters? Not enough, because English has things like punctuation. In the English language, "that" is not quoted text.
In the English language, “that” is quoted text. You earn so much respect if you send emails using curly quotes. All right? On the Mac, it is Option-square-bracket and Option-Shift-square-bracket. Similarly, you can type "the cat's" with a curly apostrophe, yeah? Option-Shift-close-square-bracket. There are an incredible number of quotes available to you on this keyboard. So as well as regular quotes, I can do things like open a guillemet, yeah? And close it. That's Option and the backslash character at the end of the keyboard there. So that kind of thing, learn it. Oh, one more thing you have to know. Option-semicolon is an ellipsis. No more three dots. Ellipsis. Please. Okay? Please. All right? Set an example to the rest of the world. Do this properly. Now, sometimes there are characters that you type quite often, but you can't actually get to them directly through the keymap. And there are all sorts of ways of getting around that. But what I do is remap my keyboard. So for example, I wanted to create characters for placeholders that were kind of unusual. So I got those guys, which are mapped to comma and dot on Option, right? And I also use arrows quite a lot, so if I use the Shift versions of those, I can get those two arrows. How do I do that? I use, on the Mac, a utility called Ukelele, where you basically bring up a keyboard like this and you tell it what characters you want when you type. So if I want to change what happens on Option-whatever, I just hold down Option, then go to that square and type the character I want. So that's pretty cool. All right. Last but not least. So here I am, and I'm typing code. This is some wonderful Elixir code about what kind of license you get depending on your age. Now, this is all in ASCII. And it's in ASCII because that's all Erlang can deal with. It's ugly as sin. What can we do about that? How can we make it prettier? Well, it turns out that, if you haven't come across this before, there is a wonderful hack.
You may have come across this idea that in typesetting, if you type some combinations of characters, and I have no idea if this font supports it, let's find out. No, it doesn't. Okay. There are ligatures. So typically, if you look at typeset text, an F and a lowercase I will be joined together. Yeah? And that's called a ligature. You take two characters and combine them into one. So here is my Elixir code. And I'm using Source Code Pro, I think, for this. Yeah, Source Code Pro. So let's change that. Let's change the font. This is Fira Code. And all I did was change the font. And it automatically detects sequences of characters, just like f-i. But here, for example, it detects a hyphen followed by a greater-than and says, oh, I can replace that with an arrow. I can replace greater-than-or-equal-to with these two. I can replace the pipe character, this one, with a pretty little triangle. If I want to do not-equals, it looks like that. The underlying source file is identical to what it used to be; it just uses ligatures to render it. Start making things look pretty, please. Thank you.

OK, this talk is about talking to your machine. When I think about technology over the last 30 years, I think most technology, or a lot of technology, is geared towards communication and being able to shrink the world, bring everyone closer. And that can be anything from only being able to make a phone call at your house, to being able to use your cell phone or SMS or any other form of communication. Communication is the great driver when I think about technology. When you think about fax machines, it's to be able to send a message and have it be on paper, as if you sent a letter. Technology, though, when you think about communication, doesn't necessarily bring everyone closer. Sometimes that technology backfires. You end up at dinner and everyone's staring at their phones.
But, you know, I think about this quote: "The future is now. Soon every American home will integrate their television, phone and computer." Does anyone recognize this quote? It's from a movie. 1996, The Cable Guy. But, you know, since that time, we've gone back to our roots. We're no longer just sending an SMS. We're talking to machines: Siri in 2011, and now Amazon Alexa in 2014. And with Alexa, we can build a lot of custom skills. We can do integrations to open it up. It's a more open platform than what Siri allows. So what does an integration with Alexa look like? Let's say we have an app called Amazing App. We can say, "Alexa, ask Amazing App for today's fun fact." So we can break this down. Amazing App in this case is our invocation name. So on Amazon, we tell it, okay, I'm making a skill, and I'm going to invoke it with Amazing App. And "for today's fun fact" is an utterance. So when you think about the way you might ask for something like today's fun fact, someone might say "what's today's fun fact" instead of "for today's fun fact". Or "give me today's fun fact". There are all sorts of ways people might ask for the fun fact. You put that information into your skill, and an intent is derived from that utterance. In this case, it's FunFact. So if we work through how this utterance gets transformed: voice data comes through, the voice command gets deciphered into an utterance, and Amazon sends a request. This is where you receive that request and generate a response. Your Phoenix API could handle this. That response gets sent back to Amazon, which validates it and then sends it back to someone's Echo or Dot, and they hear the speech. So what does that request come across as? You have the intent, and you also have this concept of slot values. So for example, if you say, "what's the weather in Boston?", Boston in that case would be the location slot, and it would look up the weather for that. Spoiler: it's got a foot of snow.
I'm glad I'm not there right now. Then there's session data. Session data is for when you've got a long-running interaction, where multiple requests and responses form an overall conversation with Alexa, and you can store state in the session data. So using the phoenix_alexa package, the request comes through and it derives these things, pulls them out, and ends up hitting a function called intent_request. That gets the conn, then the intent, and then the request, which contains the session data and any slot values that might be present. And this is where pattern matching comes into play. I can write a number of intent_request function clauses and just key in on, oh, this is the fun fact. Fun fact: a single cloud can weigh more than a million pounds. And then you can set terminate session to true, and that ends the engagement there. But then let's make it more complicated. Anybody remember Where in the World Is Carmen Sandiego? Okay. So you can build Where in the World Is Carmen Sandiego. Effectively, the way this game works is that a clue is given, and from that clue you have to guess where some henchman of Carmen Sandiego is located in the world. Fun fact: this game was created because a survey by National Geographic found one in four Americans didn't know where the Pacific Ocean was, or where the Soviet Union was. So they said, we've got to educate these people, and they made a game show out of it. So let's do that. You'd start it up with an invocation like, "Alexa, ask Where in the World Is Carmen Sandiego." And you might get a clue like: the henchman you're after is hiding out in a famous square of this capital city. Although originally named for its beauty, the native word for beauty is very close to the color it's commonly referred to by. A: Athens. B: Moscow. C: Budapest. And then we might include some session data: this is round one, and the contestant has $50.
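The pattern-matched intent handlers described above can be sketched library-agnostically. The clause shapes and payload maps here are simplified assumptions for illustration, not the phoenix_alexa API:

```elixir
# One function clause per intent: match on the intent name, slot values,
# and session data, and return the speech plus updated session.
defmodule Skill do
  # "Alexa, ask Amazing App for today's fun fact" -> FunFact intent,
  # and terminating the session ends the engagement there.
  def handle(%{intent: "FunFact"}, _session) do
    {:say, "A single cloud can weigh more than a million pounds.",
     %{end_session: true}}
  end

  # Carmen Sandiego: B (Moscow) matches the Red Square clue; bump the
  # round and pay out (the $50 reward amount is made up).
  def handle(%{intent: "Answer", slots: %{"answer" => "B"}},
             %{"round" => round, "dollars" => dollars}) do
    {:say, "Correct! Red Square it is.",
     %{"round" => round + 1, "dollars" => dollars + 50}}
  end

  # Any other answer: keep the session as-is.
  def handle(%{intent: "Answer"}, session) do
    {:say, "Sorry, the henchman slips away.", session}
  end
end
```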
You can say, "my answer is B, Moscow," and that sends a request to your app with the intent, slot values, and session data. The intent is Answer, the slot value is your answer, and the session data has the dollars and the round. And we can handle this request and respond to it very effectively using pattern matching. And you can take this further. You could make a choose-your-own-adventure book and derive where the story goes based on responses over a long session. This sort of thing opens up the door for the ways we can interact with our computers and with each other. And it reminds me of the novel ways this sort of interaction comes into play. Does anyone remember this? Twitch Plays Pokémon? No? Basically, an anonymous Australian created a way for people to play this game together on Twitch, and based on what people said, the character did different things in the game. Fun facts: 1.16 million people played it, there were 121,000 peak concurrent users, and it took 16 days to beat the game. I work for Echobind. We've had the pleasure of doing some Alexa integrations recently, and I think it really opens the door. It's kind of nice going back to speech as a way of communicating, versus text and all other manner of speaking. Thank you.

My name is Paul Kinney. I'm from Vinli. We're a connected car platform out of... and first of all, I apologize, puns are hard when you don't have a lot of time. That's practically correct. We're a connected car platform out of Dallas. We ingest lots and lots of data from different telemetry sources in vehicles, do a lot of stuff with it, and then we expose it to developers. We have a lot of APIs, and we're actually now the largest app ecosystem for the connected car. As part of that, we try to work in modern stacks.
We want our internal developers to be versed in what's out there, what's cool, what's new, so that when we're dealing with outside developers, we speak the same language. Because of that, we've tried to become a microservices shop, to make it really easy for our developers to try new things, new approaches, new languages, whatever. We started off with Node.js as our primary. Everything on Docker, all containerized, just like you were supposed to do two years ago. But we ran into an issue. We do a lot of math on very long trips, and our device collects data every single second. So for trips that are multiple hours long, there are a lot of points to go through, and a lot of that is GIS processing: take a path, figure out where the GPS jitter is, get it out of there, simplify the path, things like that. The problem is we were doing this all with Node, which was great for a while, until we got to the point where Node would stop the event loop to do a lot of math, and during that process lots of things go wrong. We use Kubernetes; Kubernetes times out, starts killing off pods, lots of things go bad. So we knew we needed to find something better. We loved the concurrency of Node; we didn't want to lose that approach, and we didn't want to lose the modern nature of the language and being able to work with it easily. So we landed on Elixir. And honestly, Elixir is not a language a lot of people think of for heavy GIS math, but because of the nature of computational geometry, there are a lot of arrays and a lot of walking through data serially, which are great things for functional languages. So it actually ended up working out really well. The individual processing time for each of these trips dropped by probably 50% to 80%.
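The jitter-removal step mentioned above could be sketched naively like this. The speed threshold and the distance approximation are assumptions for illustration, not Vinli's actual algorithm:

```elixir
# With per-second samples, a point that implies an implausible speed from
# the previous kept point is probably GPS jitter, so drop it.
defmodule Jitter do
  @max_meters_per_sec 70.0

  def filter([first | rest]) do
    rest
    |> Enum.reduce([first], fn pt, [prev | _] = kept ->
      if distance(prev, pt) <= @max_meters_per_sec, do: [pt | kept], else: kept
    end)
    |> Enum.reverse()
  end

  # Equirectangular approximation in meters; fine at city scale.
  defp distance({lat1, lon1}, {lat2, lon2}) do
    x = (lon2 - lon1) * :math.cos((lat1 + lat2) / 2 * :math.pi() / 180)
    y = lat2 - lat1
    :math.sqrt(x * x + y * y) * 111_320
  end
end
```

A single 1-degree spike in a trace of otherwise adjacent points gets dropped, while the real points survive.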
So it got faster, and at the same time we were able to do 10 to 100 times as many trips concurrently, because we didn't have that event-loop stall we had with Node. As part of that, we built a whole lot of libraries. These are all on Hex. The big one is Topo. It's a computational geometry library. It conforms to Open GIS, and we have the full JTS validation suite in there. It works great for figuring out whether a polygon intersects a line string, whether a multi-polygon covers multi-points, things like that: all the things you need to do that math. We've benchmarked it pretty heavily. We've actually started using it in a lot of our other pieces as well, so we're doing collision detection and things like that in Elixir, plus a bunch of other things that had to be built underneath to support it. And so now, because of that, we get to enjoy Elixir in a lot more places. I think maybe 20% to 30% of our services are now Elixir; we have maybe 70 microservices we run. All of our front ends we're now either converting or starting fresh with Phoenix and Channels. So it was a great introduction, and it's probably not the normal way most companies come into it, but for us it was a great way to get started with Elixir. Oh, and we also got Go in there at some point in time, unfortunately. So yeah, please check out Vinli, dev.vin.li. There are great developer resources; let us know what you think. Next time you have to build a connected car application, we hope you think of us. And download Topo, check out the repo. That's it.

Okay, so recently, well, a long time ago actually, the local meetup saw a presentation on Hedwig. It's a bot framework with adapters, so you can adapt it to XMPP, Slack, or Flowdock; I chose Slack. So that presentation was like, oh, this is really cool. We need to integrate with our CI platform to automate some stuff.
So I started it and then stopped it, but I recently picked it back up last weekend, so if you're interested in Hedwig, watch this. I named my bot KITT because I'm a child of the 80s. You can see it's offline right now. I started it up; it'll take a little bit to get going. The docs say there's a mix task to get it started. I used an umbrella app because we're kind of expanding our app, so I did all the mix stuff to create the umbrella app, called bot, appropriately. This is the general boilerplate that gets generated: a worker that starts it up. That's the mix file, and then this is also stuff the mix task generates. Nothing terribly special there. And then the first thing you start off with is creating a responder. And there are two macros: respond and hear. Respond means you have to specifically address your bot. I named mine KITT, so you have to say, "kitt, something something something." Or, with hear, it can just listen for certain keywords and respond to those. So in this case I wanted to deploy a branch to our QA environment. The additional thing is that it also adds an environment build parameter, an environment variable, which will then trigger CI to say, oh, this is not just a build, it's actually a deploy. Just to automate that task. Down here you see the CircleCI client, which is just a little more than HTTPoison with a little bit extra on there, and matching on the response: depending on the response, you'll either get a success message, in this case a response addressed to Michael, because it's KITT: "Michael, I have deployed the branch." If it fails, it gives a reason. Let's see here. And then for testing, another thing that, at least at the time, I didn't see in the docs: how do I test these bots?
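The deploy responder described above might look roughly like this sketch, using Hedwig's respond macro. The regex, usage text, and reply are illustrative, not the speaker's actual code:

```elixir
defmodule Bot.Responders.Deploy do
  use Hedwig.Responder

  @usage """
  kitt deploy <branch> to <env> - triggers a CI deploy of <branch>
  """
  # `respond` only fires when the bot is addressed by name, e.g.
  # "kitt deploy develop to qa" (use `hear` to match any message).
  respond ~r/deploy (?<branch>\S+) to (?<env>\S+)/i, msg do
    branch = msg.matches["branch"]
    env = msg.matches["env"]

    # Bot.CircleCI is a hypothetical client module wrapping HTTPoison;
    # the build parameter tells CI this is a deploy, not just a build.
    case Bot.CircleCI.trigger_build(branch, %{"DEPLOY_ENV" => env}) do
      {:ok, _build} -> reply(msg, "Michael, I have deployed #{branch}.")
      {:error, reason} -> reply(msg, "Deploy failed: #{inspect(reason)}")
    end
  end
end
```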
You can test the way the library tests itself, with its RobotCase: tag your tests appropriately and it starts the bot up and tears it down. But I also added something else. I'm relatively new to Elixir, and I wondered: how should I test APIs? I didn't want to do VCR, for all the reasons some of us hate it. So I thought, oh, Bypass. I saw a really great article on Bypass, and it's basically a way to tag a test and have it mock the HTTP server. So it's a little more of an integrated test. In test support I threw in a module to bypass CircleCI, and with the tag options and everything set up, it can take your tag and say, okay, I expect this body, I expect this content type, and then mock that return value. So you can test locally, and then, as I have in this little README, if you include the integration flag, it'll actually run and hit CI and do the whole thing. So real quick, it looks like KITT's back online, so you can do "kitt help", and if you use the usage documentation, it'll help users use Hedwig. Then, let's see here, let's do a deploy. So here: "kitt deploy develop branch to", oh wait, branch on fake app. I wanted to create a fake app, a fake repo. This is a fake Slack account, so my credentials are only good for development. Don't use your production credentials in development, just in case. And then "to QA". All right, so it responded. Let's check CI: it's kicking off a new build with the environment variable. If everything goes to plan, it'll deploy. And then just one last thing: you can also create a fun responder. You can create a whole bunch of different responder modules with the macros, so I left this one in there for fun. As the name implies, it's just for fun, so you can ask it things. I didn't document it, just kind of a little Easter egg. But if you haven't checked Hedwig out, it's really useful, and Bypass in particular I found really, really nice to be
able to test the API. I feel like once I got going with the tags, I could test various conditions on the API, and I think I'll look to use that again in the future.
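The Bypass setup described earlier might be sketched like this. The module names, route, and JSON body are invented for illustration:

```elixir
defmodule Bot.CircleCITest do
  use ExUnit.Case, async: true

  setup do
    # Bypass stands up a real local HTTP server on a random port,
    # so the client under test makes real requests with no VCR cassettes.
    bypass = Bypass.open()
    {:ok, bypass: bypass}
  end

  test "triggering a build parses the CI response", %{bypass: bypass} do
    Bypass.expect(bypass, fn conn ->
      # Assert on conn.method / body here if needed, then mock the reply.
      Plug.Conn.resp(conn, 200, ~s({"build_num": 42}))
    end)

    base_url = "http://localhost:#{bypass.port}"

    # Bot.CircleCI.trigger_build/2 is the hypothetical HTTPoison wrapper,
    # here assumed to accept a base URL so tests can point it at Bypass.
    assert {:ok, %{"build_num" => 42}} =
             Bot.CircleCI.trigger_build(base_url, "develop")
  end
end
```

Tagging tests with something like `@tag :integration` and excluding that tag by default is one way to get the local-versus-real-CI split the speaker describes.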