Oh, everyone, good morning. Welcome. You guys are getting a special treat. I am fairly certain this is the longest title for a talk I have ever given. So my name is Michael Snoyman. I'm the VP of Engineering at FP Complete. Happy to be talking to you guys today about WAI and what it is. And actually, that's going to be the first important thing: why "why"? Most people have no idea how to pronounce W-A-I. We pronounce it "why" because we like to confuse people. All right. So why is the Web Application Interface pronounced "why"? This started many years ago. I actually should have looked up how long ago I started working on WAI, but it's many, many years ago. And I did it because I wanted to do web development in Haskell, but I didn't want to have to actually maintain my own web server. That turned out to be a joke, because I do actually now maintain my own web server. But the basic idea should be: you don't have to write a web server just to play around with some ideas of writing an application, writing a framework, or playing around with other things. This is not a unique Haskell concept. It's not like Haskell invented a new thing that didn't exist in the world. Who here has ever heard of Rack in Ruby? So if you've heard of Rack in Ruby, then WAI is the same thing for Haskell. You may ask, why didn't you call it Hack? Someone else did. OK. WAI is an interface. It's shared by multiple frameworks. So I wrote a framework called Yesod. There are other frameworks: Servant, Scotty, Spock. Probably a few others I'm not remembering right now. All of them target WAI. So it works as this underlying interface that lots of different things are able to use. As a result of this, not only can you have these different, let's call them front ends, these different frameworks or applications, you can also swap out backends. So you can use a real web server. Warp is the one that we use most often. But you can also have this testing backend.
So you're able to go ahead and write tests for your application without ever actually doing any kind of network traffic. Who here knows that you don't have to test Haskell applications? That's a lie. You must test your Haskell applications. Who here knows what CGI is? Now put your hand down if you think it has something to do with movies. CGI is this old thing that only dinosaurs like me actually remember. Anyway, WAI has support for it. Because why not? It's still there. The other thing that we've got is middlewares. We'll get into what all these different things are, but middlewares do kind of what they sound like: they sit in between a server, or some kind of a backend, and your application or your framework. Let's jump right in, because everyone likes looking at code. This is a Hello World application in WAI. We have imports. We've got three import lines, which is pretty benign for a Haskell application. We've got a language extension. Usually there are at least 20 of those. We've got some Stack stuff at the top to tell it which version of GHC and the libraries to use. And then we've got this thing: we're going to run something listening on port 3000. There's some weirdness about a lambda and a request and something about send. And we're going to make a builder response. We don't know what a builder is. But it's going to have status code 200, it's going to be text/plain content type, and it's going to say "Hello World from WAI". All the things that don't really make sense right now will make sense by the end of this talk. So what are the goals of WAI? How is it designed? What is the motivation behind it? So the high-level motivation is: we want an interface. But how do you design that interface? One of the goals is minimal overhead. There should not be a situation where you say, hey, I'm writing this web application, but I need to get a little bit of extra performance, so I'd better stop using WAI and use something else instead.
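The Hello World slide walked through above might look roughly like this. This is a sketch, assuming the wai, warp, and http-types packages; the port and body text are from the talk, the rest is reconstructed:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.ByteString.Builder (byteString)
import Network.HTTP.Types (status200)
import Network.Wai (responseBuilder)
import Network.Wai.Handler.Warp (run)

main :: IO ()
main = run 3000 $ \_request send ->
    -- Build a 200, text/plain response and hand it to the send callback.
    send $ responseBuilder
        status200
        [("Content-Type", "text/plain")]
        (byteString "Hello World from WAI")
```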
We want to make this as minimal as possible. And that includes things like not doing unnecessary parsing. That will come up later when we talk about, say, POST request bodies. Next, being unopinionated. I told you four web frameworks that all work using WAI. At least three of them are very different from each other. Scotty and Spock are kind of similar, but Servant and Yesod and Scotty/Spock all work in very different ways. Yet they're all on top of the same kind of interface. There's at least one very popular application, Hoogle, that doesn't use a framework at all and targets WAI directly. And yet somehow this one interface is able to work for all these different kinds of cases. It should be extensible. There's no way that I'm going to design WAI with the ability to handle every single use case that anyone could ever possibly think of. So we have some options built into it for extending it. There's a vault inside of it where you can stick arbitrary key-value data, and things along those lines. And if that sounds weird in a statically typed language, it is. But it does work. It's stable. I think it's been about four years since the last breaking change in WAI. At this point, we don't want to change WAI. We want it to stay the way it is. We want people to be able to write an application and have it continue working for a long time. And finally, batteries are not included. What this phrase means is: all these cool things that you want to do, like running a web server, doing logging, doing gzip compression, or anything else, are not included with the wai package. The wai package is tiny. It defines a few basic things that I'm going to explain. But instead of including all of that in the base package, the idea is that the batteries should be available separately. Which brings us to the common packages that we would use with WAI. The wai package itself includes the core data types and a few tiny utility functions.
Warp is a package which provides what we'd call the de facto standard server. I don't know if anyone else has written any other web server for WAI, but at least 99 out of 100 times, if someone says WAI web server, they're talking about Warp. wai-extra is kind of where the batteries get thrown. It's got a whole bunch of different middlewares. It's got some helpers. It's got some parsing stuff going on in there. And we're going to use it by the end of this talk. wai-conduit: so, conduit is a streaming data library that I wrote, and wai-conduit is an interface to let WAI and streaming data tie in together. And this ties in with this idea of it being unopinionated. We're not telling you which streaming data library you should use, but there is one available. And to prove the point, pipes-wai: pipes is another streaming data library, and that's also available. And wai-websockets, you can probably guess that that lets you work with WebSockets in WAI. Okay, this is the basics of the entire interface of WAI. We've got a data type called Request. As a user of WAI, and I'm going to assume most people here are going to be users of WAI, you're never going to create this. You're going to be handed this by some kind of a backend. So Warp will parse an incoming request and generate a Request value. The testing backend will generate a Request based off of some information that you provided, and so on. Response is the thing you're going to make. You're going to take the request, you're going to do something, and you're going to generate a response. What's included in the response? The status code, the headers, and the body. And we'll show a few different ways that you can create those. We would love to have SimpleApp. SimpleApp, even for people who have never seen Haskell, I can explain pretty easily: an application is something that takes a request, does some IO, and gives you a response. It's a nice story. It sounds great, and it doesn't work.
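The handful of types just described can be sketched like this. SimpleApp is the naive version the talk is about to reject; Application and Middleware mirror the real definitions in the wai package (reproduced here as local synonyms for illustration):

```haskell
import Network.Wai (Request, Response, ResponseReceived)

-- The "nice story" version: take a request, do some IO, return a
-- response. This is NOT the real WAI type; the talk explains why
-- it's insufficient for streaming responses.
type SimpleApp = Request -> IO Response

-- The real shape: the application is handed a send callback and
-- must return proof (ResponseReceived) that it called it.
type Application =
    Request -> (Response -> IO ResponseReceived) -> IO ResponseReceived

-- A middleware just transforms an application.
type Middleware = Application -> Application
```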
The most complicated part of the talk is going to be understanding why SimpleApp is insufficient for writing a real application. You can write a lot of applications with SimpleApp, but you can't write some advanced ones with it. Since we want WAI to work for all use cases, since we do want it to be unopinionated and universal, we have to do something a little bit smarter than this. So therefore we do have this Application type, which I'm going to get to and explain later. And finally, Middleware, and this is a really cool one. A middleware is something that just transforms an application. It takes the original application and it does something. So a logging middleware transforms an application into an application that also does logging. And we'll see how to write some of these later, and use them, too. So what kind of fields exist in this Request? Well, we've got the raw path info and the raw query string. These are ByteStrings, the actual raw bytes that were sent over the network. We also have the parsed versions of these: we've got pathInfo and we've got queryString. So these are parsed out, and they're quite a bit easier to work with. They're usually the thing that your application or your framework actually wants to be working with. This covers the GET parameters. So if you're talking like the PHP world, where they talk about GET and POST, the GET parameters are handled by the query string. POST parameters are not anywhere to be found in this Request type. And the reason is, in order to parse POST parameters, you have to parse the request body of the HTTP request. And in order to do that, you have to do additional IO. You have to do additional parsing. And if you don't actually need that information, it's a waste of time. Also, it's a very opinionated way of doing the parsing: we're gonna check the request headers and then we're gonna make a decision. But maybe you have your own special application that has its own way of parsing these things.
WAI shouldn't get in the way. So wai-extra provides some functions, which we're gonna use later, to parse this, but WAI itself doesn't do the parsing for you. Who here has ever heard the term smart constructor in Haskell? So a smart constructor in Haskell is saying: we are going to hide away the real constructor, the real data constructor, so that we can make things more extensible in the future. And instead, we're gonna provide some functions that use those behind the scenes, that have access to the hidden stuff. This means that it's easier to extend things in the future, and we've actually used that in the case of WAI. When we added WebSockets support, we had to add a new kind of response, responseRaw, which I'm not gonna talk about here, but those kinds of things do play in. So, probably the simplest, most commonly used response constructor in WAI is responseLBS. LBS is a lazy ByteString. Lazy means it can take up a bunch of memory and it can be generated on the fly. You should never use them, but we use them here. Don't ask me later why I say you should never use them; that's a different topic. All you give it is the status, a list of the headers, and the body itself, and that generates a response. We also have this thing called responseBuilder. And a Builder is a ByteString builder, this thing that will fill up buffers efficiently. And this is also where we get into this minimal overhead. WAI under the surface is able to use these builders in order to specify the response bodies. And that means that when the backend goes ahead and generates a response, well, this is Warp actually, not WAI: Warp is able to open up a single buffer, write all of these different bits of information directly into that memory buffer without having to do additional memory buffer copies, and then, with a single system call if your response is small enough, send the whole thing over the network. So that's a really nice optimization.
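The two response constructors just described might be used like this. A sketch assuming the wai and http-types packages; the bodies and header values are made up for illustration:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.ByteString.Builder (byteString, intDec)
import Network.HTTP.Types (status200)
import Network.Wai (Response, responseBuilder, responseLBS)

-- The whole body as one lazy ByteString:
lbsResponse :: Response
lbsResponse =
    responseLBS status200 [("Content-Type", "text/plain")] "Hello!"

-- The body assembled from pieces via a Builder; Warp can write these
-- fragments into a single output buffer without extra copies.
builderResponse :: Response
builderResponse =
    responseBuilder status200 [("Content-Type", "text/plain")] $
        byteString "The answer is " <> intDec 42
```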
Builders are a great thing. They're actually underused in Haskell as far as I'm concerned, and this is the way we do a lot of the response building in WAI. You can also generate a response from a file. Warp has built-in support for this. Why do I say Warp has built-in support? We'll get to that in a sec. Anyone who's ever used the sendfile system call, or heard of the sendfile system call: this is a cool optimization on most Unix systems, maybe all Unix systems, that allows you to bypass going through user space when copying data from a file directly over to a socket. And that's nice, because we like to do things more efficiently and not do things in user space. But there are some cases where you're not able to use this. Let's say that you have warp-tls. Let's say you're going to actually encrypt the data. You can't copy unencrypted data directly over to the network socket. You have to encrypt it first. So some backends do not actually have support for this, and those backends will have to go ahead and do some kind of a fallback themselves. You, as the user of WAI, don't have to worry about that. That's the backend's concern. You get to just say: I want a file. Take the file, serve the file. Also, if you're using a web framework like Yesod, if you're using something like wai-app-static, which is a static file server written on top of WAI, if you're using any of those things, it handles all this for you, and therefore you're going to get this more efficient way of doing things right out of the box. Generating a non-streaming response is really easy. That's the responseBuilder stuff that I showed you. You've gone out and figured out all the data that you need, and then you build up an in-memory representation of what you want to serve. You take the entire thing, you pass it over to Warp, and you're done. That's nice, but that's not always the case.
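The file-serving API just described looks roughly like this. A sketch assuming the wai package's responseFile; the file name and content type are made up:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Network.HTTP.Types (status200)
import Network.Wai (Response, responseFile)

-- Serve an entire file. On Warp, this can use the sendfile system
-- call; backends that can't (e.g. behind TLS) fall back themselves.
fileResponse :: Response
fileResponse = responseFile
    status200
    [("Content-Type", "text/csv")]
    "data.csv"
    Nothing   -- no byte range: serve the complete file
```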
Let's say that you're going to be generating a CSV file, and you're going to generate it by reading a million rows out of a database. You don't want to have to read all one million rows into memory, generate the response, and then send it off. You'd rather be able to do this in a streaming fashion. And this is where the most complicated part of the talk is going to pop up. So everyone get ready; it's not that bad. Let's say that instead of reading the data out of the database, we're going to read the CSV data out of a file. I'm just using a file instead of the database because it's easier to demonstrate this. And let's say that I have this SimpleApp type that I told you guys about before, and I've got simpleRun, a function which is able to take a port and a simple application and run a web server. I'm going to now run this thing; that's my main function. It is going to ignore the request body, and it's going to open up this binary file in read mode and get the file handle. It's going to lazily read out everything from the file and then return a response body. There's a very big bug in this, and it's a very subtle and hard-to-find bug. I'll give everyone 30 seconds or so if anyone wants to try to find it. Anyone have a guess? Okay. Actually, I'll show you the problem first. The problem is with withBinaryFile: it opens the file handle, and when it's done, it closes the file handle. That file handle, once it's closed, cannot be read from. But this lazy read, this is lazy IO. It's evil, it's horrible, but we're using it to prove a point. This lazy IO that's going on, where we read out the lazy ByteString, is assuming that the file is going to stay open until the lazy ByteString has been fully processed. Where does the lazy ByteString get processed here? It gets processed after withBinaryFile completes.
We're taking the whole ByteString, we're putting it into a response, and now we have this thing that's still hoping that the file is open, and it's going to return it. This thing's going to generate an exception at runtime, because it's going to try to read out of a file handle that's already been closed, and that's the problem. In order to fix this, what we need to be able to do is put the withBinaryFile completely outside of the part of our application that's going to send the data over the network. In order to make that happen, what we do is modify the application type so that now, instead of just receiving the request, it also receives this helper callback function called send, and send is able to send the response body inside our withBinaryFile. This is also known as a CPS transform, if you want to get geeky, but I hate using those terms. So here are the two different things side by side. The bad version is on top: we have withBinaryFile on the outside, and we return a response body outside of withBinaryFile which is still pointing to the file handle, and then the file handle gets closed. In the second case, we open up the file handle, we generate the response body, and we immediately send it before we close the file handle. So the second version is exactly what we want to have. The second version is the way that WAI is actually designed, and in fact, that's the more complicated type that we're now going to see. In order to make this thing more type-friendly, to make it so that the compiler is able to help you write your code correctly, we have this thing called ResponseReceived. When you call that send function, it gives you back a ResponseReceived, and every application is required to return a ResponseReceived at the end. That way, there's some kind of a guarantee that you actually called send yourself.
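The side-by-side comparison described above can be sketched with simplified stand-in types (not the real WAI ones; the request is omitted and the file name is made up):

```haskell
import qualified Data.ByteString.Lazy as L
import System.IO (IOMode (ReadMode), withBinaryFile)

-- A simplified stand-in for the slide's response type:
type Response = L.ByteString

-- BAD: withBinaryFile closes the handle as soon as this action
-- returns, but lazy IO only reads the file when the ByteString is
-- consumed, which happens later, after the handle is closed.
-- Forcing the body then throws "handle is closed" at runtime.
badApp :: IO Response
badApp = withBinaryFile "people.csv" ReadMode L.hGetContents

-- GOOD (the CPS shape WAI uses): the backend hands us a send
-- callback, so the body is consumed while the handle is still open.
goodApp :: (Response -> IO r) -> IO r
goodApp send =
    withBinaryFile "people.csv" ReadMode $ \h -> do
        body <- L.hGetContents h
        send body  -- the handle is still open here
```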
So now we have this type Send, which takes a response and gives you back a ResponseReceived, and our Application is now just a little more complicated. It takes a request, it takes the send function, and then it gives you back a ResponseReceived. Warp and other backends use an internal module that provides this ResponseReceived type. You're never gonna get to make one of these yourself. So if you actually follow the rules and you don't import this evil internal module, Warp and WAI are gonna help you make sure that you write your application correctly. So this is called continuation passing style, and it means you stuck a function inside your function. I guess people don't like memes as much as I do. Okay. All right, that's the end of the hard part. Now we're gonna actually use this thing to write stuff. You can do routing in WAI really easily. You remember that pathInfo that I showed you before? It's a list of Texts. So here we go. I've defined a helper function which is going to just send a responseBuilder. And now I'm just using normal pattern matching. For a bare slash, this is gonna return the homepage. If you ask for /foo, it's gonna give you /foo; /foo/bar works the same way. And then we just generate a 404 Not Found response if it's anything else. If you look at this, you could squint a little and realize that, if you wanted to, you could do pattern matching on the query string just as easily as you could do pattern matching on the path info. You could do lookups. You could do whatever you want. You have a lot of flexibility in what you do, because you're actually using the full power of Haskell at this point. Let's say that we wanna log. We want to log every incoming request and the outgoing response to our application. So I have my hello world application. I then have the logged version of hello world.
All I've done is apply the function logStdout on top of the function hello, and that's it. We now have the ability to run the exact same application as before, except now we're gonna get log output to go along with it. The logging module, RequestLogger in wai-extra, has a huge number of functions and configuration options, because everyone needs to log their applications a different way or talk to different systems. So you can configure it a lot, but the default, the way that I did it right here, I think it uses Apache-style logging right out of the box. We can also just as easily write our own middleware. Harry Potter fans can pay attention to the words here. Okay, so we can also write our own middlewares. So I told you that a middleware is a function that takes an application and gives you back an application, and an application is a function that takes a request and a send function. So a middleware turns into a function that takes an application, a request, and a send function, and then you can do whatever you want. You can perform actions before the application runs. So I can, for example, print out "I'm up to no good." I can modify the request so that the path info looks different than the original incoming request. I can take that modified request and pass it off to the application. I can then perform other actions afterwards, and I can just return the output at the end, like this. You can do lots of other things with middlewares. You can do basically whatever you want. Again, because it's just a normal Haskell function doing normal Haskell things. Oh, also the last bullet there: you can also layer these as many layers deep as you want. So as you can see here, the logged hello is a middleware on top of an application, but as far as the types are concerned, it's just an application. So if you took the gzip middleware and stuck it on top, that's fine. There's no problem at the type level at all. All right, let's try another example.
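A custom middleware in the spirit of the one just described might look like this. This sketch only acts before and after the wrapped application (the slide's version also rewrites the path info, which is omitted here); the printed phrases follow the talk's Harry Potter joke:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import qualified Data.ByteString.Char8 as S8
import Network.Wai (Middleware, rawPathInfo)

-- A middleware is Application -> Application, so it can run code
-- before and after the inner application handles the request.
mischief :: Middleware
mischief app req send = do
    -- Act before the wrapped application runs:
    S8.putStrLn ("I'm up to no good: " <> rawPathInfo req)
    -- Hand the request to the inner application.
    received <- app req send
    -- Act after it has sent its response:
    putStrLn "Mischief managed."
    return received
```

Layering works exactly as described in the talk: `mischief app` is itself an `Application`, so you can stack `logStdout (mischief app)` or any other middleware on top with no type-level fuss.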
Who here has ever done virtual host configuration in nginx or Apache? I do not ever remember how to do that. Every single time I need to configure virtual hosts, I think, I wrote a chapter in the Yesod book on it, and that's where I go and copy the example I wrote last time. I can't remember them. It's not that difficult to do in WAI, though, because the request headers are just data. Inside the Request value, you can use lookup, and then you can go ahead and do whatever you want based off of the virtual host. So in this case, I'm either telling someone what the host is that they requested, or I'm telling them there was no Host header present. But if you wanna get real, what you're probably gonna wanna do is have different applications on different domain names, and you can do that just as easily. So what we're doing here is we're looking up the host, and now we're pattern matching, because we love pattern matching in Haskell. So we're going to say: for example.com, we wanna serve app one, and the way you serve this application is you pass in the request and the send function, and that's it. Same thing with app two and example.org. And then I generate two different error responses, for when the Host header either isn't present or is a host that we don't know anything about. And if anyone wants to be really clever and play around with things, you could actually do a little bit of middleware tweaking at the same time that you're doing this virtual host dispatch. Just grabbing another water. Okay. I told you that we've got a bunch of batteries available, not included. So there are some ready-to-go applications for you to use from the Haskell ecosystem. And the one that I personally probably use the most is wai-app-static, which does static file serving. So I import that, and I use defaultFileServerSettings.
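That file server might be sketched like this, assuming the wai-app-static, warp, and wai-extra packages, serving the "content" directory mentioned in the talk:

```haskell
import Network.Wai.Application.Static
    (defaultFileServerSettings, staticApp)
import Network.Wai.Handler.Warp (run)
import Network.Wai.Middleware.RequestLogger (logStdout)

-- Serve everything under ./content, with request logging layered
-- on top as an ordinary middleware.
main :: IO ()
main = run 3000 $ logStdout $ staticApp $
    defaultFileServerSettings "content"
```

The behavior of `defaultFileServerSettings` is configurable, which is what the talk turns to next.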
That has to do with how it generates cache headers and how often it checks whether the files in the file system have changed. You can go read the documentation for all the gory details of all the different settings. Like most things, there's a lot of configuration available, but out of the box, what it's going to do here is serve a web application that shows all of the files that are present in the content directory. And unsurprisingly, I'm able to just throw the log middleware on top of that. And I could throw any other middlewares on top of that as well. One of the other common ones that we like to use is throwing an authentication middleware on top. So for example, say you only want people from your organization, with a Google account from your G Suite account, to be able to log in. Go ahead and get credentials from Google for an OAuth application, configure the WAI OAuth middleware, pass in the secret keys, and boom, you've got an authenticated file server that you're able to use. So how do you deploy a WAI application? Basically, you can deploy it however you want. Haskell applications compile down to an executable, and then you're going to do something with that executable. The most common way that people deploy their applications is to use Warp as the backend and then to reverse proxy into that Warp server from some kind of front end. So nginx, Kubernetes, or something like an AWS load balancer. In my experience, most of the time people do TLS termination at the load balancer. They don't bother having Warp or the Haskell application itself handle TLS certificates, although you can: there is a warp-tls package that you're able to use. It's also possible to completely bypass that reverse proxy, spin up a virtual machine, stick an application right on there, and just serve the port directly. All those things are possible, whatever you wanna do. It just so happens that the load balancing approach is the one that I see by far the most often.
If you wanna have a lot of fun, you could go ahead and pull out that CGI backend, dust off your old Apache, and serve it that way. I don't recommend it, but you can. And then, with these executables, you could compile one on your local machine and upload it, or, and this is the way we almost always do things at work, you take your executable, you build it on CI, you have CI package up a Docker image, and you deploy the Docker image. So I'm gonna give you guys an example of what some of these scripts look like. I'm gonna give you the caveat that the example I'm giving you predates our usage of a load balancer and a few other things, so it's overly complicated. It makes for a good example, but I'm not actually saying you should be this complicated in practice. Who has ever heard of historical reasons? Historical reasons. So it looks like this, for historical reasons. All right. So I have an application which is available on GitHub, and we build it on FP Complete's GitLab instance and deploy it to our Kubernetes cluster. It has two different applications in it: snoyman.com and yesodweb.com. This is the hysterical-reasons part. At this point, with the setup with Kubernetes, I would just have two different deployments in Kubernetes, and I wouldn't bother with this weird virtual hosting inside the web application itself, but this is what we've got. So then I have this third application that sits in front of them, does all the virtual host parsing, and then uses something called http-reverse-proxy to reverse proxy into the appropriate application. And yes, this is built on top of WAI as well. It ties directly into Warp, and you're able to use this if you have some other reason for doing a reverse proxy. The reverse proxy launches these two other applications, snoyman.com and yesodweb.com, and then keeps the two of them running.
If they die, the whole thing goes down. And GitLab CI builds all three and packages them into a Docker image. And this is what it looks like to build this thing in Docker. So I go ahead and create this artifacts directory, where all the static files, all the configuration files, everything else that's gonna be needed by these applications gets dumped. I then use Stack, the Haskell build tool that I use, to build the executables and dump all of those into that artifacts binary directory as well. Finally, I actually copy in all those static things that I told you about. And then we use Docker to build the image. The Dockerfile looks like this. We use fpco/pid1; it's a nice base image that takes care of the PID 1 problem in Docker. If anyone is curious about what the PID 1 problem is, talk to me afterwards; that has nothing to do with Haskell at all. It has to do with Unix and Docker being really stupid. Not Linux being stupid, Docker being stupid. And this is the entirety of the Dockerfile. Or maybe not the entirety; maybe there are a few other things I left out, but that's basically it. It provides the additional tools that we need (we need git at runtime, so it installs that), copies over the artifacts, and boom, we've got a Docker image and we're able to run that. GitLab CI is able to build this image by calling the build-docker.sh script and then tagging the image appropriately. And then we're able to deploy this to our Kubernetes cluster by telling Kubernetes to pull in the new image name. What about exception handling? I talked a lot about the interface for WAI at the type level, but Haskell has runtime exceptions. What is the expectation around exceptions? So the basic idea is: applications should not throw exceptions. And you might say, that's crazy. What do you do if something goes wrong? Well, it's not the handler's responsibility to take care of it. Warp doesn't know what you want to do when something goes wrong.
It doesn't know how to generate a nice error message to tell users. It doesn't know how to log information. It doesn't know how to send an alert. It doesn't know how to send a page to your boss at 3 a.m. to wake him up so that he hates you forever. It doesn't know any of that stuff. You've got to do that yourself. So the recommended approach, the correct approach in WAI, is: inside your application, if there's any chance that you're going to generate an exception, you've got to catch all exceptions, do the logging, do all those other things. If you don't do this, not only do you get ugly error messages, it breaks all the middlewares. All middlewares assume that your application will not throw an exception. They're not designed to recover from exceptions and then do something intelligent, because they don't know what they're supposed to do either. Yesod, the web framework I wrote that I mentioned earlier, does a lot of this exception catching and handling and massaging for you. Things actually got really crazy in Haskell, because we have these lazy values that can also throw exceptions. So Yesod takes care of all of that for you. And also, make sure that you are async-exception safe. That's an entirely different talk that I'm not going to give right now, but I have this link available if anyone wants to look at it from the slides later. All right, I'm going to tie off today by stepping through a sample of a JSON service using WAI, and then I think we'll have some good time for questions, because it's not going to take that long. So we're going to step through this. I'm going to explain it, and yeah, that's it. So the JSON service we're going to put together is a very simple mapping of names to ages. And it's going to keep track of this in memory. It's not going to use a database or anything. For this service, we want the ability to get a list of all of the names that are currently in the database in memory.
We want to query information about an individual. We want to find out how old a specific person is, and we want to be able to add new people, and we're going to have two different API endpoints to do this. One is going to be a PUT request using the query string to set the age, and one is going to be a POST request using the request body and POST parameters. If anyone's wondering what's the advantage of having two different endpoints to add the same data to this store, the advantage is I get to show you two different ways of adding things to the store. Oh yeah, also, when you get the slides, if you want to look, the full code is available on GitHub. Okay, so the API is going to have GET /people, which returns a JSON array of names. POST /people: you give it a URL-encoded body, and it requires name and age parameters. GET /person/<name>: whatever the name is, we'll return a JSON object. And PUT /person/<name>?age=<age>, with the age in the query string. Those are it. In order to make it easier to read the type signatures, I'm going to define some type synonyms now. So I'm going to say that a Name is a piece of Text and an Age is an Int. And then we could get into: should we use a Word, should we use some unsigned type, whatever; we're using an Int because I felt like it. We're going to have a PeopleMap. The PeopleMap is a mapping, a dictionary, from someone's name to somebody's age. And if someone says, well, what if two people have the same name? This is a crappy application. Don't ask me difficult questions. And PeopleVar. PeopleVar is going to use a mutable variable, a TVar, which comes from STM. Who has ever heard of STM? STM is software transactional memory, and it's one of Haskell's superpowers, and you should definitely use it. It's awesome. All right, so we're going to have this mutable variable that's going to hold onto the map, so that we're able to access updated versions of it and modify it when appropriate.
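The type synonyms and mutable variable just described might be written like this. The synonym names follow the talk; the newPeopleVar and addPerson helpers are illustrative additions:

```haskell
import Control.Concurrent.STM
    (TVar, atomically, modifyTVar', newTVarIO, readTVarIO)
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Text (Text)

type Name = Text
type Age  = Int

-- The whole application state: a dictionary from name to age...
type PeopleMap = Map Name Age

-- ...held in a TVar, STM's mutable variable, so concurrent request
-- handlers can read and update it safely.
type PeopleVar = TVar PeopleMap

newPeopleVar :: IO PeopleVar
newPeopleVar = newTVarIO Map.empty

-- An atomic update: insert (or overwrite) a person's age.
addPerson :: PeopleVar -> Name -> Age -> IO ()
addPerson var name age = atomically (modifyTVar' var (Map.insert name age))
```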
We're also going to define some helper responses. And if it seems strange that for something so simple we're already having to jump in and define all of these helpers, now you're beginning to understand why I tell people not to use WAI and to use a web framework instead. Because this is a lot of bookkeeping, a lot of manual stuff that you have to do. Just let a web framework handle it for you. Okay, so a not-found response is a builder response with status 404 and "Not found". And we could say maybe we should have a specific response type or content type or anything. Whatever, we're doing very simple stuff here. Bad request is a 405 response that you return when someone uses the wrong request method. It's not used as often as a 404, but 405 is also an important part of HTTP. And here is a helper function which is going to use the Aeson library. Aeson is the de facto standard library that most people use for JSON in the Haskell world. You give me any kind of value that can be converted into JSON, and I will give you a JSON response body. All right, so our router, the thing that's going to dispatch. By the way, these lines in the middle, the blank black line, it's just a formatting issue. You can ignore it; there's no deep mystery that anyone has to figure out. Okay, so my people application is going to be fed in a people variable and return an Application. This is, right off the bat, an important aspect of how we do things. Haskell is all about functions and closures and capturing variables. We're going to later on create this variable in our main function. Our application is predicated on having a variable available. The way that you provide that kind of configuration isn't with some kind of mutable global variable or anything like that. You just pass it in as a function parameter. So we've got peopleApp. We take the people var and request and send, because that's the way you make an Application. Now I am going to pattern match on the path info.
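The three helpers described above might look roughly like this. This is a hedged sketch using `wai`, `http-types`, and `aeson`; the helper names are my guesses at what the talk's slides show, not the verified code from the repository.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (ToJSON, encode)
import Network.HTTP.Types (status200, status404, status405)
import Network.Wai (Response, responseBuilder, responseLBS)

-- 404: returned when no route matches
notFound :: Response
notFound = responseBuilder status404 [] "Not found"

-- 405: returned when the route matches but the request method is wrong
badRequestMethod :: Response
badRequestMethod = responseBuilder status405 [] "Bad request method"

-- Turn any value with a ToJSON instance into a JSON response body
jsonResponse :: ToJSON a => a -> Response
jsonResponse = responseLBS status200 [("Content-Type", "application/json")] . encode
```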
And these entries here correlate directly to the four endpoints that I told you about before. So if it's /people, the path info list is going to have a single entry in it with the word "people". There I'm able to go ahead and pattern match again on the request method. And if it's GET, I'll use getPeopleResponse. And if it's POST, I'm going to use postPeopleResponse. And I'm passing in the request to postPeopleResponse because I actually have to parse more information out of the request. I don't need to do that when I'm getting people. And then if it's neither one of those methods, I'm going to return a bad request, the 405 response. If I get /person/<name>, I'm able to capture it like that. And notice there's no special syntax in order to do variable capture. Again, it's just a variable pattern, the same way you would do anywhere else in pattern matching in Haskell. Same thing with the request method, GET and PUT. We look up the age from the query string in the PUT case because it makes the pattern matching that we're going to do later a little bit easier to look at, and it fits on the slide nicer, and the number one problem I always have with giving talks is how do you fit everything you want on a slide. If neither of these routes match, then we return a not found. And at the end, we send the response. All right. So our two getter endpoints. getPeopleResponse is going to take in a PeopleVar, and it's going to return an IO Response. So it's going to do some IO in order to generate the response body. We have this mutable variable. We just want to know what's in it right now. So we atomically readTVar. atomically is part of STM. You can ignore it for the moment, because we're not actually doing anything transactional, but that's what it means here. And we get this mapping of all the people, the map from names to ages. Remember, our API is supposed to just return the array of names.
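The router described above can be sketched as follows. Handler names and the `PeopleVar` type are assumptions reconstructed from the talk; the `where` clause contains stubs so the sketch stands alone, whereas the real handlers do the actual work.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent.STM (TVar)
import qualified Data.Map as Map
import Data.Text (Text)
import Network.HTTP.Types (status404, status405)
import Network.Wai (Application, pathInfo, queryString, requestMethod, responseBuilder)

type PeopleVar = TVar (Map.Map Text Int)

peopleApp :: PeopleVar -> Application
peopleApp peopleVar req send = do
  response <- case pathInfo req of
    -- GET /people and POST /people
    ["people"] -> case requestMethod req of
      "GET"  -> getPeopleResponse peopleVar
      "POST" -> postPeopleResponse peopleVar req
      _      -> pure badRequestMethod
    -- GET /person/<name> and PUT /person/<name>?age=<age>;
    -- a plain variable pattern captures the name, no special syntax
    ["person", name] -> case requestMethod req of
      "GET" -> getPersonResponse peopleVar name
      -- look up the age here so the handler's pattern match is simpler
      "PUT" -> putPersonResponse peopleVar name (lookup "age" (queryString req))
      _     -> pure badRequestMethod
    _ -> pure notFound
  send response
  where
    -- stubs so this sketch compiles on its own
    getPeopleResponse _      = pure notFound
    postPeopleResponse _ _   = pure notFound
    getPersonResponse _ _    = pure notFound
    putPersonResponse _ _ _  = pure notFound
    notFound         = responseBuilder status404 [] "Not found"
    badRequestMethod = responseBuilder status405 [] "Bad request method"
```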
So we're going to use Map.keys to get the keys from this map, turn that into a JSON response, and we're done. getPersonResponse is going to take in the people variable again and the name that we're looking up. It's going to get the people out of the variable again. It's going to look up the name. It will return a not found, a 404, if the name isn't found, and otherwise it will construct a JSON response body. All this stuff about object and the .= operator, that all comes from Aeson. I'm not giving a talk on Aeson today, so I'm not going to explain it to you, but you can probably guess what it does. Setting via PUT. So this is where things get a little weird. Remember that we said that WAI needs to be as flexible as possible; it needs to be unopinionated. Well, there are three different things that can happen when I pass in a query string parameter called name. Either I don't have the name parameter at all, so it doesn't appear. Another thing that I can have is ?name with no equals sign. That doesn't pop up that often, but there are actually some web applications out there, I think the AWS web services are an example of this, that treat that as different from saying ?name= with nothing after it. Those are two different cases. So in WAI we actually treat these differently, which is why you end up with a Maybe (Maybe ByteString). If nothing appeared at all, the lookup is gonna fail, and then we're gonna get the first Nothing case. But if the name is there but it simply doesn't have an equals sign after it, it doesn't have any kind of a value associated with it, the lookup is gonna succeed, so you get a Just, and then you get a Nothing inside of it because there's nothing there. But we actually need to have a proper name parameter, sorry, a proper age parameter.
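The two getter handlers described above might look like this. Names are reconstructions from the talk; `jsonResponse` is the helper defined earlier, repeated here so the sketch stands alone.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent.STM (TVar, atomically, readTVar)
import Data.Aeson (ToJSON, (.=), encode, object)
import qualified Data.Map as Map
import Data.Text (Text)
import Network.HTTP.Types (status200, status404)
import Network.Wai (Response, responseBuilder, responseLBS)

type PeopleVar = TVar (Map.Map Text Int)

jsonResponse :: ToJSON a => a -> Response
jsonResponse = responseLBS status200 [("Content-Type", "application/json")] . encode

-- GET /people: a JSON array of all names currently in the map
getPeopleResponse :: PeopleVar -> IO Response
getPeopleResponse peopleVar = do
  -- read the current map; nothing transactional is happening here,
  -- atomically is just how you run an STM action
  people <- atomically $ readTVar peopleVar
  pure $ jsonResponse $ Map.keys people

-- GET /person/<name>: 404 if absent, otherwise a JSON object
getPersonResponse :: PeopleVar -> Text -> IO Response
getPersonResponse peopleVar name = do
  people <- atomically $ readTVar peopleVar
  case Map.lookup name people of
    Nothing  -> pure $ responseBuilder status404 [] "Not found"
    Just age -> pure $ jsonResponse $ object ["name" .= name, "age" .= age]
```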
So if that's the case, if I actually have a ByteString, I'm going to read it as a decimal. I'm gonna try to parse it, and assuming that the parse is successful, I'm going to modify my variable, my PeopleVar, by inserting the new name and age, and then I'm gonna generate a 201 response. Otherwise I'm gonna say that I wasn't able to parse this as an Int, and I'm gonna generate a 400 response. We also have the post parameters, and this is where we have to parse the actual request body. And this is where we pull in those batteries from wai-extra. We call parseRequestBody. We use the LBS backend. LBS is the lazy ByteString backend, and this means that if there were any files submitted as part of the request body, read them into memory. This is exceedingly dangerous. This is super dangerous. Someone could submit to you a file that's one terabyte in size, and you will crash any reasonably sized machine that you have available trying to read this into memory. If you use something like Yesod, Yesod actually has a lot of different protections built in to stop this from happening. It has a limit on the request body size, I think it's two megabytes by default, that you can upgrade. And also, for anything that's greater than 50,000 bytes, 50K, it will automatically start writing it to a temporary file instead of reading it into memory. Again, if you use WAI directly, you have to make sure that you're protecting yourself against all these things, which is why I say use a web framework instead. So we're going to go ahead and parse the request body. We're going to get the list of parameters, and then we're going to look at the name, UTF-8 decode it, look at the age, parse it as an Int, and assuming all of that works, then we do the exact same atomically modifyTVar that we had previously in order to update the map. Finally, at the end of all of that, we have our main function. We create a new empty mapping, stick it into a TVar, and we serve the application.
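The two setter handlers described above can be sketched like this: the PUT handler dealing with the `Maybe (Maybe ByteString)` from the query string, and the POST handler using `parseRequestBody` with `lbsBackEnd` from wai-extra. Helper names are assumptions from the talk, and note the warning above: `lbsBackEnd` reads any uploaded files fully into memory.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent.STM (TVar, atomically, modifyTVar')
import qualified Data.ByteString.Char8 as BS8
import qualified Data.Map as Map
import Data.Text (Text)
import Data.Text.Encoding (decodeUtf8)
import Network.HTTP.Types (status201, status400)
import Network.Wai (Request, Response, responseBuilder)
import Network.Wai.Parse (lbsBackEnd, parseRequestBody)

type PeopleVar = TVar (Map.Map Text Int)

-- PUT /person/<name>?age=<age>
putPersonResponse :: PeopleVar -> Text -> Maybe (Maybe BS8.ByteString) -> IO Response
putPersonResponse peopleVar name mAge =
  case mAge of
    -- ?age=<digits>: decimal parse succeeded, update the map
    Just (Just bs)
      | Just (age, "") <- BS8.readInt bs -> do
          atomically $ modifyTVar' peopleVar (Map.insert name age)
          pure $ responseBuilder status201 [] "Created"
    -- no age parameter, ?age with no '=', or a non-numeric value
    _ -> pure $ responseBuilder status400 [] "Invalid age"

-- POST /people with a URL-encoded body containing name and age
postPeopleResponse :: PeopleVar -> Request -> IO Response
postPeopleResponse peopleVar req = do
  -- lbsBackEnd: uploaded files are read into memory (dangerous!)
  (params, _files) <- parseRequestBody lbsBackEnd req
  case (lookup "name" params, lookup "age" params >>= fmap fst . BS8.readInt) of
    (Just nameBS, Just age) -> do
      atomically $ modifyTVar' peopleVar (Map.insert (decodeUtf8 nameBS) age)
      pure $ responseBuilder status201 [] "Created"
    _ -> pure $ responseBuilder status400 [] "Missing or invalid parameters"
```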
peopleApp peopleVar is an Application. We then wrap it with the autohead middleware. Autohead automatically generates responses to HEAD requests by using GET requests. If you're familiar with HEAD requests, that makes sense. If not, then I'll tell you afterwards. And like I told you, you can layer as many of these middlewares as you want. So let's do logging also. And then we run this thing on port 8000. If you go to the GitHub link that I gave earlier in the slides, you can also look at a test script. The test script demonstrates this thing actually working, so you can have some fun and run it on your local machine. All right, so takeaways. It is perfectly doable to write an application directly in WAI. But there's a lot of manual plumbing, and, like, I sound like a broken record: use a web framework so it handles these things for you. All right, so in summary. WAI is a low-level interface. It's the basis for a lot of different frameworks and a lot of different applications. There are lots of common utilities. I've only scratched the surface on what's available in the Haskell ecosystem. There's a lot of stuff available to handle most of the common use cases that you're gonna run into with WAI. And it's designed in a way that it's easy to plumb these things together. I would actually claim that Haskell makes it really easy to be able to plumb these things together, because it makes it so natural to compose different things like that. This is probably not the interface you're going to be using on a daily basis. You're probably gonna be using Yesod, Servant, Scotty, something else instead. But if for whatever reason you have to go down to this level, don't be afraid of it. Embrace it, use it, it's fine. And that's it, so thank you everyone. So the question was: what are the common use cases where you would need to go down to that level?
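The main function with the layered middlewares described above might look like this. The middleware names (`autohead`, `logStdoutDev`) are real exports from wai-extra; `peopleApp` here is a stub so the sketch compiles on its own, standing in for the router from earlier in the talk.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Concurrent.STM (TVar, newTVarIO)
import qualified Data.Map as Map
import Data.Text (Text)
import Network.HTTP.Types (status200)
import Network.Wai (Application, responseBuilder)
import Network.Wai.Handler.Warp (run)
import Network.Wai.Middleware.Autohead (autohead)
import Network.Wai.Middleware.RequestLogger (logStdoutDev)

main :: IO ()
main = do
  peopleVar <- newTVarIO Map.empty   -- empty people map in a TVar
  run 8000                           -- Warp, listening on port 8000
    $ logStdoutDev                   -- middleware: log each request
    $ autohead                       -- middleware: answer HEAD via GET
    $ peopleApp peopleVar

-- stub standing in for the real router
peopleApp :: TVar (Map.Map Text Int) -> Application
peopleApp _ _req send =
  send $ responseBuilder status200 [("Content-Type", "text/plain")] "stub"
```

Because middlewares are just functions from `Application` to `Application`, layering them is ordinary function composition, which is the composability the talk claims Haskell makes natural.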
Okay, the most common use cases, and this is gonna sound silly, the most common use cases where you have to go down to this level are when you're writing a web framework. Outside of that, you probably don't need to do this. Theoretically, if you're gonna be writing something, like, you know, you're gonna be writing a website and you wanna work directly with multiple frameworks, there's another example. If you wanna write a middleware and you want to be able to work with multiple different frameworks, that's an example. Or if you have an application written, and this is a real thing you can do: let's say you have a Servant application and you want to go ahead and serve that as part of a bigger Yesod application. You'd go ahead and take the Servant application, convert it into a WAI application, and then you would serve it from inside Yesod using some of those pieces of functionality. So you can drop down to that interface in those kinds of cases. There's also the fact that Yesod has overhead. I assume Servant has some overhead also, because it's doing extra things. If you've got a use case, or if you just wanna, you know, put together some benchmarks to prove how fast you actually are, maybe you're gonna wanna do that as well. So for normal day-to-day applications, let's say I go to the office and I need to start working on a web app for the first time, I should use the mature frameworks, maybe like Scalatra or Spring Boot? I would definitely say use a mature framework. Not only are you going to bypass a lot of paper cuts and possible security bugs, you're gonna skip ahead of all of these bits of plumbing that you're gonna have to figure out. A problem that we have often, and it keeps coming up in the Haskell community, is that everyone really likes to reinvent wheels. So a lot of people say, no, I can't be bothered with a framework, it's gonna take me like 10 minutes to learn the framework, and I gotta get started right now.
And then they spend two days, three days, five months, years, generating what essentially turns into their own framework that they're building for this application. I've seen it happen multiple times. Right, one more question. So there was a line written over here which says pid1 provides zombie prevention. So what kind of zombies are we talking about? Are we talking about those stray Java processes that form during deployment? Because that's the only experience I have personally of zombies. So are we talking about that? So I am not talking about real-life zombies, that's for sure. So the zombies, that was the pid1 thing, since we have a little bit of extra time. Zombies are processes in Unix where the parent process is not reaping them. Every time a process dies, the parent process is required to call waitpid. That's what you're talking about, right? Yeah, so the problem is that if a child gets orphaned, if the parent no longer exists, so the parent dies, these are horrible terms, but those are the actual terms that everyone uses. So if a child gets orphaned, the parent's gone, then the way that Unix works is the init process, the PID 1 process on the system, is required to do the reaping. Now what does this have to do with Docker? Inside of a Docker image, if you ever run ps, you'll see that whatever thing you ran first is PID 1, and that's just the way that the cgroups under the surface work. Docker does nothing to help you with this. So let's say that you run Bash as your PID 1. Well, Bash is not designed to reap zombies. So you start running these things, you start getting these zombie processes, you've got these things piling up, and nothing's gonna clear them out. That's a problem for a long-running application. Also, PID 1 is special in another way: it responds, I don't remember if it's SIGINT or SIGTERM, it responds to one of them differently than every other process, and therefore, oh right, so Ctrl-C doesn't work.
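The reaping behavior described above can be sketched in a few lines. This is my own minimal illustration (not FP Complete's pid1 code) of the loop a PID 1 process needs: repeatedly wait on any child so terminated processes don't linger as zombies. Unix-only, using the `unix` package; a real pid1 also installs signal handlers, forwards SIGINT/SIGTERM to its children, and handles the case of having no children at all.

```haskell
import Control.Monad (forever)
import System.Posix.Process (getAnyProcessStatus)

reapLoop :: IO ()
reapLoop = forever $ do
  -- Block until any child changes state, then collect its exit status;
  -- this is the waitpid call the talk describes. Throws if there are
  -- no children, which a real init process must handle.
  mStatus <- getAnyProcessStatus True False
  case mStatus of
    Just (pid, status) -> putStrLn $ "reaped " ++ show pid ++ ": " ++ show status
    Nothing            -> pure ()
```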
And that confuses people a lot as well. So we use pid1 as the base image for a lot of things at FP Complete. I like using it because it provides an entry point in Docker that provides a PID 1 process that sits there and does the reaping and handles Ctrl-C correctly.