I present to you a project I've been working on, Expressions. So, a little bit of history on this project. About a year ago, I was working at a company called WhitePages, and I got the opportunity to work on the redesign of a framework we called the plan execution framework. Basically, we'd receive a request, and then we needed to send a whole bunch of requests to back-end services in order to build a response for the user, and obviously we do this asynchronously. This old framework had a bunch of problems; it wasn't really maintainable, so I jumped in to redesign it. One problem was that it was untyped. The idea was to have a declarative way of specifying how to resolve a request: you tell it what needs to happen, and it does it for you. But in practice, you'd give it dependencies, like "in order for this activity to run, you need these activities." Basically, it was a bit of a nightmare. So when I came in, I really pushed hard to figure out: what is this framework actually doing for us? Why do we need this complicated thing to do asynchronous code? In Scala we have this nice abstraction called Futures, which are easily composable, so I pushed hard to try to use that instead. And I ended up being mostly right: we were able to get rid of the framework and just use Futures. But there were a few things the framework was giving us that we couldn't get with Futures, and by far the biggest one is futures that fail fast. What do I mean by failing fast? If we look at the code here, we have three futures. The first one sleeps for one second and then returns a valid value. The second one sleeps for five seconds and then returns a valid value. And the third one sleeps for three seconds and then fails. A typical way to combine Futures is a for comprehension.
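The setup described above can be sketched in plain Scala like this (the names `fa`/`fb`/`fc` and the exact values are my stand-ins, not the code from the slides):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Try

// Three futures: success after 1s, success after 5s, failure after 3s.
val fa = Future { Thread.sleep(1000); "a" }
val fb = Future { Thread.sleep(5000); "b" }
val fc: Future[String] = Future { Thread.sleep(3000); throw new Exception("boom") }

val start = System.currentTimeMillis()
val combined = for {
  a <- fa
  b <- fb
  c <- fc
} yield a + b + c

// The comprehension sequences the waits, so fc's failure is only
// observed after fb completes: we fail at ~5s, not ~3s.
val outcome = Try(Await.result(combined, 10.seconds))
val elapsedMs = System.currentTimeMillis() - start
```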
And if you write this for comprehension and run it on your computer, you'll notice that it does not fail after three seconds; it fails after five seconds. Failing fast was something the old framework was doing for us, and it was an absolute requirement. So I was disappointed; I didn't expect this from Scala at the time, and I spent a whole lot of time trying to figure out why it doesn't fail fast. By the way, an interesting thing: I'm going to explain why it's fundamentally impossible to fail fast with for comprehensions, but vanilla Futures, even without a for comprehension, writing the maps and flatMaps directly, don't fail fast either. There's no fundamental reason they can't; the current implementation just doesn't do it. And as an interesting note, Twitter Futures do fail fast when you use map and join. So why am I saying that a for comprehension fundamentally cannot fail fast? Those of you who are savvy Scala programmers know that a for comprehension desugars into calls to flatMap and map; this is the resulting expression from the previous for comprehension. Now look at the signature of flatMap: you give it a lambda from A to Future[B], and it produces a Future[B]. If you're the person implementing flatMap, you don't have access to the Future[B] until the Future[A] has completed and given you the A; only then can you call into the user's function and get the Future[B]. So it's fundamentally impossible for you to inspect the Future[B] to see if it has already failed before the A completes. Is that clear? Any questions on that?
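To make that flatMap point concrete, here's a small demonstration of my own (not from the slides): even with flatMap written out by hand, an already-failed future can't surface until the left-hand future completes, because the callback never even receives it before then.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Try

val slow = Future { Thread.sleep(2000); 1 }
val alreadyFailed = Future.failed[Int](new Exception("boom"))

val start = System.currentTimeMillis()
// flatMap's callback doesn't receive `alreadyFailed` until `slow`
// has produced its value, so nothing can inspect it any earlier.
val combined = slow.flatMap(a => alreadyFailed.map(b => a + b))
val outcome = Try(Await.result(combined, 5.seconds))
val elapsedMs = System.currentTimeMillis() - start
```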
Okay, so let's look at another function, say zip. zip is defined on Future[A], takes a Future[B], and produces a Future[(A, B)]. In this case the implementation has access to both futures, so it's possible to fail fast. We could rewrite the code we saw previously with zip, and now it behaves the way we want it to. Well, actually, in vanilla Scala it doesn't, but it could. As a side note, some of you might be wondering about scala/async, which is based on the async/await language feature from C# and lets you deal with futures: does that fail fast? Does that give us the properties we're interested in? It turns out it doesn't, and I don't have an explanation for why; async is a complicated beast and, as far as I'm concerned, a big black box, but I've tested it and it definitely doesn't fail fast. We had a fairly limited amount of time to solve this problem, and we did solve it: we had futures that were failing fast, even if it wasn't the perfect solution. I gave a talk at Pacific Northwest Scala on how we solved it, making heavy use of shapeless and Scalaz. But I wasn't satisfied with our solution; it was the best we could come up with in the time we had. The reason is that I strongly believe for comprehensions are a nice syntax, a higher-level syntax that lets you reason about asynchronous code almost as if it weren't asynchronous. You extract values from the context, deal with them as if they weren't futures, and the result magically becomes a future again. Well, it's not magic, it's very formal, but it's a higher-level abstraction. So it was annoying to me that I couldn't use a for comprehension for this and instead had to directly use sequence and flatMaps and all that.
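Vanilla `zip` doesn't fail fast, but since zip's signature hands the implementer both futures up front, a fail-fast version is possible. Here's a minimal sketch of my own, using a `Promise` and the standard-library `Future` (not the talk's implementation):

```scala
import scala.concurrent.{Await, ExecutionContext, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.Try

// A zip-like combinator that completes as soon as EITHER input fails.
def failFastZip[A, B](fa: Future[A], fb: Future[B])(
    implicit ec: ExecutionContext): Future[(A, B)] = {
  val p = Promise[(A, B)]()
  fa.failed.foreach(p.tryFailure)  // fa's failure wins the race ...
  fb.failed.foreach(p.tryFailure)  // ... or fb's does ...
  p.completeWith(fa.zip(fb))       // ... otherwise both successes arrive.
  p.future
}

val slow = Future { Thread.sleep(2000); 1 }
val fastFail: Future[Int] = Future { Thread.sleep(200); throw new Exception("boom") }

val start = System.currentTimeMillis()
val outcome = Try(Await.result(failFastZip(slow, fastFail), 5.seconds))
val elapsedMs = System.currentTimeMillis() - start
```

The failure here surfaces after roughly 200 milliseconds instead of two seconds.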
So at that point I started working on a project that I'm tentatively calling Expressions, which I describe as an alternative to for comprehensions. What's the high-level view of its features? First, it uses the least powerful interface; I'll go into more detail on that, but it basically means we can fail fast using this notation. Second, it plays well with if and match statements. If you go to the scala/async GitHub page, you'll see that one of its strong arguments is that for comprehensions start to break down when you have a bunch of if and match statements; they weren't really designed to interoperate with them. Third, it's a unified notation: async/await, for instance, only works with Future, whereas this notation works with Scala's Future, with Scalaz Task, with Option, with any monad, basically. And finally, it's customizable. If you want it to fail fast, you provide a type class instance that does that; if you don't, you provide one that doesn't. You can customize it the way you want. So let's dive into a few examples to see what it looks like. First, failing fast. Here we have the same sort of situation as before, with three futures; I shortened the notation here. The first one waits for five seconds, the second one fails after one second, and the third one waits for three seconds. The first example is our good old for comprehension. This is going to fail after five seconds, because it needs to wait for the first future, and only once the first one has completed does it discover that the second one failed. Compare that to the expression at the bottom, which is not only more succinct.
Also, it's configurable: you can use either implicit extraction, like here, or explicit extraction. Some people might be scared that a is implicitly being extracted into what's inside the context, and for them there's explicit extraction; I just wanted to keep the code succinct. The important point is that by using an expression here, we fail at one second, and it's possible to do that. And, sorry, this next slide is a little loaded, but it's not too complicated; we can go through it. Interacting with if: we have a similar situation again, although in this case none of the futures fail; they just wait one second, five seconds, and two seconds. In the first case, we're using our for comprehension. What happens here is: say a is equal to "something"; then we call this polish function on bb, which is b, and if a is not equal to "something", we call polish on c. If we're using a for comprehension, we need to wait for a, b, and c; that's just how it works. But to me, and I think to most people writing run-of-the-mill asynchronous code, if a completes before the other two, and it ends up being either equal to "something" or not, at that point we can just ignore b or ignore c, right? We don't care about that result anymore; we're not going to use it. So unfortunately, with vanilla for comprehensions, in both cases here we need to wait five seconds. In the second case, we're still using vanilla for comprehensions, but we nest them, and that gives us the behavior we want.
And at the end, we need to flatMap identity, which would be equivalent to flatten (though I don't think that's defined on Future), because by nesting we end up with a future of a future of something, and we need to flatten that into a single future. But we can get rid of all this notation and still get the property we want by using the expression at the bottom, which automagically does the right thing. I say automagic because it looks automagic and you can use it like it's automagic, but what it does is actually quite formal, and it's easy to reason about why it behaves the way it does. Any questions up until now? Okay. Another example is using other abstractions. Here I have a whole bunch of them: Option, Error, Writer, Task, IO, List. List is an interesting one, because List is not typically used this way, so it would be a little weird to use lists here as if they were monads. However, you can also use this notation for applicatives, and List can be considered an applicative; the intuition is that it's like a nondeterministic computation. Basically, if you have a list of three values and you add it to another list of three values, you end up with, depending on how you instantiate your type class, either the sums of all the combinations, or the zip-list instantiation. It can be useful when you're working with errors or things like that to write a computation as if it were over one value, when it's actually over lists of values, and you end up with a list at the end, which is the computation over the whole list. But the important point is that no matter what the abstraction, you can use this notation to peel off the context, deal with the things inside as if they were just the things, and then get the result back in the context at the end. So how does it work? It's based on Scalaz.
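The nested-comprehension workaround plus `flatMap(identity)` can be sketched like this (the futures and the `polish` helper are my stand-ins for what's on the slide):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def polish(s: String): String = s.trim  // hypothetical helper

val fa = Future { "something" }
val fb = Future { " b " }
val fc = Future { " c " }

// Nesting means only the branch we actually take is awaited, but
// the result comes out as a Future[Future[String]] ...
val nested: Future[Future[String]] =
  for (a <- fa) yield {
    if (a == "something") for (b <- fb) yield polish(b)
    else for (c <- fc) yield polish(c)
  }

// ... which flatMap(identity) collapses back into a single Future.
val flat: Future[String] = nested.flatMap(identity)
```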
Scalaz defines a hierarchy of type classes: there's Functor, Apply, Applicative, and Monad, essentially modeled on the type classes you'd find in Haskell. Functor has map; we don't really use Functor all that much. Apply is what's really interesting here: if you look at the signature of apply2, it's very similar to zip, it's basically zip, and apply2 is what's going to enable us to fail fast. And you can derive map from apply2. Applicative, in Scalaz, just extends Apply and adds point, whereas in Haskell, Applicative provides both pure (point) and apply itself; anyway, not super important. Then Monad extends Applicative and provides bind, and bind is basically our flatMap, which cannot fail fast. You can derive apply2 from bind, but if you do that, you won't be able to fail fast: you're losing flexibility in the implementation of your function. What's really cool about this notation is that it's implemented in a pretty straightforward way. Here we have a basic code expression: we're calling foo, and we're extracting a and c, because a and c are futures, and foo is defined over two strings. What that translates to is a call to Applicative.apply2(a, c), and then we stick in foo(_, _). In this case, writing it out manually actually seems simpler than using an expression; you start seeing gains when the expressions get longer. But what's really interesting is that there's no magic going on: it's just rewriting things into stuff you could write manually. And here I just want to talk a little about the importance of using the least powerful interface. In this case, we could have used bind (flatMap) to implement this, because bind is a more powerful function than apply2: you can express anything with bind.
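Here's a rough sketch of that desugaring target, with a hand-rolled `Apply`-like trait standing in for Scalaz (the names follow Scalaz, but this is not the real library; the actual macro targets Scalaz's type classes):

```scala
// Minimal stand-in for Scalaz's Apply; apply2 is the zip-like primitive.
trait MiniApply[F[_]] {
  def apply2[A, B, C](fa: F[A], fb: F[B])(f: (A, B) => C): F[C]
}

// Instance for Option, the simplest context.
implicit val optionApply: MiniApply[Option] = new MiniApply[Option] {
  def apply2[A, B, C](fa: Option[A], fb: Option[B])(f: (A, B) => C): Option[C] =
    for (a <- fa; b <- fb) yield f(a, b)
}

def foo(x: String, y: String): String = x + y

val a = Option("a")
val c = Option("c")

// foo(extract(a), extract(c)) desugars to roughly:
val result = implicitly[MiniApply[Option]].apply2(a, c)(foo(_, _))
```

For a fail-fast Future instance, `apply2` would be implemented along the lines of the fail-fast zip shown earlier.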
But like I said, with bind you can't fail fast. So it's a trade-off: if you're using bind, there are more things you can express as a user, but as the implementer of bind, you have less flexibility. If you're using apply2, there are fewer things you can express on the user side, but as the implementer of apply2, you have more flexibility. That's just the trade-off. So the idea of this notation is to use the least powerful interface necessary to translate the sugared code into desugared code, allowing maximum flexibility for the people implementing the instances of your type classes to provide the glue code, the functionality, that they want to provide. Here's another example. In this one, Monad is required: we have this foo function operating over strings, and a bar function that takes a string and returns a future of string. So we end up having to extract a to pass it to bar, and then extract again from the result of bar to pass to foo. There's no way to do this with apply2. In this case, the notation is smart enough to detect, if you will, that it needs Monad, and it uses bind to implement it. And here's an example with an if statement, just to show how simple the transformation is. We have if (extract a) then extract b else extract c, and it translates to a bind over a, and then if a equals something, produce b, else c. There's also the case of a match statement, which is very similar to the if statement; the translation is pretty straightforward. It does get interesting when match statements get a little more complicated. Here's an interesting case. Expressions support blocks: here we have a block where we're extracting foo, which is a future, and assigning it to fooY, so we can now imagine that fooY is an actual string. Then we're extracting bar and matching over it.
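The if translation just described amounts to this (sketched with plain Futures; the macro itself would go through the Monad type class's bind):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val fa = Future { "something" }
val fb = Future { "b" }
val fc = Future { "c" }

// if (extract(fa) == "something") extract(fb) else extract(fc)
// translates to a single bind (flatMap) over fa:
val result: Future[String] =
  fa.flatMap(a => if (a == "something") fb else fc)
```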
And then, I don't know how many of you are familiar with this notation; I actually learned it not that long ago. It's stable identifiers in pattern matches. With this fooY in backticks, instead of binding bar's value to a fresh fooY, we're actually checking whether bar is equal to fooY, and you do that by putting backticks around it. By default, if you just put an identifier there, it binds to the matched value; it doesn't compare against the value in scope, even if fooY is defined up above. Anyway, in this case, we want to check that they're the same: if they are, we extract b, else we extract c. And here we're assuming it's either going to be fooY or two. The translation there is a little small, and you probably can't read it, but basically things start to get a little hairy when you need to do this. Anyway, I'm a little short on time, so I won't go into too much detail; feel free to ask questions later. It can get really hairy in this case, where you have two of them, and that's where you start seeing the advantage of Expressions: I believe it's easier to reason about the code above than about the equivalent manual code you'd have to write yourself to get the desired behavior. And as a side note, as the implementer of this expression macro, match statements are definitely the hardest thing to reason about; they're where all the complicated cases are. So, there are some similar projects out there, which is comforting, because hopefully it means I'm not going down a rabbit hole. Effectful is a really interesting one; its stated goal is to generalize scala/async's async/await, so in a way it's very similar to my project. Unfortunately, it does not use the least powerful interface: like for comprehensions, it always uses bind (flatMap), so it can't fail fast.
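A tiny, self-contained illustration of stable identifiers in pattern matches (my own example):

```scala
val fooY = "foo"

def check(bar: String): String = bar match {
  case `fooY` => "same" // backticks: compare against the fooY in scope
  case other  => other  // bare identifier: binds whatever value arrives
}
```

Without the backticks, the first case would shadow `fooY`, match everything, and the second case would be unreachable.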
And failing fast was my main motivation at the beginning. Possibly, if I had known about Effectful when I started working on my project, I would have tried to find a way to do this in Effectful, but by the time I discovered it I was pretty far along, so I decided it was easier to keep working on my own code base. Scala Workflow is another really interesting one. It's been around for a while, though it's unmaintained now. It's super featureful: it does all of this plus a bunch more, like nested abstractions and manipulating the context from within the notation, all this really cool stuff. But it's based on untyped macros, which are now deprecated in Scala. And because it's based on untyped macros and receives an untyped AST, it has to do a ton more work, and I'm not convinced all that extra work is necessary. Because it has to do so much work to provide the same functionality, it ends up supporting a very limited subset of Scala. I was actually having a conversation with Eugene the other day, who knows the guy behind the project, and he was saying the author ended up having to almost re-implement Scala's scoping rules within his macro, which is a little crazy. And then there's async/await, which a lot of people know about. It's based on C#, where this feature is baked into the language, which is unfortunate in a way, and scala/async tries to provide it in Scala with macros. There's actually a SIP to include it in the standard library or something. That scares me a little, because I've looked at the implementation, and, I don't know: it only works with Scala futures, not with Scalaz futures, not with Twitter futures, not with any of the other abstractions, and it doesn't fail fast. I don't really see what it has going for it.
One could argue that because it's specialized to this one implementation of futures, it could theoretically exhibit superior runtime performance, but I don't know if that's actually the case. So, in terms of my own project, what are the known limitations, and what is known to work? I have a pretty extensive test suite; this is a really fun project to test, actually, because you can write ScalaCheck properties that are really generic. Function application is known to work well, as are if/else statements, function currying (surprisingly enough, I actually spent time on that), string interpolation, blocks, and basic match statements. Where I know it starts to break down currently is in pattern matching and value definitions, because those are actually sugar that the Scala compiler transforms into match statements, and complicated match statements tend not to work all the time, partly because I just haven't had the time, and partly because Scala macros in their current form are a little hard to reason about and sometimes just really hard to work with. So, when should you use Expressions? By the way, any questions? [Audience question about cancelling futures.] That's a separate, really interesting problem. Twitter futures can be cancelled. The problem is that stopping a thread on the JVM is deprecated: the capability used to exist, but they weren't able to get it right, so they deprecated it. So actually stopping a thread is not really possible, although I think recently, around Java 8, they may have started revisiting that. What Twitter futures do is, whenever you flatMap, they insert their own logic into the flatMap to say: okay, we flatMapped, and this boolean flag was flipped to true, so let's not execute the continuation.
But if you stick a database access or some long computation inside a future and you never flatMap over it, there's just no way to cancel it, and that's a JVM limitation. So unfortunately, this project doesn't address that. [Audience follow-up.] The problem is that on the JVM right now, unless you put a whole bunch of boolean checks inside the thread that's executing the work, you can't kill it from the outside; that's just a JVM limitation. On some other runtime, of course, if the runtime allows you to cancel a thread, you could totally expose that through a Future-esque API, sure. Okay, I'm running out of time, but let me give an intuition for when you'd want to use this project. If you're writing, and this is a really strong use case, a whole bunch of asynchronous code, you have a whole bunch of futures, and you just want to compose them in a sensible way without getting bogged down in the details of how to combine them, then it's great. Here, we're calling these services: lookupPhone, lookupAddress, lookupReputation. We don't even need to think about the dependencies. lookupAddress depends on the phone, so we obviously need the result of lookupPhone, but lookupReputation doesn't depend on the address; it also depends only on the phone. By using an expression, you can think of it as analyzing your dependencies and doing the right thing: it just works in the most sensible way you'd want it to. Another interesting use case is when you have a large chunk of code, like this one using Rapture JSON. Any time you grab a JSON value with Rapture JSON, it's returned as an Option, because it might not exist.
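To make the dependency story concrete, here's roughly the shape of code you'd want generated for this example, sketched with plain Futures (the service names come from the slide; the bodies are my dummies): address and reputation both depend only on the phone, so after the phone lookup they can run in parallel.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Dummy stand-ins for the back-end services on the slide.
def lookupPhone(name: String): Future[String]    = Future { s"555-$name" }
def lookupAddress(phone: String): Future[String] = Future { s"addr($phone)" }
def lookupReputation(phone: String): Future[Int] = Future { phone.length }

// One bind for the phone, then the two independent lookups in parallel.
def contactInfo(name: String): Future[(String, Int)] =
  lookupPhone(name).flatMap { phone =>
    val addr = lookupAddress(phone)    // started ...
    val rep  = lookupReputation(phone) // ... concurrently
    addr.zip(rep)
  }
```

A for comprehension over the three lookups would instead chain all of them sequentially.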
So you have all these Options, so many Options, and you just wrap the whole thing in an expression, with a match statement and if statements, and it looks like normal code, and in the end it returns you an Option of the thing. Quickly, I want to mention that this is not a replacement for for comprehensions, and I have a really good example. I work on remotely at Verizon; you should check it out, it's a really cool RPC system in Scala. In our code base, we have this function, time, which takes a Task (Scalaz's equivalent of Scala's Future) and returns a Task with the same A, but that also carries the duration of how long the task actually took to execute. Our implementation uses a for comprehension: we call Task.delay with the current time and assign that to t1, then run the task and assign the result to a, then take the current milliseconds again as t2. Because for comprehensions use flatMap, we can be assured that we check the time, then execute the task, and then, once it's done, check the time again, so t2 minus t1 really is the elapsed time. But say you used an expression here. Even though an expression is a formalized thing, you can think of it as going: "You're not even using a when you call Task.delay, so I can parallelize that; I can call Task.delay right away," and then you're not going to get the desired result. So the way I think of it is that for comprehensions are the more low-level tool: if you actually need to sequence things in a very precise way, use a for comprehension, but if you just want to stick things together in the normal way, Expressions is a higher-level abstraction. Future work includes moving to scala.meta; there's a whole bunch of things, so if you're interested, feel free to come and see me. I think I'm going over time here.
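The `time` combinator in remotely is written against scalaz Task; as a sketch of the same sequencing argument using standard-library Futures (my own approximation: the task is taken by name so nothing runs before the first clock read):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Future-based sketch of remotely's `time` (the original uses scalaz Task).
def time[A](task: => Future[A]): Future[(Long, A)] =
  for {
    t1 <- Future(System.currentTimeMillis()) // read the clock first ...
    a  <- task                               // ... then run the task ...
    t2 <- Future(System.currentTimeMillis()) // ... then read it again
  } yield (t2 - t1, a)

// flatMap guarantees the ordering, so t2 - t1 is the real duration.
val timed = time(Future { Thread.sleep(300); 42 })
```

An apply2-based translation would be free to start all three steps at once, which is exactly why precise sequencing like this still wants a for comprehension.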