The second edition came out two years later, in December 2010, and it covers Scala 2.8. We had hoped there would be a new edition in December 2012, and then another one now, in December 2014. But unfortunately, no new editions of the book were published. I met Bill Venners three months ago in San Francisco, and I asked him about the third edition. He told me that they have plans, but they just don't have time. But a new edition of the book is long overdue because of all the developments that have happened in Scala in the last four years; we had three major versions. So, just a quick overview of the Scala timeline. The first release of Scala was in 2003. Then in 2004 we hit Scala 1.0. A few more releases, then in 2006 Scala 2.0 was released, and it was a very important release because the Scala compiler was now written in Scala, so we hit the dogfooding landmark in 2006. More releases in 2007, and Lift, the web framework, was released in 2007. It's amazing that a project as popular and as big as Lift is so old; as far as I know, it's the oldest open source Scala project. In 2008 we had Scala 2.7, and finally, in 2010, Scala 2.8, which was a pretty big release. Also in 2010, Play 1.1 added support for Scala via a plug-in, so you could use Scala with Play. The Akka project was released in 2010. In 2011 we had Scala 2.9 and the creation of Typesafe, the company. Finally, in 2012, we got Play 2.0, which was rewritten in Scala, so now it was native; it was not just a plug-in for a Java framework anymore, and now Java was the second-class citizen, not Scala. In 2013, Scala 2.10, which was a huge release, I believe even bigger than 2.8. Just this year we had Scala 2.11. And then in 2016 we're going to have Scala 2.12, the next planned release of Scala. So, just a very quick overview of what was new in Scala 2.8, in case you only have the first edition of the book.
It had a huge number of bug fixes and improvements and an impressive amount of new features. A redesigned collection library: the infamous CanBuildFrom, which was called Scala's "suicide note", comes from 2.8. We got named and default parameters, which give us the copy method on case classes, which is very useful. Package objects, nested annotations, type specialization — there was a talk yesterday about this, and it started in 2.8. JavaConverters was also discussed yesterday. The REPL was improved with tab completion and searchable history. We got Scaladoc 2, with a new look and feel, and you can use wiki syntax for your comments; you don't need HTML tags in Scaladoc anymore. And finally, binary compatibility between minor versions: 2.8.0 is binary compatible with 2.8.1 and 2.8.2, but not necessarily with 2.9. It's something: at least they made this compromise and asserted that inside a major version, releases would be binary compatible. So what's new in 2.9, 2.10, 2.11? And if it's not broken, 2.10 is going to fix it, because it was really big, really. First one: DelayedInit and the App trait. If you take any Scala book or blog tutorial, you're probably going to see something like `object Hello extends Application`, and then you can just write anything you want there. There is no static main method — I don't even remember the signature; `public static void main`? Who has used Java here in the last few months? I haven't, so I can't even remember the signature for the main method anymore. Anyway, you don't have to use it; you just extend Application. But this has problems: it's not thread safe, and it's not optimized by most JVMs. So since 2.9 we have the App trait, which replaces the Application trait. There's really nothing to change here, but you do gain access to the command line arguments: there's an `args` val that you can use, okay?
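A minimal sketch of the App trait version, assuming the talk's sum-to-four-million loop (the object name `Sum` is my own; the deprecated `Application` variant is not shown):

```scala
// Sketch: the App trait wires up main for you; no explicit
// `def main(args: Array[String])` is needed, and `args` is available.
object Sum extends App {
  var total = 0L
  for (i <- 1 to 4000000) total += i  // the loop benchmarked in the talk
  println(s"sum = $total")
}
```

Running it prints the sum; the old `Application` version would look identical apart from the trait name, but its body would run inside object initialization rather than a real main method.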
I said this is not optimized by the JVM, so let's see if it really makes a difference, and how much of a difference. I created a very simple application: I'm going to sum all the numbers from one to four million. I do it twice, just to make sure that if there's some JVM optimization kicking in, we can have a look at that. And then the same thing, but using App instead of Application — it's the only thing I changed here. When I run the Application version, it takes about seven seconds to sum all the numbers. But when I use App, it's just seven milliseconds. So it's three orders of magnitude faster, okay? It's a huge difference. But you may say — let me go back here — I'm using a for loop, and we know that for loops were not efficient in Scala. There was this famous developer who was working for a startup that Microsoft bought, and an email he sent to Typesafe was leaked where he listed a lot of problems with Scala. One of the things he complained about was that for loops are very inefficient: you should not use for loops, you should use while loops. It became very famous a few years ago. So let's see if there's any difference if I use a while loop instead of a for loop, okay? Again, I'm using Application here and then I change it to App, no other change, and I sum four million numbers. Those were the results I had for the for loop, and now here are the results for the while loop. You can see that with the Application trait there's a pretty big difference, from seven seconds down to 40-45 milliseconds. But with the App trait there's basically no difference at all; the difference that does exist has no statistical significance. Another new feature in Scala is that Range.foreach was optimized in 2.10. Paul Phillips, who committed this feature, said that code like `(0 to 10).foreach`
is now just as fast, maybe even faster, than a while loop. So don't be afraid of using for loops in Scala anymore; they're just as efficient as while loops, okay? Parallel collections. Parallel collections were the big feature in Scala 2.9 — the headline feature. They make parallel programming really straightforward. It's too easy; it's embarrassingly easy to use parallel collections. They're efficient and transparent. Two methods were added to, not most, but many collections: `.par` and `.seq`. You just invoke `.par` and you get back a parallel version of your sequential collection, and you use it in the same way. For some collections `.par` is constant time, but that may not always be the case, so there may be a copy; when it's possible, though, both the parallel and the sequential versions share the same data structure. And converting from parallel back to sequential is always a constant-time operation. These are the collections you can use: Array, Iterable, Map, Range, Seq, Set, TrieMap (new in 2.10), and Vector, okay? But it's not all free. You have to be aware of how they work, because the semantics are out-of-order execution, so you do not have a guaranteed order, which means your functions may be applied to different elements in different orders. And side effects are a no-no; they're going to lead to race conditions for sure. And you cannot use non-associative operations, because they will also lead to non-determinism. Non-commutative operations, on the other hand, are deterministic. So what do I mean by that? Something that is both associative and commutative: addition. (1 + 2) + 3 is the same as 1 + (2 + 3), which is the same as (2 + 1) + 3. So it's both associative and commutative, and this works with parallel collections. String concatenation, on the other hand, is associative but not commutative.
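The associativity point can be sketched in a few lines (a small example, not the talk's 50-million-element benchmark; note that since Scala 2.13 `.par` lives in the separate scala-parallel-collections module, so this assumes a version where it ships built in):

```scala
// Sketch: .par gives a parallel view of a sequential collection.
val nums = (1 to 1000).toVector

// Addition is associative and commutative: parallel reduce is safe
// and always agrees with the sequential result.
val seqSum = nums.reduce(_ + _)
val parSum = nums.par.reduce(_ + _)

// Subtraction is NOT associative, so a parallel reduce of it is
// non-deterministic -- don't do this:
// nums.par.reduce(_ - _)
```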
So ("a" + "b") + "c" gives the same result as "a" + ("b" + "c"), but it's not the same result as "b" + "a" + "c". But even though it's non-commutative, this still works with parallel collections. What will not work is something that is not associative, like subtraction. It just so happens that subtraction is also non-commutative, but that doesn't really matter; it doesn't work because it's not associative. So let's see if they really do help. I have here a sequential application that fills a vector with 50 million random numbers, and I have a target; just to give the processor some work, I compute the square root of each of the 50 million numbers and check whether any of the square roots meets my target. This is the sequential version and this is the parallel version. Can you see the difference? Okay, just four keystrokes: `.par`. It's all that I changed. And how much did I gain from those four keystrokes? When I run the sequential version, it takes about 400 milliseconds on my quad-core machine. I add `.par` and it goes down to 100 milliseconds. So I challenge any one of you here to do better with fewer keystrokes than that. It's the most bang for the buck you can get: four keystrokes and a four-times speedup on your code. It's amazing. Generalized try-catch-finally. In 2.9, you can define and reuse exception handlers. Try and finally accept any expression — any Scala expression can be passed to try and finally — and you can create a partial function from Throwable to T and just put that function in the catch. So an example makes it easier to understand. I define this handler here; it's a partial function and it just prints the exception, but it could do something meaningful. And then I can try `1 / 0` catch the default handler, or try `"a".toInt` catch the default handler.
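A sketch of that reusable-handler idea (the handler name and the `-1` fallback are my own; since 2.9 the catch clause accepts any expression of type `PartialFunction[Throwable, T]`):

```scala
// A reusable exception handler: defined once, used in many catch clauses.
val toMinusOne: PartialFunction[Throwable, Int] = {
  case _: ArithmeticException   => -1
  case _: NumberFormatException => -1
}

val a = try 1 / 0     catch toMinusOne  // -1
val b = try "a".toInt catch toMinusOne  // -1
```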
So I can reuse that catch block over and over across my code base; I can define standard handlers. This is very nice, but unfortunately it's pretty useless, because in 2.10 we should not be using try-catch anymore. We should be using the Try monad, which is awesome, you know? It represents a computation that may result in an exception or return a successfully computed value. It's a monad, it's functional. Exceptions are not functional; Try is functional. It's used to perform operations without the need for explicit exception handling in all the places where an exception might occur. Just like Option has Some and None, and Either has Left and Right, Try has Success and Failure, so they are somewhat similar. And it will catch only non-fatal exceptions; system errors are still thrown, which is the expected behavior and probably what you want. (To a question from the audience:) I don't believe there should be a huge performance difference, because internally the Try monad has a try-catch block. There is no magic behind the scenes; it's just encapsulating the try-catch. So in terms of performance it should be the same, but I didn't test it. So it's a monad: you have map and flatMap operations, and you also have recover and recoverWith. Map and flatMap deal with Success; recover and recoverWith are the same thing for Failure. You have filter, getOrElse, toOption — a lot of convenience methods — and you can use it in your for comprehensions and everything. So let's see how you would use that. I have a function div that takes two strings and returns a Try[Int]. First I try to convert the first string to an Int. Then I try to convert the second string to an Int. And finally — there's no `try` there, you see, it's just a division, a divided by b. So what's going to happen?
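The div function just described might look like this (a sketch):

```scala
import scala.util.Try

// Two string-to-Int conversions, each wrapped in Try, composed in a
// for comprehension; the division itself needs no explicit Try.
def div(sa: String, sb: String): Try[Int] =
  for {
    a <- Try(sa.toInt)  // first string to Int
    b <- Try(sb.toInt)  // second string to Int
  } yield a / b         // plain division
```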
If I call div("a", "0"), I get a NumberFormatException for input string "a" — this is the result of the for comprehension. If I try div("1", "b"), I also get a NumberFormatException, for input string "b". Finally, if I do div("1", "0"), I get a different exception: an ArithmeticException, division by zero. Yesterday we had a talk showing that you should not be using Option in this kind of situation, because if you use an Option, you're going to get a None and have no idea what happened. With Try, you know exactly what happened and where; it gives you a meaningful result in case of a failure. Just to illustrate a few methods here. I can call toOption — yes? (Question: isn't the same true for Option, that if you have a Some and a None and compose them in a different order...) Okay. So it's not a monad, sorry. It looks like a monad, it almost behaves like a monad, but it's not a monad, okay? Sorry about that. Yeah, so one thing you can do with your Try is convert it to an Option, and then of course if it's a Success you get a Some, and if it's a Failure you get a None. You can call getOrElse, which is very convenient, so you can return a default result. And there is a get, and this is very important, because Try.get is actually pretty useful. It's not like Option.get — everyone knows that you should never use Option.get — but Try.get can be very useful. How? Well, if I call get on a Success, it just gives me back my result; we're fine. But if I call get on a Failure, it throws the exception that is inside the Failure. And how can this be useful, you ask me? Well, let's say you have a code base and your methods have a fixed signature — they implement an interface — and you cannot change that signature now, because you would have to refactor your entire code base. So what do you do?
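One answer, sketched (the method `parsePort` and its signature are my invention for illustration):

```scala
import scala.util.Try

// The old signature returns a plain Int; internally the body is written
// with Try, and the trailing .get preserves the old throwing behavior.
def parsePort(s: String): Int = {
  val result: Try[Int] = Try(s.toInt).filter(p => p > 0 && p < 65536)
  result.get  // Success: the value; Failure: rethrows, just like before
}

val p = parsePort("8080")  // 8080
// parsePort("oops")       // throws, exactly as the pre-Try version did
```

If the signature can eventually change to `Try[Int]`, the migration is just deleting the `.get`.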
When you're going to write a new class or a new method, you can write it using Try. You keep the method signature, you keep the return type, and at the end of your method body you put .get, so it behaves just like your other methods would behave without Try. And if someday you can finally change the method signature and refactor your code base, you just go to that method and delete the last line: instead of returning try.get, you just return the Try. So I think Try.get can be very useful as a way for you to migrate an old code base to the new way of doing things without changing the method signatures. It's a step in between, okay? Value classes, implicit classes, and extension methods. Implicit classes, also introduced in 2.10, are a more convenient syntax for defining extension methods. An implicit class has a primary constructor with exactly one parameter, just one. And this is an example: I have an implicit class A that takes an n, which is an Int, and I can define as many methods as I want, x and y. This is sugar for a class and an implicit method pair: the compiler will convert this to a class A and an implicit def A that instantiates a new A, okay? So it's just syntactic sugar for the way we are used to defining implicit conversions. Because of that — because a method with the same name is generated — a case class cannot be implicit, since a case class defines a companion object with an apply method, so there would be a conflict there in the namespace. And here, a very simple example: I define an implicit class IntOps with a method, stars, that produces a number of stars. I call 5.stars and I get back a string with five stars. So nothing unusual here, okay? Value classes: they are used to avoid object allocation — sometimes, not always. They give you the type safety of a custom data type but no runtime overhead. This can be very useful for things like temperatures. You don't want to define your temperatures as Doubles.
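The temperature idea sketched as value classes (the class names and the conversion method are mine):

```scala
// Value classes: one val parameter, extends AnyVal, methods only.
// At runtime each is represented as a bare Double -- no allocation.
case class Celsius(value: Double) extends AnyVal {
  def toFahrenheit: Fahrenheit = Fahrenheit(value * 9 / 5 + 32)
}
case class Fahrenheit(value: Double) extends AnyVal

val boiling = Celsius(100.0)
val f = boiling.toFahrenheit
// A method taking a Celsius can never accidentally receive a Fahrenheit.
```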
You want to define your temperatures as Celsius or Fahrenheit, so you don't mix them. Physical dimensions too: you don't want them to be Doubles, you want something more meaningful, like Weight and Height. Strings: you don't want to have a case class Person(firstName: String, lastName: String, email: String) — you can pass the arguments out of order, or else you have to use named parameters: firstName =, lastName =, email =. Just define a value class for FirstName, for Email, and you're good: you have type safety, and you're not going to put the email in the first name's place. The same is true for things like age, which should not be an Int, and so on. Okay, they take only a primary constructor with exactly one val parameter, and they can only define methods. They cannot have vars, vals, lazy vals, or anything nested there, just methods. They may not override equals or hashCode, and they cannot be extended by other classes. Okay, so just an example: I have a case class Age and it extends AnyVal. What makes a class a value class is that it extends AnyVal. It doesn't need to be a case class; in this particular case, I made it a case class. And this is how I use it: val age = Age(18). At compile time, age is of type Age, but at runtime it's an Int, just an Int. There's no object overhead here. And if I try to do age + 1, I get a type mismatch error, because this is not an Int, okay. Yes, extension methods. An extension method is when you combine a value class with an implicit class, and you get an allocation-free extension method — something very similar to what C# offers you. It's equivalent to having an object with a bunch of static helper methods. It's just a simple mechanical transformation performed by the compiler; there's no magic here, okay. So this is an example of an implicit class. I have an implicit class, I define a method, and when I call — are you okay?
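The slide's example, roughly (a sketch; implicit classes must be defined inside an object, class, or trait, so I wrap it in a hypothetical `Extensions` object):

```scala
object Extensions {
  // Sugar for: class IntOps(n) plus an implicit def IntOps(n) conversion.
  implicit class IntOps(val n: Int) {
    def stars: String = "*" * n
  }
}

import Extensions._
val s = 5.stars  // "*****"
```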
So when you call 5.stars, the implicit conversion kicks in, and this is equivalent to having, in the bytecode, new IntOps(5).stars. So this is the transformation that happens here with an implicit conversion. However, when I use an extension method — and the only difference between this code and that code is that now I am extending AnyVal; it's the same code, just now extending AnyVal — when I do that and I call 5.stars, the bytecode will be equivalent to having an object IntOps with a stars method, and pay attention that the stars method now takes an Int. Okay, I don't have a constructor anymore. Before, I had a constructor that would take a parameter. I don't have a constructor anymore; the method that didn't have a parameter before has a parameter now, and the bytecode is equivalent to calling IntOps.stars(5). So it's just calling a static method, a lot more efficient than creating a new object and calling an instance method on that object. Okay, so this is what happens with extension methods. String interpolation. String interpolation is also new in 2.10 — I told you, it would fix everything that was not broken. It's something that's quite common in dynamic and scripting languages: you can just use a dollar sign inside your strings and it will do the interpolation for you. The difference is that you have to put an `s` first. The `s` is the s interpolator, and it does the interpolation for you, which is very common in almost every scripting language out there. You can put in any expression you want, so you can interpolate 2 + 2. Okay, some scripting languages also do that. And it works with triple quotes, so you can have a multi-line string — and it's a shame that syntax highlighting is not working here, but, you know, hey, again, this is nothing uncommon; I've seen it in other scripting languages. So far, nothing really new here, but what's new, what's different in Scala?
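The basics just described, sketched:

```scala
// The s interpolator: substitution, arbitrary expressions, triple quotes.
val name  = "Scala"
val hello = s"hello, $name"      // simple variable substitution
val four  = s"2 + 2 = ${2 + 2}"  // any expression works
val multi =
  s"""line one: $name
     |line two: ${2 + 2}""".stripMargin  // triple-quoted, multi-line
```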
First, you have other interpolators. Oh — sorry, sorry, sorry: if you want to escape a dollar sign, you have to put two dollar signs. So if you want to print "US$2", you put three dollar signs: the first two dollar signs escape one dollar sign, and the third one does the interpolation. Just a quick note here. So what's different in Scala from what you usually find in scripting languages? For instance, you have the f interpolator, which formats your string. So I can interpolate math.Pi with a format and it gives me back 3.14. Again, not really awesome by itself. What is awesome is that the f interpolator is type safe, okay? If I pass a format like $pi%d, I get a type mismatch: it wanted an Int, I gave it a Double, and this happens at compile time. So you have type-safe format strings at compile time, which I have never seen in any other language — okay, I have never used Haskell; if I had, I would know that Try is not a monad — okay, okay, I do believe other languages do this. And what is even more awesome is that you can define your own string interpolators. Nothing in particular built in, but there are many libraries that define a sql interpolator, for instance. And how is the sql interpolator different from the s interpolator that Scala gives me? Well, it's going to escape your interpolated values, so there is no SQL injection attack. Okay? It does it for you. Awesome. You have JSON interpolators too — again, many libraries offer a JSON interpolator; yesterday we had a talk that showed one. And how is it different from the s interpolator? Well, it will take an object foo and serialize the object foo. It's not going to call foo.toString; it's going to serialize foo to JSON, so you can have nested objects. No? Why are you saying no? No, no, no. Okay. So this is what happens: it's not just calling foo.toString.
It's serializing foo and putting the JSON representation of foo in your JSON object. So there are many, many situations where string interpolation can be very useful. Okay, futures and promises. They appeared in 2.10 and were then backported to 2.9.3. Futures are a way to perform many operations in parallel, in an efficient and non-blocking, asynchronous way. A future is a placeholder for a result that does not yet exist, but which may become available at some point in the future. It offers you some callbacks, so when the future does complete, it will call some of your callbacks: you have onComplete, onSuccess, and onFailure. And those callbacks are going to be executed eventually. It doesn't mean they are going to be executed the moment the future completes, and the order is not deterministic. They may be called in any order, and actually they may even be called concurrently, you know, all at the same time. So if you have multiple callbacks, they may be fired in parallel, not in the order they were defined. Futures can be combined and transformed with map, flatMap — and now I'm really worried to say that Future is a monad; I think it is, I'm going to assume it is, you know, for me they are, I'm not going to say it again. Yeah, yeah. Filter, foreach, andThen, collect: a lot of convenience methods for you to combine them and use them in for comprehensions. So there's one thing I want to show you about futures. I'm just going to define this very simple function here that creates a future: it prints when it gets started, then goes to sleep, and prints when it ends. And just to have a return value, I return whatever I got as input. I couldn't make anything more trivial. And this is one example of how you can use a future in a for comprehension: you call your futures one by one inside the for comprehension. And I can see some faces already. Don't worry, guys, I know what I'm talking about.
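The pattern just described, sketched (`task` is my stand-in for the talk's sleep-and-return function):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// A trivial future: sleep, then return the input.
def task(name: String, millis: Long): Future[String] =
  Future { Thread.sleep(millis); name }

// Futures called one by one INSIDE the for comprehension:
// the second flatMap step cannot start until the first completes.
val sequential = for {
  a <- task("a", 50)
  b <- task("b", 50)  // not even created until `a` finishes
} yield a + b

val r = Await.result(sequential, 2.seconds)  // "ab"
```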
And there's nothing particularly wrong with this. Okay, this works, it's fine. Sometimes this is required. But the problem here — and when I do code reviews it's very common to see this pattern — is that future B will not be created before future A ends. Remember, I'm calling flatMap here. So when I run this code — and you can run it a thousand times, you're going to get this result a thousand times — you get: A started, A ended, B started, B ended, C started, C ended. So they are not executed in parallel, and you may be missing some improvements here. What you should do, if you want to compose futures in a for comprehension, is first create your futures — assign them to vals — and then combine them. When you do this, the futures are started in parallel. I ran this quite a few times; every time I ran it, I got different results, so I just took one: A started, C started, B started. You can see that even though I created the B future first, C started first in this particular run. If you run this code multiple times, you're going to get different results. This example is not really interesting, because it's predictable how long each future here is going to take — they just sleep — so of course it's always going to be C, B, and A ending in that order. But of course, with a real future that does some real work, the order in which they are going to end is totally unpredictable. Now, the first case is not all bad. If you need a value that is returned from future A to run future B, that is the way to go. So let's say you're going to call a web service, you know? You're going to call a REST API, and you need the result of that call to make the second call. You need the value from the first call to make the second. In this case, future B depends on future A, and that's the way to go. You cannot fire B in parallel before you have the result of A.
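The parallel variant, sketched (again, `task` is my stand-in for the talk's function):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

def task(name: String, millis: Long): Future[String] =
  Future { Thread.sleep(millis); name }

// Create the futures FIRST (they start running immediately),
// then combine them in the for comprehension.
val fa = task("a", 50)  // starts now
val fb = task("b", 50)  // starts now, in parallel with fa
val combined = for { a <- fa; b <- fb } yield a + b

val r = Await.result(combined, 2.seconds)  // "ab", computed in parallel
```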
So it's not totally useless; there are situations where this pattern is going to be used. But pay attention: if you can take advantage of firing the futures in parallel, do it. It's a little bit more verbose, but the benefits pay off. So, a future is a read-only placeholder for a result which does not exist yet. A promise is a writable, single-assignment container which completes a future. So it's a producer/consumer pair: the future is the consumer, the promise is the producer. Here's an example of how we use a promise — futures are far more common to find in code bases than promises. You create a promise, and this promise gives you a future. As soon as the promise is created, the future is not completed yet, but if I call success on that promise, I am fulfilling the promise and the future gets completed. And one last observation here. When you're dealing with futures, it's very common that you're going to end up with something like a Seq of Futures, and it's kind of cumbersome for you to handle a Seq[Future]. So you just call Future.sequence on that result and it gives you back a Future[Seq], which is a lot more useful for you to manipulate. This is very useful; I use it a lot. Okay, the Dynamic trait was also introduced in 2.10. It's just syntactic sugar, okay? It's a simple mechanical transformation performed by the compiler. It is not, not, not any sort of dynamic typing. It's not any sort of optional static typing. No. Scala is strongly, statically typed; there is nothing that even closely resembles dynamic typing in Scala. This is just syntactic sugar. Remember that. And this is useful for some advanced DSLs, and also when you need to interface with a dynamic language like Rhino, you know, the JavaScript runtime on the JVM. Or if you have some data formats like JSON. So if you can convert your JSON to a case class, please do it, you know.
But if the JSON is not regular enough to fit into a case class, then you can use the Dynamic trait to make it more convenient for you to manipulate that JSON. So what do you do? You extend Dynamic, the trait, and you have to implement at least one of the following methods: applyDynamic, applyDynamicNamed, selectDynamic, and updateDynamic. And here are some examples of the transformations that happen, okay? If I call foo.bar, the compiler will translate that to foo.selectDynamic("bar"). If I write foo.bar = 42, it's going to call updateDynamic, passing "bar" and 42 as parameters. So I'm not going to go through the whole list here; everything is pretty similar. So this is all that happens with the Dynamic trait. And — okay, it feels like there's a slide missing, I'm sorry. Actors: in 2.10, Akka actors are now the default actor library. The old Scala actors are deprecated. I really don't have time to talk about it here; go see the details in the documentation if you're interested. Modularization. So some of the more advanced language features have to be explicitly enabled now. How do you do it? You write import language.x, where x may be one of dynamics, existentials, higherKinds, implicitConversions, postfixOps, reflectiveCalls, and experimental.macros. You can also use the command line: you can pass -language:<feature> to enable a feature. You can put this in build.sbt. And to enable all features, you just write import language._ or scalac -language:_. This will enable all features. One thing to keep in mind: implicitConversions is only needed when you're defining new implicit conversions. You do not need to import it to use existing implicit conversions, and you do not need to import it if you want to use implicit classes, okay? Reflection, macros, and quasiquotes. In 2.10 these are marked experimental, and what's meant by experimental here is that they may change the API without deprecating it first.
So don't be worried by the word "experimental". It just means that from 2.11 to 2.12, the API may change without going through a deprecation first. So with macros, Scala can finally throw runtime errors at compile time — because this is what happens: you're going to have programs that modify themselves at compile time. And it's very useful for code generation and for very advanced DSLs. And besides compile-time metaprogramming, which is what macros are, you also have runtime reflection facilities now. And why do you need runtime reflection? Why can't you use what Java already gives you? It's because Scala has some specific elements that are not visible through the Java reflection API: you have functions, traits, generics, all sorts of things. What the new reflection API also gives you is the ability to parse and evaluate Scala expressions at runtime. Quasiquotes, which were introduced in 2.11, make writing macros a lot, lot, lot simpler. It's much easier for you to manipulate the syntax trees now. So this is an example of a quasiquote: it's just a string interpolator. I give it a string that is any Scala expression, and what it's going to do is parse this Scala expression and build a syntax tree for me from that expression. So it's interesting to notice that the string "foo + bar" has the same structure — it generates the same syntax tree, I mean, there's no surprise here — as "foo.+(bar)". So this is not just a string; it really is building a syntax tree here for us. And they can be decomposed via pattern matching, which is very powerful. And the example I'm going to give you is q"$foo + $bar", and I'm going to deconstruct q"1 + 2 * 3" with it. And I can see that foo now is the same as 1 — okay, not the literal 1, but the syntax tree for 1. And bar, on the other hand, is equal to 2 * 3.
So at first, you could think that since I have "+ $bar" and "+ 2", bar would be equal to 2. But it's not; it's not matched token by token. It's not the literal 2 that gets assigned to bar, but the whole syntax tree on the other side of the plus sign. So bar actually is 2 * 3, yes. So let me give a practical example of how you can use macros. Let's say that I have a logger; this is a pretty standard idiom: if the logger is enabled, I'm going to log a message. And this is a little verbose. I want something that is just log("we have a problem"). So how do I write a macro that will take an expression like that log call and change it into the if statement that I need? First I define this method log. It takes a String as input, and I just put a placeholder: I say that it's going to be a macro, and then I say which method actually implements the macro — in this case, I call it logImpl. So this is my method: it takes a context and an expression — it doesn't take a String, it takes an Expr[String] — and it's going to return an Expr[Unit], not Unit. And it's just string interpolation that I do here. So that's it. It's just like generating HTML; it's just like an HTML template, but this generates Scala code. And with this macro here, I get that result. So very easy to do and very powerful. Okay, let me rush here. The Play JSON API is an example of how macros are used. You call a macro, Json.format, and then you get your toJson and fromJson serialization and deserialization. It's type safe, there's no runtime reflection — everything happens at compile time — and there's no bytecode enhancement. So very powerful. Another example is Scala Pickling, which is also a serialization framework. I'm going to just skip it here because of time. Case classes with more than 22 parameters: I mean, what can I say? You should not need it. If you need it, I'm sorry, but okay, it's there.
If you use Slick and you have a legacy database, maybe it's just going to save your life. So, new methods in collections. I'm going to go really fast here. This is not an exhaustive list. On Option you have methods like filterNot, flatten, fold, forall, nonEmpty. contains is very useful; 2.11 introduced Option.contains, I love it. Seq.permutations, Seq.combinations: you have no idea how much these have saved my life at job interviews. Okay, just a sample, it's not everything. Improved sbt incremental compilation arrived with 2.11. The claim is that Scala 2.11 recompiles up to 10 times faster; however, you're not going to see any difference, because they're using the cycles to mine bitcoins. You have to use Scala 2.11 with sbt 0.13.2. You just add this line to your build.sbt. And what you get, so these numbers are from my own projects, I did actual measurements, it's not something subjective: compilation speed was improved from 25% to 80%. Actually, sorry, it should not say compilation; recompilation, okay, sorry. And what's better, the bigger the project, the bigger the improvement. The 80% was on my biggest project, okay? So this is really very useful. Of course, your mileage may vary. Predef.??? is just a placeholder for methods not yet implemented. This can be useful if you do test-driven development, or for code samples in presentations and blogs. If you have to implement a class and you don't have time to implement the other methods, and you don't want to put in a placeholder like zero or an empty string, put the triple question marks; at least it'll be very clear that you have not implemented that method yet. If you try to call it, you're going to get an exception. Colors in the REPL: this is new in 2.11.4, a very fresh feature. You just call scala with this -Dscala.color switch. And wow, I mean, it's not full syntax coloring, but it does highlight the prompt, the variable names and the types for you. Nice. 2.12 and beyond: so what's in Scala's future?
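A few of the collection methods mentioned above, as a runnable sketch:

```scala
object NewCollectionMethods {
  val opt: Option[Int] = Some(3)

  val doubled  = opt.fold(0)(_ * 2)     // default for None, function for Some
  val kept     = opt.filterNot(_ > 10)  // Some(3): the predicate is false
  val hasThree = opt.contains(3)        // Option.contains, new in 2.11

  // permutations and combinations on sequences
  val pairs  = List(1, 2, 3).combinations(2).toList
  val orders = List(1, 2).permutations.toList

  def main(args: Array[String]): Unit = {
    println(doubled) // 6
    println(pairs)   // List(List(1, 2), List(1, 3), List(2, 3))
    println(orders)  // List(List(1, 2), List(2, 1))
  }
}
```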
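And Predef.??? in action: it type-checks against any expected result type at compile time, and throws scala.NotImplementedError if the method is ever called. Report and its methods are made-up names for illustration:

```scala
class Report {
  def title: String = "quarterly numbers"
  // type-checks as a String even though nothing is implemented yet
  def summary: String = ???
}

object ReportDemo {
  def main(args: Array[String]): Unit = {
    val r = new Report
    println(r.title)
    try r.summary
    catch { case _: NotImplementedError => println("summary: not implemented") }
  }
}
```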
Java 8 support is the big-ticket feature for Scala 2.12, okay? We're going to have Java 8 style closures, lambdas using method handles. We're going to have support for Java streams and functional interfaces. We have an annotation for traits to make sure that they do compile as a Java interface. And with this we can have bi-directional interoperability: you can call Java lambdas from Scala, and Java can call your Scala functions, and it's going to be awesome. Which unfortunately means that Scala 2.12 is going to be Java 8 only. It's not going to be compatible with Java 7 or 6 anymore. So, Scala improvement proposals: improved lazy val initialization. For those of you who don't know, lazy vals are the sausages of Scala, okay? You don't want to know how they are made; they'd totally lose the awesomeness. Spores: they're closures for concurrent and distributed environments. Async and await: those are simplified syntax for futures, you know, just to make future composition easier. There will be some cleanup and simplification in the collections library. And scala.meta, which is to make writing macros even easier, you know, even simpler. So, very awesome work is being done there. We're going to have a compiler-based code style checker; we had an awesome presentation yesterday that talked about that, and Scala 2.12 is going to have something built in, using the compiler to enforce style. And there are some deprecations. The procedure syntax, you know, when you have a def that goes straight from the signature to a brace, is going to be deprecated. You're going to have to use the much more verbose form with an equals sign: you're going to have to put an equals sign between the signature and the brace. This was a source of confusion; I'm glad that they are removing it. XML literals: who here uses Lift? Don't be shy. Okay, one. You're screwed. I'm sorry. Okay, XML literals, they'll be gone. Now you're going to have to use string interpolation for XML.
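For context on what async/await would simplify: today futures compose with map and flatMap, usually via a for-comprehension. A standard-library-only sketch (totalPrice and its values are invented for the example):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureComposition {
  // two independent asynchronous computations, combined with a
  // for-comprehension (sugar for flatMap/map on Future)
  def totalPrice(): Int = {
    val price    = Future(40)
    val shipping = Future(2)
    val total    = for { p <- price; s <- shipping } yield p + s
    Await.result(total, 2.seconds) // block only at the edge, for the demo
  }

  def main(args: Array[String]): Unit =
    println(totalPrice()) // 42
}
```

async/await aims to let you write this kind of composition as straight-line code instead of a chain of combinators.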
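What the procedure-syntax deprecation means in concrete terms, as a small sketch (the names here are made up):

```scala
object ProcedureSyntaxDemo {
  private val items = scala.collection.mutable.ListBuffer.empty[String]

  // Deprecated procedure syntax: no '=', the result type is silently Unit
  //   def store(item: String) { items += item }

  // Required form: an explicit ': Unit =' between the signature and the brace
  def store(item: String): Unit = { items += item }

  def stored: List[String] = items.toList

  def main(args: Array[String]): Unit = {
    store("logs")
    println(stored)
  }
}
```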
Scala Swing and Scala continuations are also being deprecated. Scala.js, you know, we're going to have a talk today about Scala.js, so that's an awesome thing that is in Scala's future. And finally, we have some Scala compiler forks. Dotty from EPFL. There's a compiler fork from Typelevel. There's a compiler fork from Paul Phillips. I didn't check Hacker News this morning, so I don't know if a new compiler fork was announced this morning, but I'd like to remind you of Oppenheimer, a very wise man, who said that the Nobel Prize in physics should go to the physicist who does not fork the Scala compiler this year. Okay, a very, very wise guy, Oppenheimer. Okay, references: if you want to have access to this presentation, if there's something here that's useful for you, you do not need to worry about the address. Go to the conference webpage. My GitHub is my first name plus my last name, no dots, no anything, and it's going to be very easy to find it there. You can also go to the blog of my company (they asked me to say that we are hiring); that's where I copy and paste all the information from this presentation. And that's it, so thank you so much, and do we have time for questions? Thank you.