My name is Marconi and I'm going to talk about what's new in Scala, what's new since Programming in Scala. You may know the Programming in Scala book by Martin Odersky, Lex Spoon and Bill Venners. It is the definitive guide to Scala; I believe it's the book that most of us used to learn Scala. The problem is that the book is quite old by now: the latest edition was released in December 2010, over four and a half years ago, and it only covers up to Scala 2.8. Every time I meet Bill Venners at a conference he jokes with me that they are working on the new edition, they just don't have enough time to finish. And a lot has happened in Scala since then: we had 2.9, then 2.10, which was a pretty big release, and 2.11. So let's have a look at what has changed.

First a quick overview of the Scala timeline. Scala was created in 2003, and the first public releases came out that year. In 2004 we had 1.0, then 1.4 in 2005. In 2006 Scala 2.0 was released, and this was a very important release because for the first time the Scala compiler was written in Scala itself: a self-hosting compiler, which as we know is an important milestone for any programming language. In 2007 there were a couple of new releases, and Lift, the web framework, came out; to the best of my knowledge it's one of the oldest widespread Scala projects in the wild. In 2008 we had 2.7, and 2009 was, for the first time, a year without a Scala release. Then in 2010 we had Scala 2.8, one of the biggest releases; it was almost called Scala 3, and the only reason it wasn't is that the name 2.8 was already out there and people were referring to it as 2.8. That was the only reason; it was a very important release.

Also in 2010, Play 1.1 got support for Scala through a plug-in. Play was written in Java back then, but the plug-in let you write Play applications in Scala. Akka was released in 2010 as well. In 2011 we had 2.9, and Typesafe, the company, was created. In 2012 Play 2 was released, and this time it was written in Scala, so the equation reversed: Java became the second-class citizen, and the Play framework was native Scala. In 2013 we had Scala 2.10, which was a very big release too, I believe as big as, if not bigger than, 2.8. Last year Scala 2.11 was released, and next year there are plans to release Scala 2.12, which is going to bring a lot of fundamental changes, not so much to the language itself as to the compiler infrastructure.

A quick overview of what was new in Scala 2.8, since it was such an important release. It had a huge number of bug fixes and an impressive amount of new features: the new collection library with CanBuildFrom, you know, the infamous CanBuildFrom; named and default parameters, very important features, so case classes could now have a copy method; package objects; nested annotations; type specialization for boxed primitives; and JavaConverters. We already had JavaConversions, which are automatic implicit conversions; JavaConverters are explicit, you actually have to call asScala or asJava, it doesn't happen automatically, so you have more control when converting collections. There was also a revamped REPL with better tab completion and searchable history, and Scaladoc 2 was introduced with a new look and feel, letting you write your Scaladoc in a wiki-like syntax, much simpler than the HTML of Javadoc. And for the first time in Scala's history they guaranteed binary compatibility between minor revisions: 2.8.1 is compatible with 2.8.0, 2.8.2, 2.8.3, but not with 2.9. It's not ideal, but it's something.
So what is new in Scala 2.9, 2.10 and 2.11? As I said, 2.10 was a pretty big release: if it's not broken, 2.10 will fix it anyway. First thing: DelayedInit and the App trait. In the Scala book and in many old tutorials we see something like `object Hello extends Application`, and then you just call `println("Hello World")`; there is no need to write the `public static void main(String[] args)` that you have in Java. The problem is that this is not thread-safe and it's not optimized by the JVM, because of the way it was implemented: the body of the object is its constructor, and that brings a few problems. So now we have the App trait. Nothing changes: instead of writing `extends Application` you just write `extends App`. One thing it adds is that the command-line arguments are now accessible via `args`. Does it really matter? I have here a simple application that sums all the numbers from 1 to 4 million. This version extends the old Application, and here I changed nothing except extending App instead. So let's see how fast they run. With Application it takes about 7 seconds to sum the 4 million numbers; with App it takes 7 milliseconds. Three orders of magnitude, a thousand times faster. I said, wow, there must be something wrong here; maybe the JVM is doing something funny with the for loops, optimizing something away. So I decided to rewrite it using a while loop, to see if that makes a difference; same code, I just replaced the for loop with a while loop. This is the result I had for the for loop, and when I use while you can see that it drops a lot: from 7 seconds to 45 milliseconds. That's good, and we know why: the for loop is not actually a loop, it's desugared into a foreach call, so you're actually creating a lambda there.
And you're calling that lambda, so it has a huge overhead. As we can see, for the App trait it didn't change the timings, but for Application it did. Which brings us to the next improvement in Scala 2.10: the Range.foreach optimization. It makes code like `(0 to 100).foreach` as fast as, and often faster than, a while loop. So the performance penalty you used to pay for `for (i <- 1 to n)` is gone; you no longer lose anything by using a for loop instead of a while loop. This is a really, really good improvement.

Parallel collections, introduced in 2.9, are an effort to facilitate parallel programming by shielding users from low-level parallelization details while providing a familiar and simple high-level abstraction. This is a really cool feature because it's efficient and it's transparent. The only thing you need to do, if you have a collection, is call the `.par` method, and then you keep using the collection as if it were sequential. And if you have a parallel collection and you want the sequential one back, you just call `.seq`. That's it; that's all there is to using parallel collections. Depending on the collection, `par` may be a constant-time operation; for some collections there will be a copy. And `seq` is always constant time, so converting a parallel collection back to sequential costs you nothing. The collections supported are Array, Iterable, Map, Range, Seq, Set and Vector. One thing to keep in mind, though, is that parallel collections are concurrent and have out-of-order semantics, which means the order in which the functions are applied is arbitrary. It makes sense: it's parallel, not sequential. And side effects are prone to race conditions. Of course, we are Scala developers; we do not use side effects, so we don't need to worry about that, right?

Side effects and non-associative operations can lead to non-determinism; non-commutative but associative operations, however, are deterministic. What do I mean here? Just a quick recap. An operation that is both associative and commutative is addition: it doesn't matter whether I do (1 + 2) + 3, or 1 + (2 + 3), or (2 + 1) + 3, the result is always the same. So something like addition works with parallel collections. Something that is associative but not commutative is string concatenation: ("a" + "b") + "c" gives the same result as "a" + ("b" + "c"), but not the same as "b" + "a" + "c". Even though concatenation is non-commutative, it still works with parallel collections, because it is associative. Now, something that is not associative is subtraction: (1 - 2) - 3 is not the same as 1 - (2 - 3), and of course not the same as (2 - 1) - 3. So this doesn't work; you cannot reduce with subtraction in parallel on your collections. It so happens that subtraction is also non-commutative, but that's not the point; what rules it out is that it is not associative.

So let's see a quick example. I have a vector and I fill it with 50 million random numbers. Then, just to give the JVM some work, I square each random number, check whether it meets a target, and count how many numbers do. And now I have my parallel version. Can you spot the difference? Let's go back and forth, back and forth. There is just that `.par` when I'm filling my vector; that's the only change I made. Here are the results: when I run the sequential version, it takes about 400 milliseconds.
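The vector-counting demo just described can be sketched as follows; this is a hedged illustration, with a smaller size and made-up names (`numbers`, `target`), not the talk's exact code:

```scala
import scala.util.Random

// Fill a vector with random numbers (the talk uses 50 million;
// a smaller size keeps this sketch quick), then count how many
// squared values fall below a target.
val numbers = Vector.fill(1000000)(Random.nextDouble())
val target  = 0.25

// Sequential version.
val seqCount = numbers.count(x => x * x < target)

// Parallel version: the only change is the .par call.
// (On Scala 2.13+ this needs the scala-parallel-collections module.)
val parCount = numbers.par.count(x => x * x < target)

// count is an associative, side-effect-free reduction,
// so both versions agree despite the arbitrary evaluation order.
println(seqCount == parCount)  // true
```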
And when I run the parallel version it takes about 100 milliseconds, which makes sense because this machine has a quad-core processor, so it's using the four cores and we get a four-times speedup just by adding `.par`. This is pretty awesome; I don't think it can get any easier than that.

Now, generalized try/catch/finally blocks: reusable exception handling. We have `try` with a body, then `catch` with a handler for a group of exceptions, and `finally` with some cleanup code. What's new is that the body and the cleanup can be any expressions, and the handler is a partial function. That means I can now define a handler for exceptions as a value: here is my default handler, a partial function from Throwable to Unit, and I can put anything I want in there. I now have this val, and I can use it with my `catch`; I can reuse it. I don't need to write that block over and over again in my code; I define it once and reuse that exception catcher. Here I have an example: I try to divide one by zero and use the default handler, without writing the catch block inline. Then I try to convert the string "a" to an Int, and again I can reuse the same block I defined above. It's a pretty awesome feature: you get to reuse code in a way you usually can't in Java. But unfortunately it's also pretty useless, because there should be no exceptions. You should not be using exceptions in your code. Exceptions are not functional: a function that throws an exception is not a total function, it does not return a value for all possible inputs. And as functional programmers we know that the core of functional programming is having total functions, defined for all inputs. So do not try/catch exceptions; use Try instead.

So what is Try? Try was introduced in 2.10. It came from Twitter. Try represents a computation that may either result in an exception or return successfully. Try does for exceptions what Option does for null. I bet everyone here knows how to use Option, uses Option everywhere, and no one here is using null in their Scala code anymore. For the same reason you should not be using null, you should also not be using exceptions; use Try instead. You can perform operations without explicit exception handling in every place an exception might occur. Just as Option has Some and None, Try has Success and Failure. And only non-fatal exceptions are caught; system errors are still thrown. This is actually essential for the proper working of Scala: some exceptions should not be recovered from, they should propagate, and Try takes care of that for you. Try also supports operations like map and flatMap, plus recover and recoverWith, which are just like map and flatMap but for the Failure case instead of Success; filter; getOrElse; toOption. So Trys can be used in for comprehensions; they behave like monads. When I gave this talk a few months ago in Portland, I said Try was a monad, and someone raised their hand and said, no, it's not, because it doesn't satisfy the monad laws. OK, so Try is not strictly a monad, but it does behave like one and can be used in for comprehensions; it has the same monadic interface. Here is an example of how you can use it. I have a method that, instead of returning an Int, returns a Try[Int]. It takes two strings, tries to convert them to integers, and returns the first number divided by the second. If I try to divide "a" by "0", I get a NumberFormatException: for input string "a". If I try to divide "1" by "b", I get something similar. And if I try to divide "1" by "0", I get an ArithmeticException: division by zero.
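A sketch of the divide method just described (the name `divide` and the exact shape are illustrative, not the talk's slide code):

```scala
import scala.util.{Try, Success, Failure}

// Instead of throwing, return Try[Int]: Success on the happy path,
// Failure wrapping the original exception otherwise. Try's map and
// flatMap catch non-fatal exceptions, so the division itself is safe too.
def divide(a: String, b: String): Try[Int] =
  for {
    x <- Try(a.toInt)
    y <- Try(b.toInt)
  } yield x / y

divide("6", "2")  // Success(3)
divide("a", "0")  // Failure(NumberFormatException: For input string: "a")
divide("1", "0")  // Failure(ArithmeticException: / by zero)
```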
So you can see that this is nice: you still get back the exact error message, you can still pinpoint what the problem was, but you have a much better, simplified interface. You don't need to surround everything with try/catch/finally blocks. A few examples of the higher-order functions you can use. You can convert a Try to an Option, which is pretty useful. You can use getOrElse, so if you don't care about the exception, when one occurs you just return a default value. And here is a method that I really like: get. Usually when we think about Option.get we say, oh no, Option.get is forbidden, because it can throw an exception. But Try.get is actually pretty useful. Say you have an old code base whose method signatures already return Ints and Strings and whatever, and at this very moment you cannot change those signatures to return a Try; it's too big a refactoring. So what do you do? You write your code using Try all the way through, and at the end, instead of returning that Try, you return `.get`. If a failure happened, an exception is thrown, as before; but your code is now using Try. And when you get to the point where you can change the method signature to return a Try, you just go to the last line and delete the `.get`, and your method is ready for the new interface. So it's really useful as a stopgap: to make a smooth transition from exceptions to Try, use Try.get.

Value classes, implicit classes and extension methods. All of these were introduced in 2.10. Implicit classes are just convenient syntax for defining extension methods. An implicit class has a primary constructor with exactly one parameter. As an example, I have an implicit class that takes an integer, and I can define methods on it.
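A minimal sketch of such an implicit class, based on the IntOps/stars example the talk walks through:

```scala
object Syntax {
  // Implicit classes must be defined inside an object, class, or trait,
  // and take exactly one constructor parameter.
  implicit class IntOps(val n: Int) {
    def stars: String = "*" * n
  }
}

import Syntax._
println(5.stars)  // "*****"
```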
This gets desugared into a plain class plus an implicit conversion method, just like the way we used to write implicit conversions before: I still have my class, and an implicit method that does the conversion. So it's just syntactic sugar to simplify the creation of implicit conversions. And because it defines a method with the same name as the class, you cannot have an implicit case class: we know a case class defines a companion object with the same name, so there would be a conflict; you cannot have both. A simple, trivial example: I create an implicit class, IntOps, where I'm going to put all the extension methods I want to introduce, and I define one called `stars` that just produces a number of stars. So if I call `5.stars`, I get a string with five stars. Very simple.

Value classes. These are a lot more interesting. Value classes are used to avoid object allocation; conditions apply, it's not always, and you can check the documentation to see where allocation cannot be avoided. They give you the type safety of custom data types without the runtime overhead, which is really awesome. So instead of using primitive types, and by primitive here I don't mean JVM primitives but base types, Integer and Boolean of course, but also Strings and Dates, you can define types like Celsius and Fahrenheit for temperatures instead of Double; Weight and Height instead of Double, again; FirstName and Email instead of String; Age instead of Int. You define a type, but you do not pay for that type: at runtime values are represented by their underlying type, but at compile time you cannot mix them up, so you get additional type safety. A value class can only have a primary constructor with exactly one val parameter, it can only contain methods, nothing else, and it may not define equals or hashCode.

And it cannot be extended by another class. As an example, you can use a case class: I have a case class Age that encapsulates an Int, and the way you say it's a value class is by adding `extends AnyVal`. Now, with `val age = Age(18)`, at compile time `age` is of type Age, but at runtime it is an Int. So if in my code I try to do something like `age + 1`, it gives me a type mismatch; it protects me from doing something stupid with my types.

Extension methods are what you get when you combine value classes and implicit classes: allocation-free extension methods. This is equivalent to using an object with static helper methods; it's just a simple mechanical transformation performed by the compiler, there's no magic here. Take the implicit class I defined before: there, calling `5.stars` is equivalent to creating a new instance of my IntOps class and then calling the method `stars` on that instance. That's what we had before. With extension methods, when I call `5.stars` it's as if I had an object with methods that take the integer as a parameter, and I'm calling a static method on that object, passing the parameter. It doesn't get much faster than that on the JVM: it's a static call, invokestatic, as fast as it gets. So it's a lot more efficient: you don't allocate an object just to call a method on it and then throw it away; nothing needs to be garbage collected.

String interpolation was introduced in Scala 2.10, and it's also a very awesome feature. I just prepend an `s` to my string and then I can use the dollar sign to interpolate values. And it supports expressions, not just variables: I can do computations in there as well. And it works with triple quotes; it's really a shame that you don't get syntax highlighting in there.
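The interpolation features just described can be sketched as follows (the values are made up for illustration):

```scala
val name    = "Scala"
val version = 2.11

// s-interpolator: $var for simple values, ${...} for arbitrary expressions.
val greeting = s"Hello, $name $version!"   // "Hello, Scala 2.11!"
val sum      = s"1 + 1 = ${1 + 1}"         // "1 + 1 = 2"

// It works inside triple-quoted strings too.
val block =
  s"""Language: $name
     |Version : $version""".stripMargin

println(greeting)
```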
But you can create as long a block of text as you want and interpolate as many values as you want. Now you come to me and say: fine, cool, but Ruby and Python have had this for ages, what's the big deal? Well, the nice thing is that Scala has a few string interpolation features that I have never seen in any other language, especially the dynamic ones. Oh, sorry, I missed one: if you need to escape the dollar sign, just use a double dollar sign. So here, the first two dollar signs produce a literal dollar sign, and then I'm interpolating `a`; that's why there are three dollar signs.

So these are the features Scala gives you that no dynamic language will. Formatted strings: instead of the `s` interpolator I use `f`, for formatting, and I can pass format specifiers and it formats my values for me. And here is the really cool thing: the `f` interpolator is type-safe. Not even C or Java have that. If I format math.Pi with `%d`, I get a type mismatch, because `%d` is for integers and I'm giving it a double, and this happens at compile time. You have type-safe format strings. This is awesome. And of course you can create your own interpolators; you're not limited to the ones the language provides. There are only about 250 frameworks and libraries for Scala that give you a `sql` interpolator for SQL queries. And how is a SQL interpolator different from, say, the regular string interpolator? The difference is that when you interpolate a variable, that variable will be escaped, so you prevent SQL injection. Or a JSON interpolator; if there are 250 for SQL, there must be about 500 for JSON. Again, what's the difference from the regular interpolator? It may recursively serialize your interpolated values. Say `foo` is an array: it will not call the array's toString and put some garbage in there; it will recursively convert the full array to its JSON representation and interpolate that. So it's a very powerful, very flexible mechanism you have at your disposal. I don't know of any other language that does the same to the level Scala does.

Futures and promises were introduced in 2.10 and then backported to 2.9.3. They also come from Twitter. Futures are a way to perform many operations in parallel, in an efficient, non-blocking, asynchronous way. A future is a placeholder for a result that does not yet exist, but which may become available at some point. It has callbacks, like onComplete, onSuccess and onFailure, that will be executed eventually. This is important to know: the order in which callbacks are executed is not deterministic; callbacks may not be called sequentially, but executed concurrently, at the same time; and they will not necessarily run right after the future completes, only eventually, you don't know when. And futures, again, just like Try, don't satisfy the monad laws, so strictly speaking they are not monads, but they do behave like monads: they follow the monadic interface, they can be used in for comprehensions, and they can be combined using a lot of utility methods. Let me give you an example. I have something very simple: it takes a string tag and a time interval, prints the tag just to say it got started, puts its thread to sleep, wakes up again, and returns the tag. And here is how you may see some code combining futures in a for comprehension: I fire one future, two futures, three futures, and then group the results together. But there is a problem here; I see some faces, some ugly faces. Yeah, the problem is that if you do it this way, the futures are not executed in parallel, because the for comprehension is just sugar.
I'm calling futureA.flatMap, then futureB.map, and so on, so futureB doesn't get started until futureA completes, and the same for futureC. So this is not the way to combine futures. The way you should combine them is to start the futures outside the for comprehension, so they start now and execute in parallel, and then combine the results in the for comprehension. When I do that, if you run it two, three, four, five times, you're going to get two, three, four, five different results, because it's non-deterministic. In one run I did, it started A, then C, then B, so you can see C started before B; this is real, I didn't make it up. And C finished before B, which finished before A; it doesn't happen in any predictable order. While the first example, you can run it a thousand times and get the same result a thousand times.

So, promises. A future is a read-only placeholder for a result which does not yet exist; a promise is a writable, single-assignment container which completes a future. One is the reciprocal of the other: you use a promise to fulfill a future. Here is an example of how you could use it: I create a promise, from that promise I get a future, and we can assert that the future is not completed. Then I do some computation in parallel and fulfill the promise by calling its success method, and now my future is completed. Usually you won't need to worry about promises unless you're writing some kind of framework or library code; usually you'll use just the futures, not the promises. One very useful method is Future.sequence, which converts a sequence of futures into a future of a sequence. It's very common, when you're doing real-world computations, to end up with a sequence of futures, and that's not what you want; you want a future of a sequence.
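The pattern just described, start the futures first, then combine, plus Future.sequence, can be sketched as follows (tags and timings are illustrative):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// A future that announces itself, sleeps, and returns its tag,
// mirroring the demo described above.
def task(tag: String, millis: Int): Future[String] = Future {
  println(s"$tag started")
  Thread.sleep(millis)
  tag
}

// Start the futures first, so they run in parallel...
val fa = task("A", 30)
val fb = task("B", 20)
val fc = task("C", 10)

// ...then combine the already-running futures in a for comprehension.
val combined: Future[String] =
  for { a <- fa; b <- fb; c <- fc } yield a + b + c

// Future.sequence turns a Seq[Future[String]] into a Future[Seq[String]].
val all: Future[Seq[String]] = Future.sequence(Seq(fa, fb, fc))

println(Await.result(combined, 1.second))  // "ABC"
```

The "started" messages may print in any order, but the combined result is always "ABC", because the combination, unlike the scheduling, is deterministic.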
Just call this method and it does the conversion for you.

The Dynamic trait, introduced in Scala 2.10, is just syntactic sugar, a simple mechanical transformation performed by the compiler. There's no magic here; it's not any sort of dynamic typing. I've seen people say that Scala now has dynamic types, that Scala has optional static typing; no, no, nothing like that. It's just syntactic sugar. The main use case is enabling flexible DSLs, especially when you want to interface with dynamic languages and data formats like JSON. What you do is extend the Dynamic trait and implement at least one of the following methods: applyDynamic, applyDynamicNamed, selectDynamic and updateDynamic. The compiler then performs the following transformations: if I have `foo.bar` in my code and my class foo does not have a member bar, the compiler rewrites it to `foo.selectDynamic("bar")`, and now my selectDynamic method can handle that however it wants. And similarly for everything else: you can have accessor syntax, `foo.bar`; assignment syntax, `foo.bar = something`; array syntax, `foo.bar(0)`, accessing the zeroth element, and I can make assignments to that as well; and things that look like method calls. So you have a very flexible interface here, and again, it's most useful for DSLs.

Akka actors. In 2.10, the default Scala actors library was deprecated; now you should use Akka. There's too much to cover here, so I refer you to the documentation if you want the details of the migration.

Modularization. Some of the more advanced language features now have to be explicitly enabled. The way you do that is to `import language.x`, where x is one of these options: dynamics, existentials, higherKinds, implicitConversions, postfixOps, reflectiveCalls and experimental.macros. You can also enable them with a flag when you call the compiler, and of course you can add that to your build.sbt file; and you can pass a wildcard to enable everything. One thing I want to call attention to here: implicitConversions is only needed when you're defining new implicit conversions. It's not needed to use an implicit conversion that's already defined, and it's not needed to define implicit classes; I showed you implicit classes earlier, and you do not need to import implicitConversions for that.

Reflection, macros and quasiquotes. New in 2.10, and still flagged as experimental. What experimental means here is that the interface may change from one release to another without being deprecated first, so your code, if you're using this, may break without warning; it doesn't mean it's not stable yet. With macros, Scala can finally throw runtime errors at compile time. They're useful for metaprogramming, programs that modify themselves at compile time, very useful for code generation and for advanced DSLs. Reflection can happen both at compile time, where it's called macros, and at runtime. The reason we need special runtime support, rather than just using the Java facilities, is that there are many Scala-specific elements that are simply not available through the Java reflection API. It also gives us reified Scala expressions. Quasiquotes are a significantly simplified notation to manipulate Scala syntax trees with ease. Quasiquotes are awesome, awesome, awesome. So what are they? A string interpolator; look at the power of string interpolators here again. I call `q` on the expression `foo + bar`, and what it does is parse `foo + bar` as Scala code and build the abstract syntax tree for that code.
So I can do something like assert that `foo + bar` has the same structure, an equal structure, as the AST for `foo.+(bar)`. You can see that no matter which syntax you use, you get the same syntax tree. Now, where it gets really, really cool is that quasiquotes can be used to decompose trees with pattern matching. If I write something like `q"$foo + $bar"` and declare variables by pattern matching to decompose an expression like `1 + 2 * 3`, what I get is that I have actually created two variables here, one called foo and one called bar. The variable foo holds the syntax tree for `1`, just a single literal; not very important. But look at bar: it doesn't match token by token, and this is what I want to show here. foo matched `1`, the plus matched the plus, but bar matched `2 * 3`, so bar holds a tree with the same structure as `2 * 3`. Again, just to show it's not a matter of how you write it, I can use any syntax, any representation I want, and get the same structure. So this is really powerful: you can decompose whole expressions and assign different parts of your syntax tree to different variables.

Let me give you an example of how you can use this in practice. Say you want to use a logger, and we know best practice says we should first test whether the log level is enabled before making the logging call. So we'd have something like that, but it's too verbose and repetitive. What I want is just a simple expression: log("we have a problem"). So how can we use macros to produce that? The first thing I do is define my method log; it takes a String as a parameter, returns Unit in this particular case, and I declare that it is a macro, pointing at the method that implements it. Here's that implementation: it takes a context and a few other things, but the important part is the quasiquotes. I just write my Scala code and put the variables that need to be interpolated in there; it's just like writing HTML templates in any web framework. You generate Scala code as easily as if you were writing an HTML template. That's really awesome, very powerful and very easy to use for generating Scala code.

Time is running out, so very quickly: some examples of where macros are used. The Play JSON API uses macros for JSON serialization and deserialization; it's type-safe, with no runtime reflection and no bytecode enhancement, everything happens at compile time. And Scala Pickling, a different serialization framework, is very similar.

Case classes with more than 22 parameters. You should not need a case class with more than 22 parameters, but if you do, your day has come: Scala 2.11 supports case classes with more parameters than it's reasonable to think about. I could not really imagine a use for an example to put here.

New methods in collections. This is by no means an exhaustive list, but we have things like span and iterate, and a lot of new methods in the Option class; if you stay for my next talk in five minutes, I'll talk about those. Seq.permutations and Seq.combinations: you have no idea how those two methods have saved me at job interviews. They're very useful when people ask you to write permutations and combinations of lists; now it's quite easy.

Okay, sbt incremental compilation. This came with Scala 2.11. So Scala now compiles 10 times faster, but do not expect your build times to change, because they're using the saved cycles to mine bitcoins. You need sbt 0.13.2 or later, and you add a single line to your build.sbt.
I did that for some projects, okay? So those numbers are real. These are recompilation times, not clean compilation; the slide says compilation, which is wrong, sorry, it's recompilation. So when you make a change to your code base and some files have to be recompiled, there were speed improvements from 25 to 80%. And the really good thing is that the bigger the project, the better the improvement: the 80% came from the biggest project, so where it matters most. It's not insignificant, you know? 80% is a lot. Of course, your mileage may vary. Predef's triple question mark, `???`: this is a placeholder for methods that have not yet been implemented. It's very useful if you're doing test-driven development, or for code samples in presentations and blogs. If you need to implement a class with an interface, a trait, that has a lot of abstract methods and you don't care about those methods at that time, you just put triple question marks and you're good to go; you can revisit them later. And of course, if you try to call one, it's going to throw an exception. It sounds silly, but it's actually more useful than you'd think; I've used it a couple of times. REPL colors: since Scala 2.11.4, the REPL supports colors. You just pass the `-Dscala.color` flag and you can see some colors in your REPL. It's not full syntax highlighting, but it's cool, you know, it helps. Scala 2.12 and beyond: so what's in Scala's future? I see some Scala Days t-shirts over here. If you went to Scala Days in San Francisco, Martin Odersky gave a really good keynote about the new features in Scala 2.12 and some of the things they're planning to do: they plan a lot of changes to the language, to simplify and unify concepts, and to change the type system.
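The `???` placeholder described above compiles because it has type `Nothing`, and throws only when actually called. A minimal sketch (the method name is made up for illustration):

```scala
// `???` lives in Predef; an unimplemented body still type-checks.
def riskScore(input: String): Double = ???  // TODO: implement later

// Calling it throws scala.NotImplementedError.
val threw =
  try { riskScore("x"); false }
  catch { case _: NotImplementedError => true }

assert(threw)
println("??? threw NotImplementedError as expected")
```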
One of the things is that in 2.12 they're going to have full support for Java 8: they're going to use method handles for lambdas, so Scala closures are going to compile to Java lambdas and you can interop back and forth, okay? Java 8 style closures and lambdas using method handles. It's also going to support Java streams and functional interfaces. The SAM interfaces that back lambdas in Java 8, you can now implement those in Scala, and it's bidirectional: you can call Scala from Java and Java from Scala. Which means that Scala 2.12 is going to be Java 8 only, and that's not a bad thing, because Java 7 is end-of-life already, so you should be using Java 8 anyway. SIP-20, improved lazy val initialization: lazy vals are the sausages of Scala. They look delicious until you learn how they're made. If you don't know lazy vals well, they are not as perfect as they seem. There are a lot of little details you should worry about, performance details, and lazy vals can deadlock even if one lazy val does not refer to another and vice versa; even without a circular dependency, they can still deadlock. So they are not as fun as they look, and there are some things in the pipeline to improve that. Spores: improvements to closures for concurrent and distributed environments. Async and await: those are macros that make using futures even simpler. You don't need to use for comprehensions like I did before; you can write your code using futures almost as if it were a sequential computation, so your code doesn't look any different. You don't get the callback hell that people complain about when composing futures. You write code that looks sequential, but it's using futures, and a macro does all the nice work for you. Some simplification and cleanup in the collection library. Scala.meta: even more advanced features for reflection and macros.
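The lazy val semantics behind SIP-20 are easy to see in miniature: the initializer runs exactly once, on first access, under a lock on the enclosing object's monitor (which is how two textually unrelated lazy vals can deadlock across threads). A small sketch of the first-access behavior:

```scala
// Lazy vals: the right-hand side runs only on first access, exactly once.
var initialized = false

lazy val config: String = {
  initialized = true        // side effect so we can observe initialization
  "loaded"
}

assert(!initialized)        // declared, but the initializer has not run
assert(config == "loaded")  // first access triggers (and synchronizes) init
assert(initialized)
assert(config == "loaded")  // second access reuses the cached value

println("lazy val initialized exactly on first access")
```

Under the pre-SIP-20 scheme, that synchronization takes the whole enclosing object's monitor, not a per-field lock, which is the source of the deadlocks mentioned above.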
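To make the async/await point concrete: here is future composition with a for comprehension from the standard library, with the equivalent scala-async form sketched in a comment (the `async`/`await` calls require the separate scala-async library, and `fetchA`/`fetchB` are made-up names for illustration):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def fetchA(): Future[Int] = Future(20)
def fetchB(a: Int): Future[Int] = Future(a + 22)

// Composing futures with a for comprehension (standard library only):
val result: Future[Int] = for {
  a <- fetchA()
  b <- fetchB(a)
} yield b

// With the scala-async macro library, the same logic reads sequentially:
//   val result = async { val a = await(fetchA()); await(fetchB(a)) }

// Block only for the demo; real code would keep composing the future.
assert(Await.result(result, 5.seconds) == 42)
println("composed futures produced 42")
```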
We're going to have a code style checker that uses the compiler, so very useful; you can enable some warnings there. And of course some deprecations: procedure syntax, so you cannot leave the equals sign off anymore; the equals sign is now required when you define a method. XML literals will be gone; you're going to have to use string interpolation for XML now. And some less-used packages will see the end of their days. Scala.js is Scala compiled to JavaScript, a very awesome project; I recommend you have a look. And of course we have some Scala compiler forks now. There's Dotty, a project Martin Odersky is working on to simplify Scala; maybe that will be Scala 3.0 someday. There's the Typelevel fork of the Scala compiler, and Paul Phillips' fork of the Scala compiler. Which led, you know, Oppenheimer to say that the Nobel Prize in physics this year should go to the physicist that does not fork the Scala compiler. Okay, just too many forks happening. Those are the references; you can find these slides on my GitHub. There are many articles and blogs about some of this material, plus all the release notes and documentation you need, so if you have the slides, those are clickable links. And that's it, thank you everyone. I'm sorry we ran late; I don't think we have time for questions. Thank you.