So thanks everyone for tuning in. I'm here with Dean. I've actually been trying for many years now to get Dean to Functional Conf and other conferences I ran, and it's a real honor to have him amongst us. My first recollection of Dean is reading the chapter he contributed to Uncle Bob's Clean Code book. From then onwards I've had the opportunity, or privilege, to run into him in a few places. And then of course the Scala book came out, which I think was in the 2008-2009 time frame. I was just telling him that Venkat Subramaniam and he both had books out at the same time, so it was kind of a hard moment, because I wasn't sure which one to read first. Now you're up to the third edition of the book, so that's very impressive, and congratulations on that, and thanks for all the contributions you've made to the functional programming community in general, and to Scala in particular; I think you've put a lot of sweat behind it. And of course these days Dean is working with IBM and doing a lot of interesting work, which I think he'll probably touch upon. Over to you, Dean. Thanks again for coming in.

Thanks, it's great to be here. It's actually a privilege to be here; as you said, it's been a long time, and we've talked about it, and it was great to catch up. I really enjoyed finding out what you're up to as well. So let's go ahead and get set up here. All right, I think I have everything, and I will try to answer questions as they come up; I have a Q&A window open. So my name is Dean Wampler. There are a couple of links here if you want to follow me on Twitter. I've been blogging about Scala 3 since about the time the book came out. This is the third edition, so you can just find me at deanwampler.medium.com, or send me email at the address shown. All right. Here are the photos.
This is a trip around Nevada that I did in the middle of last year, during one of the lulls in COVID: some backpacking and driving around. It's an interesting place if you ever get a chance to go there. So yeah, I actually work for IBM Research; I joined about two months ago, and the reason I joined was to lead an engineering team that is trying to promote what we're calling accelerated discovery, as in scientific discovery. IBM has a bunch of science assets in research, a lot of things like modeling chemistry. Quantum computing turns out to be a very good system for simulating chemical interactions these days, and that's actually become a viable use of it even though it's still a very early technology. So the platform is really doing the stuff that most of us do, which is build production-grade software on Kubernetes, the whole nine yards, but to make it much easier for research scientists, and other people within IBM, to access, use, and sequence these scientific toolkits: for drug discovery, carbon-capture discovery and other climate mitigation, and materials research, a whole bunch of things that are fun to play with. That's what got me really interested in doing this: working on a mission like this and building stuff. I'm actually hiring. I don't have a good link for our open reqs at the moment, but definitely follow up if you're interested in finding out more about the team. As mentioned in the introduction, I published the third edition of Programming Scala last year, almost a complete rewrite, really, so go out and buy a new copy. I basically updated it significantly for Scala 3. And as I'm going to discuss, I really like a lot of what they put into Scala 3. They fixed a lot of little warts, and made interesting decisions that I've actually come to like. I hope it's a very successful edition of the language, but it is a major update.
Although I have to say, they've done a pretty good job with backwards compatibility. You can use 2.13 libraries, and you can actually use Scala 3-compiled libraries in the latest release of Scala 2.13. So they worked really hard for backwards compatibility, but there are some things that have changed. If you decide to upgrade, you will have to take a little time at least to fix a few things that have changed in the language. So basically what I want to talk about is, first, how Scala has evolved; in particular, how Scala 3 provides greater clarity about certain constructs, and how they've rethought the way implicits work. This is the power mechanism in Scala that everyone talks about, either for good or for ill; like anything, it can be misused. One of the things they've tried to do is move away from having to memorize idiosyncratic uses of this underlying mechanism, and instead provide contextual abstractions that are more fit for purpose, more directly obvious about what they're doing, and less about learning the idiosyncratic idioms for doing things like adding methods dynamically to types and so forth. There are additions to the type system that I'll talk about briefly as well that are kind of interesting. I also want to talk about lessons learned from 15 years: what I've observed about people building software in Scala versus Java or whatever. I put "enterprise Scala" in quotes because I don't want us to write enterprise Java that happens to be written in Scala. I've seen that, and I'll talk about it a little bit. There are various things, some of which will be pretty obvious to this crowd, like the virtues of FP over object-oriented programming, but I'll also talk a little bit about where I think people get carried away.
For example, trying to treat everything as needing to be typed and so forth, where you strike the balance there, and just the idea that Scala helps promote reduction of code, which is a really great value. Then I want to talk a little bit about the future: a worrying trend about whether FP adoption is going to stall or not, and why I think that might be the case, for a couple of reasons, one of which will be obvious. The other one may be less obvious and I'll talk about it a bit. Okay, so on the subject of greater clarity. One of the things they did, and this is something Martin Odersky, the creator of Scala, really wanted to do, is introduce this new significant-indentation syntax, like Python or Haskell, where you get rid of braces. It's optional; you don't have to use it. You can use braces if you want. When I first saw this I thought it was really a bad idea, because a lot of people complain that Scala is too complex, with too many ways to do things. Not very fairly, actually: as Odersky says all the time, the grammar of Scala is actually simpler than the grammar of Java, because Java has a lot more special cases. So this seemed like a gratuitous difference; why would you do this? But I decided to go ahead and use the syntax in the book, and I actually really came to like it by the end. It adds just a little bit more clarity. Scala is already really well known for being concise; it lets me write quite a lot of functionality with relatively few characters, and this just takes that to the next level. Hopefully you can see from these two examples that getting rid of the braces actually does add a little more cleanliness to your code, so I've come to really like this syntax.
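A minimal sketch of the two styles being compared here (the object and method names are illustrative, not the actual slide code):

```scala
// Braces style, which still works everywhere:
object GreeterBraces {
  def greet(name: String): String = {
    val message = s"Hello, $name!"
    message
  }
}

// The optional significant-indentation style in Scala 3: a colon opens
// the template body and indentation delimits blocks.
object Greeter:
  def greet(name: String): String =
    val message = s"Hello, $name!"
    message

@main def demoIndent(): Unit =
  assert(Greeter.greet("FP") == "Hello, FP!")
  assert(GreeterBraces.greet("FP") == Greeter.greet("FP"))
```

Both forms compile to the same thing; the indentation form simply drops the brace noise.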
But one of the bigger, maybe more profound improvements is really trying to get rid of the need to memorize idiosyncratic uses of this powerful mechanism called implicits, and to replace, or at least complement, those uses with more directly applicable abstractions. You know about ArrowAssoc, that famous type that's been in the library for a long time. As you see in the bubble, if I write a -> b, it's not actually something built into the grammar. When I want to create a two-element tuple, it actually invokes this implicit mechanism that converts that a object into an ArrowAssoc object, basically wraps it. Then this -> method is called, and that returns a tuple with a on the left and whatever you pass as the argument, the b value, on the right. And this is exactly the way it looks; I think I copied it faithfully out of the source code for Scala 2.13. Now this is a really cool mechanism and we've used it a lot, but there are two problems with it. One is that if you're learning Scala, this makes no sense at all; you kind of have to just learn that this is the idiomatic way to do what are essentially extension methods, which are available directly in other languages. The other is the overhead of wrapping the object. Maybe the compiler can optimize that away, but in the naive case you would just be creating these little throwaway wrapper objects. So Scala 3, on the right-hand side, finally adds true extension methods. This is what the syntax looks like: extension, for some type A, then define the method. I actually use ~> instead of ->, because it turns out Scala 3, in part for backwards compatibility, uses the Scala 2.13 library, so in fact ArrowAssoc is still there in Scala 3.
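A side-by-side sketch of the two approaches (the wrapper name ArrowWrap and the --> operator are made up here to avoid clashing with the real ArrowAssoc and ->):

```scala
// Scala 2-style implicit wrapper class, simplified from the standard
// library's ArrowAssoc (this still compiles in Scala 3, for comparison):
implicit class ArrowWrap[A](private val self: A) {
  def -->[B](y: B): (A, B) = (self, y)
}

// Scala 3: a true extension method, no wrapper class in the source.
// Like the talk, I use ~> rather than ->, since ArrowAssoc still
// defines -> in the 2.13 library that Scala 3 reuses.
extension [A](a: A)
  def ~>[B](b: B): (A, B) = (a, b)

@main def demoArrow(): Unit =
  assert((1 ~> "one") == (1, "one"))
  assert(("a" --> 2) == ("a", 2))
  assert((1 -> "one") == (1, "one"))  // the classic ArrowAssoc path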
And that's actually what gets invoked if you use the -> mechanism, but eventually that'll get replaced and it'll be just a regular extension method written like I did on the right: much more concise, hopefully easier to understand, and much more directly applicable to the idea you're trying to express. This idea of using implicits generally falls into the category of contextual abstractions: I'm trying to do something in a particular context. And there have been other things added to make this work, so let's look at some other examples. This is a functional programming conference, so I have to have at least something that looks like a category theory concept. On the left I define a trait called Semigroup. A semigroup is just the abstraction over, say, integer addition or whatever. Then Monoid extends that, adding the notion of a unit value: for integer addition it would be zero, for integer multiplication it would be one, so that zero plus x equals x, that kind of stuff. But notice the syntax here. I declare this trait, this thing I'll use as a mix-in or a superclass, and then I declare an extension method in the trait, because I want every single object that is a semigroup to have this ability to basically do addition. As for @targetName on the previous slide: it's an annotation that tells the compiler what name to use for this method in JVM bytecode. You might recall that Scala lets you use operator symbols and so forth that are not legal in Java itself, or in the bytecode standard. So this is the name that would show up in bytecode. You cannot call the method from Scala using the name plus, but you could call it from Java if you wanted to; that's the purpose of the @targetName annotation.
So anyway, I've got essentially two extension methods here, one of which is this sort of Darth Vader operator, <+>, for plus, and all it does is turn around and call this combine method. That involves another new keyword: you now have to declare things that you want to be able to use as infix operators, if they are not using operator symbols, and combine is a plain alphanumeric name. This is another area where they felt people used Scala in ways that maybe weren't really appropriate for comprehensibility, and that could cause parsing ambiguities: the overuse of infix notation, where we just drop the periods and parentheses. So now, if you have an alphanumeric name for something and you want to use it in an infix context, you have to prefix it with this keyword, infix. There is some backwards compatibility here: we've all done this with the standard collection operators like map and flatMap and so forth, and it would break too much code if that weren't supported going forward. So they're grandfathering in the ability to do this, but as a rule, it's another example of trying to make our code a little more precise in the idioms we're using. So keep in mind that's why the infix keyword is here, and combine is the method that has to be defined in concrete subclasses of this trait, along with unit. The difference between the unit method in Monoid and the combine method in Semigroup is that combine is defined as an extension method, so it becomes an instance method, something that's applied to instances of a semigroup. But we really only need one unit value per type: integers, floats, BigInts, whatever. So that effectively becomes like a companion-object member, and it's not declared as an extension method. The bottom example, then, is a string monoid.
The way I declare an implicit instance of something now is with this new given keyword. So: given StringMonoid. Actually, I'm declaring an instance of this thing, not even creating a subclass. It'll be a Monoid of type String, and now I define unit and the extension method combine in the usual way that you would for strings. On the right-hand side you can see what it would look like if I actually used this. The operator obeys the usual rules for addition; that's not true for all monoids, but it is in this case. And notice how we reference the unit value: we call it like a companion-object member, StringMonoid.unit, in the bottom case. So, how would I get type parameters into this? Numeric is a great example. I don't want to declare a given instance for every single type of thing I can do addition on: floats, doubles, BigInts, and so forth. I can use Numeric for this. In Scala, this T : Numeric context bound basically means that some implicit instance of Numeric must exist for the type T I'm trying to use. There won't be one of those for, say, a User class, unless I actually define a Numeric for User types, but there will be instances in the library already for Int, BigInt, and so forth. So, a little similar to what we saw before, we now use the summon method. There was an implicitly method before, in Scala 2; summon is basically the same method with a new name, and actually both of them are still there. So summon says: all right, there was an instance that made me able to declare this Monoid object for some type T; now grab it, because I need to get the zero, as it's called in Numeric. That's what I'm using as my unit here.
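Pulling the pieces together, here is a compilable sketch of the Semigroup/Monoid example as I've reconstructed it; the <+> operator and instance names follow the book's style, but details may differ from the slides:

```scala
import scala.annotation.targetName

trait Semigroup[T]:
  extension (t: T)
    // Concrete instances implement combine; `infix` permits `a combine b`:
    infix def combine(other: T): T
    // The operator form just delegates; @targetName("plus") gives the
    // method a JVM-bytecode-friendly name, callable from Java:
    @targetName("plus") def <+>(other: T): T = t.combine(other)

trait Monoid[T] extends Semigroup[T]:
  def unit: T

// A given (implicit) instance for String:
given StringMonoid: Monoid[String] with
  def unit: String = ""
  extension (s: String) infix def combine(other: String): String = s + other

// One given covering every type that has a Numeric instance:
given NumericMonoid[T : Numeric]: Monoid[T] with
  def unit: T = summon[Numeric[T]].zero
  extension (t: T) infix def combine(other: T): T =
    summon[Numeric[T]].plus(t, other)

@main def demoMonoid(): Unit =
  assert(("ab" <+> "cd" <+> StringMonoid.unit) == "abcd")
  assert((2 <+> 3) == 5)
  assert((BigInt(40) <+> BigInt(2)) == BigInt(42))
```

Note that no given is declared for Int or BigInt individually; NumericMonoid covers them all via the context bound.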
And similarly, when I implement the combine method, I need to get that Numeric instance that's in scope, and then call plus to actually implement my addition operator. On the right, what happens? I don't have to declare anything else; it just magically works now for Ints, Doubles, and BigInts. And you can see how I reference the unit values, where I have to supply the type; actually, I don't necessarily have to supply the type, like Int or Double. In the first case as written it would actually infer that it's an Int, but in the bottom one, where the Double comes first, I have to put in the Double. Just a little bit of a type-inference wart, if you will. And then finally, another common use of implicits was to pass context, say a session object when you're doing web processing or something. So I made up a very simple example of a trait with some contextual information. Then I have a particular implicit instance, again with this given keyword, that will be one of those instances, and I declare the value that's returned by its string method to be "cloud!". The way this is normally used in Scala is with this using clause, so the argument would not have to be provided explicitly when I call this process method. Instead of writing implicit before an argument or arguments, we now use a new keyword called using. I probably should have put it in yellow on the slide, because it's a keyword. But notice I don't actually have to name this context object; I can just use summon once again to fetch it when I need it, in order to get this info string. It's obviously a very trivial, contrived example, but it shows the analog of how we've often used implicit argument lists to provide contextual information. And here's an example of how it might work: by default, when I call process, it's going to return "AWS-cloud!".
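A small sketch of the context-passing pattern; the trait, method names, and strings here are paraphrased, not the actual slide code:

```scala
trait Context:
  def info: String

// A default given instance in scope:
given DefaultContext: Context with
  def info: String = "AWS-cloud!"

// `using` replaces the old `implicit` parameter keyword; the parameter
// doesn't even need a name, since we can summon it in the body:
def process(message: String)(using Context): String =
  s"$message: ${summon[Context].info}"

@main def demoContext(): Unit =
  // The given in scope is picked up implicitly:
  assert(process("deploy") == "deploy: AWS-cloud!")

  // A given declared in a nested scope shadows the outer one:
  def inOtherContext: String =
    given Context with
      def info: String = "local-cluster!"
    process("deploy")
  assert(inOtherContext == "deploy: local-cluster!")

  // Or supply one explicitly with `using` at the call site:
  val explicit = new Context { def info = "explicit!" }
  assert(process("deploy")(using explicit) == "deploy: explicit!")
```

The three calls show the three ways a using parameter can be satisfied: ambient given, shadowing given, and explicit argument.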
If I create a new given instance, then in that context it will shadow the other one I declared on the left-hand side, and now when I call process, or if I pass an instance explicitly, I'm going to get this new string as output. So, just an example of how that might work. In general, though, the idea was: think about how people have actually used implicits in the past, and put mechanisms in the language that are more explicitly about those contextual abstractions, so they're easier to learn, simpler to write and debug, and also just more intuitive. Let's talk about changes to the type system. I won't go into all of them; there are quite a lot of changes here. One that's kind of interesting is something called opaque type aliases. This solves a problem that was addressed by a previous mechanism, which still exists, called value classes. Suppose I have a conceptual idea in my domain, like logarithms, but in fact they're implemented with just a single primitive type, in this case Double. We don't really want to be creating object wrappers all over the place. I could have millions of these logarithms in, say, a big data app, and I really don't want all of that little wrapper stuff in my heap; it's going to slow things down and use a lot of memory. At the bytecode level, I'd really like the compiler to just use doubles everywhere and substitute in the method calls as required. There has been this thing called value classes, which has some strengths and weaknesses. It turns out opaque type aliases are a complementary mechanism, so value classes are still there, but opaque types are sometimes useful in ways that value classes aren't. I won't go into all the details here.
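This is close to the canonical Logarithm example from the Scala 3 documentation; a sketch of how the opaque alias and its operations look (the enclosing object name is illustrative):

```scala
object Logarithms:
  opaque type Logarithm = Double

  object Logarithm:
    // Constructor-like methods; apply trusts the input...
    def apply(d: Double): Logarithm = math.log(d)
    // ...while safe rejects non-positive values, which would produce
    // minus infinity or NaN:
    def safe(d: Double): Option[Logarithm] =
      if d > 0.0 then Some(math.log(d)) else None

  extension (x: Logarithm)
    // Extract the underlying value back out:
    def toDouble: Double = math.exp(x)
    // Adding the wrapped values:
    def + (y: Logarithm): Logarithm = Logarithm(math.exp(x) + math.exp(y))
    // Multiplying the wrapped values is adding the raw doubles
    // (inside this object the alias is transparent, so this is Double.+):
    def * (y: Logarithm): Logarithm = x + y

@main def demoLog(): Unit =
  import Logarithms.*
  val l2 = Logarithm(2.0)
  val l3 = Logarithm(3.0)
  assert(math.abs((l2 * l3).toDouble - 6.0) < 1e-9)
  assert(Logarithm.safe(-1.0).isEmpty)
```

Outside the Logarithms object, a Logarithm is not a Double as far as the type checker is concerned, but at runtime it is just a primitive double, with no wrapper allocation.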
One of my blog posts discusses this, but just to show how you would declare it: you declare it like a regular type alias inside some object, but with this opaque keyword. That means that what's inside this Logarithm type, the wrapper if you will, is invisible outside of this enclosing object. But you do have to define methods for converting doubles to logarithms and working with them, using the operators that we know and love. So the first two methods that I define, apply and safe, are basically constructor-like methods that return a Logarithm value, the second one being one that checks that you're not passing a non-positive double, because that would blow up to minus infinity. So in that case it returns an Option wrapping the value, or None. Then we use extension methods to define all of the instance methods that we want to be usable on Logarithm, like plus and multiplication, and also for extracting the double back out. So it's a nice mechanism that gives you the runtime efficiency of working with primitives but lets you think about domain abstractions like logarithms. The last two things I'll talk about in the type system are true intersection and union types, so that the type system behaves more like, or follows more of, the rules of set theory. For example, these two types are actually interchangeable, or rather equivalent (there are certain ways in which they aren't, which I won't get into): something that's Resettable and Growable is considered equivalent to something that's Growable and Resettable. So notice the function at the bottom. All I care about is that I can pass in something whose contents I can reset, in some sense, like a mutable collection or whatever, and that I can also add stuff to, where the stuff inside it is strings.
But I don't care what this thing actually is, what the particular type is; I just want it to have both of these mix-in traits as part of its type, so that I can call reset and I can call add (and, of course, toString in any case). That's the idea: only values that are both Resettable and Growable can be passed here. And it adds true set-theoretic behavior, like commutativity of this composition, which didn't exist in the previous approach of using extends and with to mix in traits. This apparently gets rid of a bunch of warty situations where you could have runaway type expansion. The complement of intersection types is union types. Notice the signature of getUser: I'm going to call a database and return some instances of this User type at the top, with a name and password. But notice what it returns: either a String, for the error situation where maybe I didn't find anything, or a single User, or a sequence of Users if there happen to be reused IDs. Of course, I could have just returned a sequence and made it empty if there's nothing there, or a sequence of one, but I used this construction to illustrate how union types work. Inside this method I call my JDBC connection with some query that selects all of the users for this ID, and then I match on the results: either return a string saying I didn't find any, or return the first element, converting whatever that result-set record is into a User (I'm glossing over those details), or map the result set into a sequence of Users. And if I get an exception, I just return the exception's message. Now, because I'm returning one of three types, when I actually call it at the bottom here, I have to use pattern matching to determine what I got.
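A compilable sketch of both ideas; the trait names follow the standard Scala documentation example, and the database here is a stubbed Map standing in for the JDBC query on the slide:

```scala
trait Resettable:
  def reset(): Unit

trait Growable[T]:
  def add(t: T): Unit

// Intersection type: accepts any value that is BOTH Resettable and
// Growable[String]; Growable[String] & Resettable is the same type.
def f(x: Resettable & Growable[String]): Unit =
  x.reset()
  x.add("first")

class StringBuffer extends Resettable, Growable[String]:
  private var items = Vector.empty[String]
  def reset(): Unit = items = Vector.empty
  def add(t: String): Unit = items = items :+ t
  def contents: Vector[String] = items

// Union type: the result is an error String, one User, or several.
case class User(name: String, password: String)

def getUser(id: String): String | User | Seq[User] =
  val db = Map(
    "id1" -> Seq(User("dean", "pw")),
    "id2" -> Seq(User("a", "x"), User("b", "y")))
  db.get(id) match
    case None | Some(Seq()) => s"no users found for $id"
    case Some(Seq(user))    => user
    case Some(users)        => users

@main def demoTypes(): Unit =
  val buf = StringBuffer()
  f(buf)
  assert(buf.contents == Vector("first"))

  // Callers pattern match to see which member of the union they got:
  getUser("id1") match
    case u: User => assert(u.name == "dean")
    case other   => assert(false, s"unexpected: $other")
  getUser("unknown") match
    case s: String => assert(s.startsWith("no users"))
    case _         => assert(false)
```

The compiler checks that every branch of getUser produces one of the three union members, and the caller is forced to handle the alternatives.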
So it's either going to be a string again, or a single User, or a sequence of Users, and that's how you work with these return types: you use match clauses. Another cool thing: once again the types are considered commutative, so an instance of String | User | Seq[User] is considered type-equivalent to User | String | Seq[User], and so on. All right, let's move on to lessons learned from 15 years of Scala. First, and I'm kind of preaching to the choir here, the benefits of object-oriented programming were kind of superseded by the benefits of functional programming, most importantly the idea that we should really emphasize immutability, and not have the unconstrained mutability that object-oriented programming, at least in the naive sense, allowed us. But FP also really promoted more concise code, and I think a great example of this is to think about SQL. It's hard to think of anything more concise than a SQL query, in most cases. It really is a great example of taking mathematical rigor, in this case more set theory than, say, category theory, and boiling it down to its essence. It's a very concise way of expressing what I want, but most importantly, it's not telling the system how to do it; it's telling the system what I want, in a logical sense, and then letting the system figure out how best to provide that result. And as we all know, databases have really good query optimizers, and the data is often indexed and so forth, so this is usually a very fast operation compared to me iterating through a data file and finding the things that I want. Another powerful idea that we've leveraged a lot in Scala is the notion of parametric polymorphism.
I just want to give you a sense of what this is; in this blog post here I talk about it a little more, including some things that occurred to me that maybe aren't the most obvious benefits. Consider these two functions. You have no idea what they're doing, necessarily, because the names are completely opaque. Think about the second one first, foo2: if I pass in a sequence of integers and return an Int, there are quite a few implementations that would satisfy this signature: return the first element, the last element, the median (the one in the middle), maybe the mean rounded to the nearest integer, or the size. The signature does not really constrain what's possible. But the first one actually does, because I don't know what the type T is. I really can't do anything in this method except implement the size method, essentially. So foo1 really is the size method; that's the only thing that makes reasonable sense here. And it's an interesting kind of paradox that by making the type less specific, T instead of Int, we actually constrain the allowed possibilities for this method. That helps us be more precise about thinking about the relationship between abstractions that are public and implementations that are internal, so we're less likely to be surprised, less likely to have unexpected or buggy behavior however we implement it. It's just a very powerful capability in terms of constraining us in a way that produces better quality down the road. So I really like this idea of parametric polymorphism, and I encourage you to look at it in more detail. Next I want to talk about something I've seen a lot in enterprise code written in Scala, and that is the notion that because we have this powerful type system, we should just type the hell out of everything.
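The two opaque signatures discussed above, reconstructed as a sketch (the bodies shown are just one of the possibilities the signatures allow):

```scala
// foo2's signature barely constrains the implementation: head, last,
// median, a rounded mean, the size... many bodies type-check.
def foo2(seq: Seq[Int]): Int = seq.size  // one of many "valid" bodies

// foo1 knows nothing about T, so apart from degenerate answers
// (constants, hashCode tricks), the size is about the only sensible
// implementation. The LESS specific type is MORE constraining.
def foo1[T](seq: Seq[T]): Int = seq.size

@main def demoParametric(): Unit =
  assert(foo1(Seq("a", "b", "c")) == 3)
  assert(foo1(Vector(1.5, 2.5)) == 2)
```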
The specific problem I'm talking about is some code I had to deal with not long ago. It was basically YAML files for submitting pods to a Kubernetes cluster, where instead of just using a template and stuffing in the few fields that have to be filled in from my Scala code, the code I was looking at faithfully represented all of that structure in Scala, effectively duplicating all of this knowledge, creating the problem of maintaining two versions of the same thing, and not really adding any value over just having a template, with the Scala code understanding only the few things it needs to substitute into that template when it actually submits a job to the Kubernetes cluster. My point is that sometimes we fall into what is really an object-oriented pattern: "we should type everything, we should have our entire domain represented in source code." We really should not. We should only have the bare minimum that we need to express what we really need to say, and nothing more, and find other ways to express the rest of the information, like templates, so that I spend my time making sure the template is right, not making sure the template and the Scala code are both right. This dramatically reduces boilerplate in our Scala code and eliminates all this duplication. So if you feel this temptation, really think twice about when you want to use static typing for its benefits, and when you should not represent an idea in code but keep it more abstract: use a hash map, or some wrapper around the YAML or JSON, rather than immediately converting it into some type you've declared.
If we do this, and we leverage the concision of Scala, then we'll just have much less code, and we'll avoid translating our thinking about enterprise Java into enterprise Scala, which doesn't really give us many benefits. The last example I'll make about this point: one of my favorite examples, which I talked about a lot five years ago or so when I was doing a lot of Spark programming, is an algorithm called the inverted index. It's sort of what a search index is based on, in a very crude sense. (Let me check the chat here quickly.) The idea is that if you write this in Spark, you get to leverage all of these functional combinators that we really love, like map, flatMap, reduceByKey, and so forth, and so you can write this whole algorithm in one page of code. The thing is, if you really embrace these functional idioms, and really go for the most concise way of thinking about the types, what you need to express, and how to work with them, then you dramatically reduce your code, and when you do that, suddenly everything gets smaller. You don't need dependency injection anymore. Gosh, have I dealt with problems with that. You don't need crazy mocking libraries for everything because you haven't managed your dependencies very well. And you can get rid of a lot of the design patterns; used properly they're good things, but when you have a lot less code you have a lot less need for stuff like this. And you can live with smaller numbers of services. The last thing I want to talk about quickly (and I'll answer the question that came up about commutativity at the end) is a risk item that I see that's going to affect the growth of functional programming.
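Before moving on, here is a toy, in-memory sketch of the inverted-index idea mentioned above. The Spark version in the talk reads real files into distributed collections, but the combinator pipeline has the same shape; the corpus here is made up:

```scala
// document id -> text (made-up data; in Spark this would come from files)
val docs = Map(
  "doc1" -> "scala is fun",
  "doc2" -> "functional scala"
)

// word -> set of (docId, count), built with flatMap / groupBy / map,
// the same combinators the Spark code would use:
val index: Map[String, Set[(String, Int)]] =
  docs.toSeq
    .flatMap { case (id, text) =>
      text.toLowerCase.split("\\s+").map(word => (word, id)) }
    .groupBy { case (word, _) => word }
    .map { case (word, pairs) =>
      word -> pairs.groupBy { case (_, id) => id }
                   .map { case (id, ps) => (id, ps.size) }
                   .toSet
    }

@main def demoIndex(): Unit =
  assert(index("scala") == Set(("doc1", 1), ("doc2", 1)))
  assert(index("fun") == Set(("doc1", 1)))
```

The whole algorithm is one short pipeline; swapping the Map for a Spark Dataset mostly changes where the data lives, not the shape of the logic.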
If you look at things like the TIOBE index, what you see is that some of these popular languages, like Python and Go and Kotlin, keep growing in popularity, but they're not really as strongly functional as we might like, right? In fact, I really dislike working with Python, even though I work in the data science space a lot; I just feel disabled, in a sense, when I'm working with it, because it doesn't have the same kind of functional power and concision that Scala does, and yet its popularity has been growing enormously. I think Python is the light blue color, if you can see it; according to TIOBE it's the number one language right now. All right, so why is this happening? Well, we know one reason; it's kind of obvious, but just to state it: for a lot of people, functional programming is at least perceived as too advanced, even though they're actually using some of the constructs that have worked their way into languages like Java and Python. I think developers either perceive it as too hard or lack the motivation to learn it. In contrast, object-oriented programming has this kind of seductive quality, in that it seems intuitive, at least in the naive sense. I remember people saying this back in the day: just take your verbs and your nouns and drop them in your code, right? Model everything in your code. Well, it turned out we had a lot of problems with that, including unconstrained mutability, and the issue I mentioned earlier where we put stuff in code that really shouldn't be there, where we should maybe use templates or something less formal, like hash maps or whatever. But I think the more interesting situation is that software development itself is changing.
If you think about how we write software, it's still kind of a craftsman's business: a lot of the code we write is code that's been written before, but we just rewrite it with some minor changes for our needs. And so I think, and I really didn't like these terms, but they're the only ones I could come up with, there are kind of two kinds of programming: one I'm going to call full stack and the other I'm going to call service oriented. But I do want to emphasize: I'm not saying that either one of these is good or bad. They both apply in contexts where they make sense, and they can even exist in the same relatively large programming environment or community or whatever. So what I mean by full stack is the case where I'm going to pick maybe a big framework like Rails (I just picked Rails as an example because I used to do Rails programming), but mostly I'm going to write a lot of the logic myself. My day is going to be spent thinking about the domain logic that I need to implement in this system, and I'm not going to spend as much time thinking about deployment and production monitoring. Maybe I'm building an app for an IT environment where I just need to manage it on a few servers or something. That's not my main problem. The main problem is just getting this complex domain logic right. Maybe I'm writing accounting apps, where the rules of accounting are somewhat complex. And I actually think that functional programming is still the best way to do all of this. I mean, the kind of concision you can get, the way you can get a handle on the logic and express it concisely. This is really still the growth area for functional programming.
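To make the concision point concrete, here is a minimal, invented sketch of domain logic in Scala 3: a tiny slice of double-entry bookkeeping modeled as an algebraic data type. None of this comes from the talk; it is just an illustration of the style being argued for.

```scala
// A hypothetical accounting rule, modeled with a Scala 3 enum (ADT).
enum Entry:
  case Debit(account: String, amount: BigDecimal)
  case Credit(account: String, amount: BigDecimal)

// Double-entry rule: a transaction balances when total debits
// equal total credits.
def balanced(entries: List[Entry]): Boolean =
  val (debits, credits) = entries.partitionMap {
    case Entry.Debit(_, amt)  => Left(amt)
    case Entry.Credit(_, amt) => Right(amt)
  }
  debits.sum == credits.sum

val tx = List(
  Entry.Debit("cash", BigDecimal(100)),
  Entry.Credit("revenue", BigDecimal(100))
)
println(balanced(tx)) // the transaction balances
```

The types enumerate the cases and the rule is one expression over them; scaling this style up is where the "complex domain logic" payoff of FP shows.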
However, the other place where we're seeing a lot of interesting change is what I'm calling service oriented, and that is that we're finally standardizing a lot of the stuff that we used to roll by hand in sort of a craftsman way. So Kubernetes, whatever you like or dislike about it, the main value of it, in my opinion, is that it's actually just defining the standard for a lot of stuff that we often do. Like, how do I roll stuff out? How do I roll it back if I want to? How do I discover services? How do I load balance? How do I orchestrate storage over a cluster? How do I manage secrets? And on and on. That's the diagram grabbed from the Kubernetes website on the right here. So a lot of my time these days, on the project I'm working on, is thinking about how do we stand up highly resilient, highly scalable, highly durable, highly flexible clusters, and provide the services that, in this case, the scientists who are writing these fancy toolkits at IBM need. How do I make those toolkits as accessible as possible to their customers, and as reliably delivered, and all that kind of stuff? So when you're writing code like this all the time, why use anything other than, like, Go, or even Bash, or Python, or YAML? Because you tend to write small stuff. You're not really writing big, complex logic so much. A lot of the sophistication is being handled for you by these standardized libraries. So maybe functional programming isn't as important to you. And I think this is one of the reasons that we're seeing maybe a stalling of the growth of functional programming, while traditional languages are learning some of the lessons and mixing in some of the ideas, but not really embracing FP. And I'll cite one other important example of this, I think, and that's in the data science world. You know, one of the reasons Python is doing so well, again, is because it's so popular among data scientists.
And if you look at what data scientists write, they really don't typically write a lot of code. They do have to have a lot of domain knowledge about what's the right statistical algorithm to use, or how am I going to apply neural networks to what I'm doing. Their expertise is not in programming; it's just in data science. So once again, they're kind of doing the same thing: they're basically scripting in Python or R to drive the behavior of these very sophisticated toolkits like TensorFlow and PyTorch and scikit-learn. Maybe they're paying some penalties, like this tweet that I posted here, where sometimes you run into the fact that Python doesn't have strong typing, or whatever. But nevertheless, for them, it's the perfect fit, or near perfect fit. And so I think this movement towards finally figuring out good abstraction boundaries for a lot of commonly needed things, like running clusters and all the services there, or running data science applications using very sophisticated toolkits (which you hopefully still need functional programming to implement), is kind of driving a lot of us to not really need the power of functional programming as much. So I'm a little concerned that maybe this is going to decline over time, or at least it's going to stall out the growth of functional programming. So just to wrap up, then: once again, I think that this is a risk to us. It's actually a good thing that we're doing this standardization, but it does kind of threaten the growth of functional programming a little bit. Okay, thank you very much. I'll go through the questions now, and I'm happy to also chat in the session or the meeting room afterwards. So, the first question here: someone is asking about commutativity of a function object that he saw in the first few slides.
I remember that it had a logical and between the two parameters and what it did. Okay. Basically, it used to be in Scala that you would say, like, my service, or maybe my collection, extends Resettable with, what was the other one, I forget now, Growable. And it turned out that you could get into problems where maybe I happened to declare a collection where I mixed up or changed Resettable and Growable, to be Growable first, then Resettable. Logically, there's no reason they should be considered differently; I'm mixing in non-overlapping behaviors, mostly. So why should they be treated as logically distinct just because I declared them in one order? (I'm ignoring a big exception to this rule for now.) The idea is that in set theory, it doesn't matter what the order of things is. Something is either in the set or it isn't, and I can talk about overlaps of sets, you know, the union or the intersection of sets, without thinking at all about the ordering of the objects in the set. And so that's the idea here: to bring more set theory into the type system by replacing the "with" keyword with the ampersand, &, for intersection, and also supporting a kind of extension of the Either type, where instead of either of two types, I can have arbitrarily many alternative types returned by some method. So that's the idea there, and they are commutative, at least except for one big exception which I won't get into. We can talk about it in the little chat room afterwards. So someone's asking what the code means, in terms of, again, this kind of commutativity. And they bring up a good point about what happens if I mix things up.
I didn't mean commutativity in the sense that the order of methods I call is independent. What I meant is that, from the type theory point of view, those two types, declared as Resettable and Growable or declared as Growable and Resettable, are considered equivalent by the type system. Actually, you're getting at the exception I was mentioning: how you call methods, and how method resolution works, is not commutative. But from the type theory point of view, the types are commutative; that's what I meant there. The next question here is: do you foresee that using languages like Scala and Haskell, with type-oriented programming, might make them interface better with AI tools like Copilot? I actually think that would be true. I put in that little tweet grab in part because people do commonly run into type problems, and it would be better if they could express things with types. Types actually make optimization work better, too. In fact, most of those toolkits that are written for Python often have these sort of side tools they've written to highly optimize the code; they're not really going through normal Python so much as going through highly optimized query planners, or the equivalent thereof. I think that could probably be easier to do in Scala or Haskell. In Python, too, almost nothing is done in Python itself; at a certain level you drop down into C and C++ code for performance reasons. So, yeah, I think you could make a good argument. Actually, Haskell might even be better than Scala, because a lot of people don't want to run JVMs anymore. So if we actually used Haskell for everything, which can compile to very fast code, it might actually be far better in the long run. My hope is that Julia will be successful in the long term; it's another language that could kind of replace R and Python, and it solves a lot of these problems. Okay, a few more questions, and then we're going to be out of time, I guess.
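The intersection and union types from the last two answers can be shown in a short Scala 3 sketch. The trait names echo the Resettable/Growable example from the slides, but their bodies and the parse helper are invented for illustration.

```scala
// Scala 3 intersection types: A & B and B & A are the same type,
// unlike the linearization-sensitive `A with B` of old.
trait Resettable:
  def reset(): Unit
trait Growable:
  def grow(n: Int): Unit

// Both parameter types below accept exactly the same values,
// because & is commutative at the type level.
def use(c: Resettable & Growable): Unit = { c.reset(); c.grow(1) }
def useFlipped(c: Growable & Resettable): Unit = use(c) // compiles fine

// Union types extend the Either idea to arbitrarily many alternatives.
def parse(s: String): Int | String =
  s.toIntOption match
    case Some(n) => n
    case None    => s"'$s' is not a number"

println(parse("42"))   // succeeds with an Int
println(parse("oops")) // falls back to an error String
```

As the answer above notes, this commutativity is about the types only: runtime method resolution still depends on the declared linearization order, which is the exception being set aside here.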
Is Scala 3 ready for production? Yeah, it is. You can use it now; it's very robust, and they went to a lot of effort to make it backwards compatible with Scala 2. At the syntax level, there are flags you can use to decide how much Scala 3 versus Scala 2 syntax you want, and the libraries generally interoperate: if you compile with Scala 2.13, and even with Scala 3, you can mix and match across those boundaries. And then the last question I'll take here is: do you think that FP could be used for ML and AI? Yeah, it's really a challenge. Once again, data scientists are thinking about data problems and statistics; a lot of them come from that background, and they don't want to think about programming very much. A lot of them are good programmers, but they're really much more concerned about doing data science. And again, there's the appeal of a very simple-looking language like Python, even though you often run into roadblocks where you can't really do some of the things you'd like to do in a more sophisticated language. It's the classic beginner-versus-expert problem, you know. So what they typically do is they just rely on the toolkits to do most of the heavy lifting, and they just kind of script what they want in their notebooks, in Python; R is kind of similar. Thanks a lot, Dean. That was blazing through a lot of content, but I think it was a great summary of some of the things that are there in Scala 3, and it was interesting to see your thoughts on, basically, the two types of programming jobs, if you will: where you think FP may shine and continue to shine, and where service oriented might just be good enough with what you can already do. I think that was interesting.
I think we've run out of time, but again, I want to thank all the attendees for joining in, and thanks, Dean, for taking the time this morning to be with us.