Let's get this panel started. I see Renzo is here to help me out. Yeah, I'm helping. I see Leo is connected. I'm here, thank you. Ben, are you here with us? I think the microphone was muted. Can you hear me now? Yeah, there we go. Welcome, Ben. Thank you.

Let's start off with an icebreaker for you, Ben. Do you have anything interesting, curious, or quirky you'd like to share about your surroundings or what you've been doing recently? Recently I've been following the paper trail behind miniKanren, which has been really interesting. So I've been thinking a lot about streams, and I feel a bit like a dog watching TV: it's fascinating, but I don't feel like I fully get it yet. It's also interesting to see how it all connects and congeals together: how, for example, the implementation of streams there connects with Leo's work, and how that connects with reactive streams, which I also briefly mentioned in my talk. Sometimes it feels like it's all connected. What about the little books, do you have them already? The little books? Like The Little Schemer? Oh, no, not yet. Okay, let's prepare for that. Thank you, that was an interesting book to share.

Same question for Leo: is there anything interesting you want to share? I'm going to share something about my environment. It's a product that has been on my desk for many years now. Here it is. It's a paperweight that I use as a rubber duck. So it's not made of rubber and it's not a duck, but it's watching me all the time, and it's been a very good companion so far. Nice, or creepy, depending.
Maybe it's sending you energy, I guess. The spirit of Minerva is giving you some of her wisdom. Nice.

Okay, let's start off with a question for Ben. I think you've actually answered a little bit of this already in the Discord channel. Jack was asking: he's used clj-async-profiler and flame graphs but found them quite hard to interpret, and I imagine there's a lot of experience, or specific techniques, you can use to actually understand the results of all this profiling.

Yeah, one thing to understand about performance tools in general is that, like any tool, you need to get some experience with them, and you develop intuition as you go along. You also need to get things wrong sometimes. Unless you do the wrong stuff and get bad results, or results that don't make sense, you won't know when you're doing the right stuff. So it's usually enough to go over the documentation, and there's a lot of documentation both for clj-async-profiler and on flame graphs. Brendan Gregg has done incredible work on them, and he keeps writing and providing material. The theory of flame graphs is not exclusive to that work in particular; you can find many instances of tools and documentation for flame graphs today. So yeah, it's just experience: use it until you get comfortable with it.
And if not, you can always open issues on the respective repositories and ask questions. I can say that, for example, on clj-async-profiler, Alexander Yakushev is very responsive to issues, so he's happy to answer any questions.

Okay, and just a quick follow-on from that: when you are trying to interpret results, is it very REPL-driven, or is it something you need to go away and think about a little more, about how best to tackle things?

It depends both on the nature of the problem and on the degree of interpretation you've developed for that particular problem. This always reminds me of when we started learning quantum mechanics back in college: the lecturer showed us the results of some example and said, well, this should make you uncomfortable, you should have no idea why this happens, and you have no intuition for it yet. But as you get used to it, as you start swimming in the material and in the mindset, you do get used to it. So at the beginning you might need some hammock time, or walk time, or whatever you do to process the ideas. But later you'll learn to recognize some behaviors or some pathologies, and you'll be able to make a snap identification when you look at the flame graph: oh, this stack looks really weird, I want to look at it. Like, why am I using reduce1, for example, or why am I dispatching via the wrong protocol, which I didn't expect?

Okay, yeah, that sounds great, that's very useful. Thank you very much. So we can get to a first question for Leo as well. It's from Jakub; by the way, a shout-out to Jakub for asking so many interesting questions. He's very active.
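As an illustration of the workflow just discussed, here is a minimal sketch of generating a flame graph from the REPL. This is a sketch, not from the talk: it assumes clj-async-profiler is on the classpath and the JVM allows attaching the profiling agent; `profile` and `serve-ui` are its entry points in recent versions.

```clojure
(require '[clj-async-profiler.core :as prof])

;; Profile a representative workload. When the body finishes,
;; an interactive flame graph is written under /tmp/clj-async-profiler/.
(prof/profile
  (dotimes [_ 10000]
    (reduce + (map inc (range 1000)))))

;; Browse the generated flame graphs in the browser:
(prof/serve-ui 8080)
```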
Thank you. So, a question from Jakub: do I understand right that the functional effect system is quite similar to promises, where a thunk is a promise, `par` is `Promise.all`, and `bind` is `.then`, but while promises start execution when created, thunks only start lazily when asked for their value? Am I completely off?

So, Jakub, you're not completely off. Promises and effects are similar but different. As you said, thunks are lazy: when you declare one, nothing runs; it only runs when you call for it. The other difference is that futures are memoized, while thunks run every time you call them. When you ask a future for its result, it doesn't re-run the computation. Other than that, they both describe an asynchronous result. Thank you; hopefully that was explanatory for Jakub.

Okay, so I guess I have a question for Ben about somebody who's just starting. There are a lot of really interesting things you've covered, and you said building up experience is quite important. But if you're starting from day one, are things like `time` and Criterium a useful place to start? Are they a nice gateway drug, almost, that leads on to the more interesting and more insightful tools you've shown?

Well, let's say you're getting started, and let's say this is a new application. Don't profile, don't benchmark anything. Develop your application, and once you have a living, breathing application and you have solved all the rest of the problems, such as blocking and the implementation details, then you should profile it. And if you, let's say, meet your KPIs, go drink a milkshake.
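Leo's thunk-versus-promise distinction can be sketched with missionary. This is a sketch under the assumption that missionary is on the classpath; `m/sp`, `m/?`, and `m/join` are its documented task operators, playing roughly the roles of `.then` chaining and `Promise.all`.

```clojure
(require '[missionary.core :as m])

;; A future is eager and memoized: the body runs once, immediately,
;; and every deref returns the cached result.
(def fut (future (println "runs once, eagerly") 42))

;; A missionary task is lazy: m/sp only describes the effect.
;; Nothing runs until the task is executed, and every run
;; re-executes the body.
(def task (m/sp (println "runs on demand, every time") 42))

;; m/join plays the role of Promise.all, and sequencing with
;; m/? inside m/sp plays the role of .then:
(def combined
  (m/sp (let [[a b] (m/? (m/join vector task task))]
          (+ a b))))
```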
Really, don't waste time shaving nanoseconds off an application that meets its requirements. It is good to at least have some profiling results of the application for when you might need them later, but don't go running to optimize it before you need to. And once you do need to, start with, for example, generating a flame graph, or connecting to it with VisualVM, to get a live picture. It's like studying animals: you don't want to study them on an operating table, you want to study them in the wild. Then, when you get a good picture of the moving bits and pieces, the hot paths, and the pathologies inside the application, you can focus on them. That might be the time to take a specific bit of code and a specific bit of sample input, use Criterium to get the initial baseline benchmarks, try an optimized implementation, and compare them. But don't just go around with the hammer of Criterium and start beating on your functions, because it can get addictive. So I'd recommend against starting there.

That's very wise advice, thank you very much.

Going back to Leo, I wanted to ask one question. I think I briefly saw the amb operator on your slides, and you also mentioned the prior art. I just wanted to note that Gerald Sussman, in Software Design for Flexibility, also talks about the amb operator; he mentions McCarthy, and it's quite an interesting operator. There's a long explanation, and it's quite interesting what it can do, even though it's very simple. I just wanted to understand if we are talking about the same thing.

Thank you. Well, I cannot give a definitive answer because I've not read that book yet, but it's from the same author as SICP.
So I guess it's the same idea. The inspiration for the amb operator as I described it is from Sussman's book, so I guess it's the same. Thank you, it's probably the same.

Alright, I'm going to take a question from Renzo. Actually, there are a few questions Ben has been answering in the Discord, which is great; we might revisit them later. But Renzo was asking about mechanical sympathy, and whether you have any thoughts about JDK 16 introducing vector operations, and how close you could get to taking advantage of SIMD, single instruction multiple data.

Yes, or as we affectionately called it back when I worked at Intel, SIMD. Most mechanical sympathy problems in Clojure are usually solved by just using backing arrays, which the core collections already do wonderfully. And if you look at the Vector API JEPs, you'll see that they're mostly focused on numerical operations, while most operations we do in Clojure are collection-oriented and data-oriented, and less numerically oriented. So while you can get some improvement there, it's nice, you can do a lot with it, but there's a lot of ground to cover before you can connect that to collections.

Yeah, thank you, Ben. I see it's quite low-level, and if there are operations that can actually be parallelized in that way, they are very deep in the compiler; maybe it could happen inside the implementation of the core collections, but that would definitely require some work. I was thinking maybe to take advantage of that somehow as a library, but I still don't know how to produce anything useful with it, so I'm just very interested in the topic.

If you want to get a peek, or a window, into how that can interact with the JVM, there is a discussion in the simdjson library about exactly that: using the SIMD API on the JVM for parsing JSON at relatively high throughput.
There's a thread in that library where you can read the discussion. I think it's kind of dead in the water for the moment, but it will probably pick up steam again in the future.

Indeed, indeed. Thank you for remembering that. I remember that the fastest JSON parsing in the world uses exactly those techniques, written in C++ of course, with intrinsics and assembly. And yeah, it would be nice, as an exercise, to bring that into pure Java. It won't be as fast, but it would be decently fast.

Alright, thank you, Ben. So this question was for you already; I'm going to pass it back to John. Sorry, I was confused about being the one asking. Did you have a question for Leo, Renzo? That's why I was confused: I should have asked the question to Leo, and I was asking Ben. Yes, sorry for the confusion.

So, we have another question from Jakub: how does missionary compare to core.async and its channels? Is it an alternative, or do they have different use cases?

Well, how does it compare to core.async? I'm going to start with the similarities. They both implement this inversion-of-control syntax that looks like async/await, and they both use the same technique to achieve it. Regarding the use cases, they can both be used to implement streaming: channels support backpressure, and missionary supports backpressure in another way, so there is overlap in the problem space. Now, they are fundamentally different, because it's not the same paradigm. Core.async is much more imperative. It's also much less constrained, so there are many things you have to do yourself. You have no supervision in core.async.
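A minimal side-by-side of the two styles being compared; a sketch, assuming both libraries are on the classpath. The core.async go block starts as soon as it is created, and an uncaught exception is easy to lose, while the missionary task is inert until run and reports failure to a callback.

```clojure
(require '[clojure.core.async :as a]
         '[missionary.core :as m])

;; core.async: the go block runs immediately; to surface an
;; exception you must catch it yourself and put it on a channel.
(def out (a/chan 1))
(a/go
  (try
    (a/>! out (inc (a/<! (a/go 41))))
    (catch Throwable t
      (a/>! out t))))

;; missionary: m/sp merely describes the process; running the
;; task routes success and failure to the two callbacks.
(def task (m/sp (inc (m/? (m/sp 41)))))
(task #(println "success:" %)   ; called with the result
      #(println "failure:" %))  ; called on exception
```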
That means that, in general, you have to program defensively, because if your go block throws an exception, you need to put it into a channel somewhere. In missionary you get all of that for free, and the price you pay is that the style is more constrained. You cannot spawn a process whenever you want; you have to compose effects together. So yeah, that's pretty much it: it's not the same programming style.

Okay, thank you for your explanation. Back to you, John.

Ben, I think you've already answered this question in the Discord; I just wondered if you had anything else to say. Somebody was asking what miniKanren is and how it connects to stream programming. Is that something you want to comment on?

Yes. MiniKanren is the logic programming library written originally in Scheme by William Byrd, and it is also the work behind core.logic, if you've ever used it. It's also the reason I am programming in Lisp today, because I was inspired by Byrd's talk on the Papers We Love channel; that's how I ended up here. The interesting thing regarding streams with miniKanren is that you need some method to implement a search of the option space when you're doing logic programming, and the way it is implemented is, think of it as a sort of fair list comprehension. If you have, for example, two infinite lists of options, you need a way to fairly interleave them to search the option space; otherwise you might diverge by diving down one stream. So you need both fair interleaving and a way to backtrack the search in case it fails. That is based on Oleg Kiselyov's work on the logic monad, and he also did a lot of work on streams and operator fusion. If you read his paper on operator fusion, you'll find it looks just like transducers, which is amazing.

Excellent. Yeah, that's a lot of fascinating work there.
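The fair interleaving Ben describes can be sketched in plain Clojure: take one element from each stream in turn, in the style of miniKanren's `mplus`, so that neither of two infinite answer streams starves the other. The function name `fair-interleave` is made up for illustration.

```clojure
;; Fairly interleave two possibly-infinite lazy streams by
;; swapping which stream is consulted at each step.
(defn fair-interleave [s1 s2]
  (lazy-seq
    (if-let [[x & xs] (seq s1)]
      (cons x (fair-interleave s2 xs))
      s2)))

;; Both inputs are infinite, yet elements from each appear fairly:
(take 6 (fair-interleave (range) (map - (rest (range)))))
;; => (0 -1 1 -2 2 -3)
```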
Did you have any questions, Ben, that you wanted to ask Leo while you're here?

Well, I have been pestering Leo with questions for a while now, but I do wonder if there are already any user reports from people using hyperfiddle... no, not hyperfiddle, sorry, that will hopefully come in the future too... but people using missionary in production, because I actually want to start using it.

Well, that's the problem with open source: you never know. So I'm not aware of any explicit feedback that it's used in production, but it's definitely designed for production, and it's mature enough now, I think. There are some bugs that need to be fixed first, because there are still some edge cases that are not well defined, and if you are not aware of them, you can get unexpected behavior sometimes. That's the main thing I want to fix before saying it's production-ready. That's going to happen very soon; I'm pretty confident about that. Basically, everything I want to fix before switching the production flag is listed on GitHub, so you can look at that, and if you have a specific question, you'll be able to ask.

One final question regarding missionary, which I just thought of during your talk: do you have any intent or plans to implement either the seq or the reduce interfaces from Clojure core on flows? Because currently you have your own implementation of reduce, and it is a bit ad hoc, but if you implemented the reduce interface, you could reduce over flows directly.

What would be the benefit compared to what exists now? That you could use clojure.core/reduce, or do you mean with a non-blocking reduction function? For example, yeah. Because the purpose of a flow is to be asynchronous, so if you want to use clojure.core/reduce over a flow, you need to leverage threads to block until the values become available.
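The blocking bridge just described might look like the following sketch, assuming missionary is on the classpath, with `m/seed` (flow from a collection) and `m/reduce` (task reducing a flow) per its documented API: run the resulting task, and park a plain thread on a promise until the value is available.

```clojure
(require '[missionary.core :as m])

;; m/reduce turns a flow into a task describing the reduction.
(def result-task (m/reduce conj [] (m/seed [1 2 3])))

;; A task is a function of two callbacks (success, failure).
;; To bridge it to synchronous code, deliver the outcome to a
;; promise and block on it.
(def outcome (promise))
(result-task #(deliver outcome [:ok %])
             #(deliver outcome [:error %]))
@outcome ;; expected [:ok [1 2 3]]
```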
I have an implementation for that that relies on `java.lang.Iterable`, terrible as that is. I've not decided if I want to add it to core; it would be Java-only, that's for sure. But I can show it to you later if that's useful to you; let me know. Okay, thank you.

And thank you both for that, that was very interesting. Renzo, is there another question for Leo? What is your long-term goal with missionary, and is there a killer app?

What is my long-term goal with missionary? I consider missionary to be close to done, and I don't expect it to grow much. I think it's a good foundation, and the idea is to build other things on top of it, not to grow the library itself. It's built on two fundamental protocols that don't depend on missionary, so if you want to make something useful that is compatible with missionary, it can be a separate library. The idea is to make an ecosystem grow around these two fundamental protocols. So that's the first part of the question.

Now, is there a killer app? The original motivation was user interfaces, so I would say on the front end, the goal is to have a better foundation to make better user interfaces, with proper incremental maintenance and fine-grained reactivity. It's still very much low-level for making UIs, but it's a good foundation. On the back end side, I would say it's a useful tool to implement distributed algorithms, for instance, because in this problem space you spend your time dealing with errors, retrying stuff, handling values asynchronously, dealing with time, stuff like that. So you need the right basis in supervision and that kind of thing. I would say those could be the main use cases.

Thank you. So with that, I think we arrive at the end of the questions we collected so far. So thanks, everyone, for asking nice questions to our speakers, as usual. Thanks a lot.
Thanks to Leo and Ben for bringing their wisdom and putting the effort into their talks today; they were very interesting topics, as has the entire UK morning been. Thanks again, thank you very much for coming, and thank you for being here at the Q&A panel.