Now, an argument based solely on analogy is not a very good argument, because analogies can be flawed. So in the second part of this overall argument, the overall point I'm trying to make, I'm going to argue by example. This time we're actually getting into programming languages, and I'm going to show how a few things are done substantially better or worse in different languages. Here we have an extension library written in F#. For those not familiar, F# is a primarily functional programming language meant to be used on top of the .NET platform. What this extension library does is provide operators that work on arrays of various types. In fact, the way it's written, it provides these extensions for any compatible type, even types that haven't been written yet; the library doesn't have to reference those types at all. As long as the appropriate function can be constructed with the given signature, that function will be provided. This is an extreme case of generics, and it's possible because of the way F#'s type system works, both in how it infers types and in how it describes them. There really isn't a concrete type here, so as long as the compiler can construct the function, it does. Another important thing about the way F# handles this kind of type definition is that if it can't construct the declared function, it just discards it. So consider the example of an array of integers, Int32 if you want to be specific. Obviously, things like negation, addition, subtraction, and exponentiation make some sense. I don't know why you'd ever want to use that last one, but all of these and more can be provided, because they work on the base type Int32. Something like bitwise inversion, or is that this one? I forget, but one of these is bitwise inversion and the other is Boolean inversion. Both of them are inversions, we'll leave it at that.
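The F# source itself isn't reproduced in the transcript, but the "construct it if you can, silently discard it if you can't" behavior can be sketched in Python as a loose analogy (the helper name and the runtime probe are inventions for illustration; F# does this statically at compile time through its operator constraints):

```python
import operator

def make_elementwise(op, sample):
    """Build an element-wise version of `op` for arrays whose elements
    look like `sample`, or return None if the element type doesn't
    support the operation. This mimics how the F# library provides
    only the extensions it can actually construct for a given type."""
    try:
        op(sample, sample)  # probe: does the element type support op?
    except TypeError:
        return None         # "discard" the extension; no error raised
    return lambda xs, ys: [op(x, y) for x, y in zip(xs, ys)]

add_floats = make_elementwise(operator.add, 0.0)   # provided
and_floats = make_elementwise(operator.and_, 0.0)  # discarded: floats have no `&`
and_bools  = make_elementwise(operator.and_, True) # provided for Booleans
```

Again, this is only a runtime caricature of a compile-time mechanism: in F# the unsupported combinations never exist in the first place, rather than being represented by a `None`.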
Since that operator probably does not exist for integers (I'm pretty sure it does not), it will simply be discarded, and you won't get any errors about it. Conversely, if you have an array of Booleans, these, along with things like binary AND and OR, will be provided, while things like arithmetic negation, addition, and so on, which make no sense for Booleans, will just be discarded. The entire extension library is provided in this one file with very simple one-liner definitions. I was originally writing this library in C#, just as a sort of learning experience, so let's look at what the C# version looked like. Bringing over the GitHub repository, which of course has the older files, you should already be able to tell that there's something extra going on here. First of all, we have split definitions; we couldn't combine all the types. The reason is that the way C#'s type system works, specifically for type definitions, we can't provide a truly universal generic definition that works the way I intended. And because of another deficiency, we can't add operators to array types either. Well, we can't do that because of the issue I just mentioned, so they're very tightly related issues; I'm essentially just expanding on the previous reason. So what we have to do is declare an entirely new type, which just wraps the integer array and provides the extensions. Now, there are a few extra things in here, like a mean, median, and mode, and some others that I don't have in the C# implementation of the extension library. But a lot of this is literally just mappers to make this appear more and more like a normal array, and in fact there are implicit and explicit conversions here to further hide that fact. This is clearly not a good approach. The two libraries largely accomplish the same thing, but because of how F# works, this is considerably easier to define in F#.
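The wrapper-type workaround described here, a whole new type declared just to hang operators off an ordinary array, plus conversions to hide the wrapping, looks roughly like this Python sketch (the class and method names are hypothetical, not the actual C# code from the repository):

```python
class IntArray:
    """A type that exists only so element-wise operators can be
    declared on it, mimicking the C# situation where operators
    can't be added to array types directly."""

    def __init__(self, items):
        self.items = list(items)

    # Each operator must be declared on the wrapper, one by one.
    def __add__(self, other):
        return IntArray(a + b for a, b in zip(self.items, other.items))

    def __mul__(self, other):
        return IntArray(a * b for a, b in zip(self.items, other.items))

    # "Conversion" back to the plain array, to hide the wrapper,
    # standing in for the C# implicit/explicit conversion operators.
    def to_list(self):
        return self.items
```

Every element type you want to support needs its own wrapper along these lines (or a pile of shared plumbing), which is exactly the boilerplate the F# version avoids. Once the wrapper exists, though, usage does read naturally: `IntArray([1, 2]) + IntArray([3, 4]) * IntArray([5, 6])`.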
Now, this isn't to bash C# or praise F# as some kind of superior language, because there are also things that C# can express considerably more easily. I don't have an example prepared, but the explanation should be obvious enough: in F#, files need to be in a certain order, and C# doesn't care. So if you have definitions that reference each other, not in a truly circular dependency, but in a way where there isn't one clear order to read them in, writing that kind of structured code in F# is remarkably difficult and requires splitting implementations across multiple files, whereas C# never forces you to do that. That's a huge advantage. So again, different languages just do different things better and worse. But let's move on to... oh, actually, as I explained, this was sort of a learning experience for me. This was C#, and now the F# library. It was a learning experience because I actually have this as part of my mathematics library, which is a collection of Ada packages. In Ada, the largely equivalent code is over here. This is much more developed, so there are some unique things like comprehensions, but for the most part all of the operators are provided here in a generic fashion. You can see I have a somewhat easier time implementing this in Ada than I do in C#, but clearly nothing like the brilliant succinctness that F# was able to provide. The point should be obvious. So let's move on to the other side of this package, or implementation, I guess; there are two different packages. You can see the operators directly; this is where all the generics are actually instantiated. Now we'll just go over here, and I'm going to do up some mock-ups. We can ignore those; I still want to save them, but we're just going to ignore those.
Actually utilizing the extension library the way I had implemented it in F# looks largely like this. That's the F# example: very succinct, no extra stuff, and it looks largely like what you would expect in math. That's sort of the point behind functional programming languages; they are largely mathematics-based, or mathematics-influenced, however you want to put it, and they're very good for math-based tasks. The C# example is... well, let's be fair and use var there; you don't really have to type the types out, because it can infer those. But then if we do A plus B times... and just some numbers... that is disgusting. There's really no other way to put it. Clearly, the way C# works is not good for this type of task. So even though these use the same underlying engine, both being .NET runtime-based languages, the syntax of the code is considerably different, and one of them, the F# one, is clearly far better suited for this type of task. And the Ada example, not writing out the whole file, just the relevant little snippet, semicolons and all: you can see that, at least as far as using the library goes, the Ada example is very similar to the F# example. It's a little more verbose in that you need to include the type, but otherwise it's very succinct, and the expression part especially is, for all intents and purposes, identical between the two languages. Now, I'm not going to have as many examples for the next thing I'm going to show. Most programmers are going to be very familiar with exceptions and how exception handling works. It's an obvious approach to dealing with the inevitable errors that occur in many programs. It's not actually the only approach you can take, though, and one alternative is goal-directed execution.
As far as I know, it was introduced with Icon and then greatly expanded in Unicon, two languages you've probably never heard of. Icon is sort of an enhancement of SNOBOL, although its approach to pattern matching is a bit different, and Unicon sort of combines SNOBOL's and Icon's pattern matching capabilities. I'm not going to delve too deeply into that because it's not really relevant to this video; I just wanted to give some background on these languages. The whole language family is very good for text processing, so the example presented here should not be that surprising. Actually, we'll cover this little snippet right here. The fact that you can write something like this, or like this, in the language is nice syntactic sugar. It's not a huge thing, and you can do without it pretty easily, but it is nice syntactic sugar. I do actually have a video where I sort of implemented this in Ada, but because of the syntax differences you had to use some parentheses inside this part to get it to work, which is less than ideal; in a proper goal-directed execution language you don't need the parentheses, because it has an obvious flow. But this is where it really starts to get good. Read operations can always throw an exception, because no amount of static code analysis can promise you that the file you're trying to read exists. It might have been deleted. It might be corrupt. There might be nothing to read: you might be able to open it and hit an immediate end-of-file. There are a lot of reasons for a possible exception, and you can always put in an exception handler and handle them all; that's what most developers are going to be familiar with. But in goal-directed execution, the read function can just return a fail state that is then used in the larger operation. In this case, if it cannot assign the result of read to a, then, because there is no else part, it just doesn't do anything.
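That fail-on-read behavior, and the `while write(read())` loop discussed next, can be sketched in Python using `None` as a stand-in fail state (a loose analogy only; in Icon, failure is built into expression evaluation itself rather than being a sentinel value, and the function names here are inventions):

```python
import io

FAIL = None  # stand-in for Icon's expression failure

def read_line(f):
    """Like Icon's read(): produce the next line, or fail at end
    of file instead of raising an exception."""
    line = f.readline()
    return line.rstrip("\n") if line else FAIL

def write_line(value, out):
    """Like Icon's write(): if the argument failed, propagate the
    failure; otherwise write the value and succeed with it."""
    if value is FAIL:
        return FAIL
    out.append(value)
    return value

f = io.StringIO("one\ntwo\nthree\n")
out = []
# Icon's `while write(read())`: keep looping as long as the whole
# expression succeeds; read's failure flows through write and
# terminates the loop, with no exception handling anywhere.
while write_line(read_line(f), out) is not FAIL:
    pass
# out is now ["one", "two", "three"]
```

The important part is that failure is just a value flowing through the expression, not a control-flow event that has to be caught.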
That is, if there was nothing to read, there's nothing to do, and there is no exception at all. If you were able to read something, if a was able to be defined, then it simply writes it back out. And this can very conveniently be put into a loop with the larger statement, just while write(read()). Quite literally, as long as read succeeds, there's a value to write. If read fails, the fail state is passed to write, and because write has a fail state, it passes the fail state back up. This is similar to exception propagation, but there's no whole exception object with a stack trace; it's just a simple fail-state value. This approach, while it probably seems really elegant from this example, does have its flaws. Exception handling is really something you're going to want for critical systems. If you are trying to use goal-directed execution to write something like a subway control system, please stop; you really want exception handling for that. But the point is that there are all these different approaches that excel at certain things and trip up on others. We have many different programming languages for the same reason we have many different spoken languages, and that's okay. It's a good thing. The only issue is people treating a language like a religion, trying to convert others rather than just trying to solve problems. They get like this by thinking that the problems their language solves are the problems other people experience, and it just isn't true. We all favor different languages because they fit our way of thinking better and fit the problem domain better. Very, very rarely have I seen people use a poor language for the task they're actually dealing with. But when the only thing you look at is that someone is using a language that isn't the language you use, and you don't ask questions, you just immediately go at them, you're like the Jehovah's Witnesses of programming, and it's ridiculous.