So, a few days ago, somebody asked me, and it's hardly the first time, whether I think C# or Ada is the better language. For those of you who have been following my stuff for a while, the answer is pretty obvious: neither. There really isn't a language that is outright better or worse in general; each of these is a tool for a different job. I had started to record a whole talk on this, and it became obvious it was going to span multiple hours. I don't want to do that; that's ridiculous, that's way too long for this kind of thing. So this whole thing is going to be broken up into multiple videos, each with its own subject. (Sorry, the cat's being all crazy; he tried to jump up between my legs and I had to protect myself from his claws. Ow. He's okay.) Instead of one huge single video, and since I can't answer that question exactly, I'm just going to point out what I think Ada and C#, as well as some others, do well and do poorly. It is going to be centered around Ada and C#, since those are the two languages I use more than anything else, but there's going to be some talk about F#, as well as two languages most of you have never even heard of: Seed7 and Unicon. For this one, I'm not going to get into any specific parts of either language. What I want to make a point of is what I think each language does well overall. Another way of looking at this is: for a given problem domain, for a given niche, which ones are suited to it. Ada is a pretty obvious one. It was made for embedded programming, specific types of embedded programming, and for the most part it does that incredibly well. It is still, generally speaking, my preferred language for that kind of thing. Now, I will say there's a certain level of complexity that is required to justify using Ada at all.
And if you're not at that level of complexity, you probably don't want to go that route. The reason is that Ada requires the use of an entire runtime, as well as considerable portions of the standard library. That runtime is rather hefty, and there's a lot that goes into it. If you don't have an underlying operating system to rely on, that means you're implementing all of it yourself and can't just bind to equivalent functions that do it for you. Now, this does wind up being beneficial in many regards: in essence, the Ada runtime and standard library is the operating system, and you can take that across multiple different devices and get largely the same experience. That is actually incredibly useful, and it's a major factor behind why I think Ada is good for embedded programming. But if you're doing much simpler types of embedded programming, it's typically not justified. The complexity of developing that entire runtime is often greater than the complexity of writing your simple embedded control system. Of course, this depends on the level of sophistication of what you're doing. If you want, say, a timed, light-based shutter control, a little control module with a sensor and a motor that you put near the window so it winds and unwinds the blinds or the shutter, however you want to go about that, you wouldn't want to program that in Ada. That's way too simple an embedded device to justify it. Similarly, sensor networks are going to be way too simple; the Ada runtime is going to eat up way too much power, and it's not justified. However, more complex control systems, like what you see in a car (Toyota is using Ada in its cars now) or in satellites, those make sense. They're much more sophisticated machinery, and many of Ada's language constructs are very well suited to those types of tasks. That being said, there are of course many areas where Ada does not do very well.
In fact, outside of embedded programming, the only thing I think it really does well at all is mathematics, and it is getting really good at that. Quite a few of the Ada 202x features are very obviously focused on math and will suit it very well. It's getting to the point that, after that release, as long as the compiler quality is good, I would consider Ada probably the ideal language for hardcore math computations, definitely making it a big competitor with the likes of, say, MATLAB or Maple or others that are more specialized in that kind of thing. That being said, it needs some good support packages from third parties to really round that out, but it's definitely getting to where it has that potential. So that's neat. Now, more to where it doesn't do well as a language overall. Text processing is a really big one. The twelve different string types Ada has get ridiculously confusing. Furthermore, they've been somewhat unwilling to get with the times: the string types represent various UCS encodings, not UTF encodings, and that winds up being a bit of a problem. Some runtimes marshal that, some runtimes don't. Some runtimes keep it as UTF internally but expose everything as if it were UCS, or really just as Unicode scalar values, which is actually ideal; that's what you want. But it's very difficult to find runtimes that actually do that, and they tend to be non-conventional. This winds up creating all sorts of problems, because you're not working with the encoding you expect to be using, and if your runtime does not marshal it, you have to convert it every time you send it off to another component. If everything you've written is in Ada, then it winds up being fine.
But consider that you can create files that are oddly formatted, and then other things don't know how to read them properly, because Ada has weird file formatting rules, including the use of page breaks and other conventions that other software doesn't normally follow but that Ada mandates. So it winds up being really weird in that regard. Now, while it's arguably a bit of a problem that Ada doesn't have a large amount of text processing libraries, or text processing functions and other support types in the standard library, that can easily be addressed by third-party libraries. I am not a big stickler for everything being in the standard library; I don't think that actually needs to be the case. In fact, I tend to support the idea of incredibly small standard libraries that have only what they need, with everything else being provided by additional first-party or third-party libraries. Another area where Ada falls badly short, though, and this has to do with a specific language feature that it sort of has, but not really, is responsive application development. That's very important for GUIs, but there are other uses as well. The issue here is that Ada presumes a very specific, very complex concurrency model, CSP. And it is nice in that there's a nice formal language designed around it. But if you want just an event, it winds up being rather difficult to create just an event. Multicasting or broadcasting an event, again, winds up being a rather complex situation when it really shouldn't be. Events are incredibly important for this style of programming, so when you lack them, everything becomes much more complex. What people usually wind up doing is using a single callback instead of an event. Now, that does sort of work.
And if you are specifically constrained to only ever have one handler for an event, then you really do want to go the callback route; it's far simpler all around, and the code execution is going to be faster. But if you are multicasting or broadcasting, you really need proper event handlers, and getting that kind of effect through CSP is just ridiculously clunky. There's another thing Ada does... well, we can hold off on that. So, on the flip side of things, it really shouldn't be all that surprising that I think C# does text processing really well. That's kind of the reason I switch between those two, why they're the two major languages I use. It makes sense that I'd want the other language in my toolkit to fill the gaps the first one doesn't cover, right? If Ada is good for embedded programming but bad for text processing and bad for GUIs, then it should make sense that I'd want a language that is good at text processing and good at responsive programming like GUIs. C#, I think, is fantastic for both of those. Now, we don't need to talk very much about why I think this is the case, because having explained the other side of things, it should be pretty obvious. For example, C#'s concurrency model is based on multiple primitives that you compose together. One such primitive is delegates and events. And you can create CSP-like behavior in C# by combining certain primitives, namely the Task type and either pipes or events, depending on how you want them to communicate. One approach will actually give you something closer to the actor model, whereas the other will give you something much closer to CSP. But you can compose these; they're primitives that you put together. That winds up being hugely useful. And remember, CSP tasks still have to be actual objects that you describe and everything.
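The callback-versus-event distinction above can be sketched very compactly. This is an illustrative Python analog, not .NET itself: C#'s `event` keyword gives you multicast behavior natively, and the class and method names here (`Event`, `Sensor`, `subscribe`) are made up for the example. The point is that a single callback slot allows exactly one handler, while a multicast event keeps a list of handlers and broadcasts to all of them.

```python
class Event:
    """A multicast event: any number of subscribers, all invoked on raise."""
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def raise_(self, *args):
        # Broadcast: every subscribed handler gets the notification.
        for handler in self._handlers:
            handler(*args)


class Sensor:
    """A worker that broadcasts readings instead of taking one callback."""
    def __init__(self):
        self.on_reading = Event()

    def poll(self, value):
        self.on_reading.raise_(value)


sensor = Sensor()
log = []
# Two independent subscribers -- impossible with a single callback field.
sensor.on_reading.subscribe(lambda v: log.append(("display", v)))
sensor.on_reading.subscribe(lambda v: log.append(("recorder", v)))
sensor.poll(42)
# Both handlers ran: log == [("display", 42), ("recorder", 42)]
```

With only a single callback, adding the second subscriber would have silently replaced the first; the list-of-handlers shape is the whole difference.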
They're not just what looks like a simple field that you work with. So composing things is not all that different: you're still defining a new type, a new task, that just happens to have events associated with it, and as it's doing the work inside, it's firing off those events. And, well, you now have your communication between your processes. (Hopefully I got the angle right; it's a little harder when it's a front-facing camera and you're no longer facing it towards yourself. He definitely used to be somebody's cat, but he was a stray I found one night, clearly on his own for a while, and real skinny, his hair super patchy from a combination of malnourishment and flea rash. He's been healing up well, and he's super friendly. It's adorable.) So the other side I need to talk about is how C# does text processing really well. The big reason is that C# does something very similar to Ada, and it's the right decision: its strings are fixed-length. Now, there is an allocation detail I slightly disagree with, where I think Ada and Rust get it a little more right, but we can talk about that in a later video. The strings are not changeable. Once you have a string defined, if you "change" the string, the name becomes a pointer to a different string entirely; the string itself does not change. That is very important for performance reasons. And I mean, the overwhelming majority of data out there is textual, so it's very important to have high-performance text processing. But that's not the only thing I think C# does right; there are things it does right that Ada doesn't. There's only one string type, end of story. That's significant. Now, historically it made the right call; in more modern terms it doesn't, but it can very easily be adapted to do the right thing, and in large part the libraries that I've been writing do that.
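The immutability point above can be demonstrated with Python strings, which happen to behave the same way as .NET strings in this respect. This is a sketch of the general semantics, not of .NET specifically: "modifying" a string actually builds a brand-new object and rebinds the name, while any other reference to the original string is untouched.

```python
s = "hello"
alias = s          # a second reference to the very same string object

s = s + " world"   # builds a NEW string; does not mutate the original

assert alias == "hello"      # the original string is unchanged
assert s == "hello world"    # the name now points at a different string
assert s is not alias        # two distinct objects, not one mutated one
```

Because no string can ever change underneath you, strings are safe to share freely between threads and to use as dictionary keys, which is a big part of the performance story being described.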
String in any .NET language is a UTF-16 little-endian encoded string. You have a single encoding type. It gets marshalled as necessary, but it is a single encoding type, which means you are always working with the exact same thing. And because it is a UTF encoding, you can represent the entirety of all Unicode scalar values in it. What I mean by "historically it got things right" is that at the time this was originally implemented, they were using UCS-2, and when the UTF encodings came along, it got changed to UTF-16, and the various functions and whatnot were updated to handle this. Ada didn't do that. So that's actually a pretty significant thing. It means you can now represent anything in that one string type and you don't need yet another string type. Now, the string is essentially just an array of char, and char is the individual component of that. It's a circular definition, I know, but there's a reason I'm putting it that way: if string is a series of UTF-16 little-endian code units, then char, being what makes up the string, is a UTF-16 little-endian code unit. That makes it a little awkward when working above the Basic Multilingual Plane, and I understand why, given historical reasons, they couldn't just update this to work with the scalar values. What I mean by saying they dealt with this in a satisfactory way is the Rune APIs: they feel very much like working with the char type, and in the few instances where they are different, they are justifiably different. There's only one area where I really disagree with them, but it's such a minor, rarely used thing that it doesn't even matter; it's totally fine. It's very easy to adapt your code to use Rune, which is technically another character type, but not really.
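The code-unit-versus-scalar-value distinction behind char and Rune can be made concrete. This is a Python illustration of the underlying Unicode mechanics, not .NET code: a character above the Basic Multilingual Plane is one Unicode scalar value, but in UTF-16 (the encoding .NET strings use) it becomes two 16-bit code units, a surrogate pair, which is exactly what makes per-`char` processing awkward there.

```python
import struct

clef = "\U0001D11E"                 # MUSICAL SYMBOL G CLEF, U+1D11E (above the BMP)
assert len(clef) == 1               # Python counts scalar values: one character

utf16 = clef.encode("utf-16-le")    # the encoding .NET strings use internally
assert len(utf16) == 4              # two 16-bit code units = 4 bytes, not one unit

# Pull out the two code units: both land in the surrogate ranges,
# which is why code that walks a string char-by-char must be careful
# above the BMP.
high, low = struct.unpack("<2H", utf16)
assert 0xD800 <= high <= 0xDBFF     # high (leading) surrogate
assert 0xDC00 <= low <= 0xDFFF      # low (trailing) surrogate
```

A char-level view sees the two surrogates; a Rune-level view sees the single scalar U+1D11E. Both views are legitimately useful, which is the point being made about keeping both APIs around.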
It's not like in Ada, where the closest analog to char, Wide_Character, represents all the Unicode scalar values that can be represented with 16 bits, while Wide_Wide_Character represents all the Unicode scalar values that can be represented with 32 bits. It's not like that. The .NET char represents a UTF-16 code unit, whereas Rune represents a Unicode scalar value. They're semantically different, and there are situations in which you are actually interested in how the string is encoded, so I do appreciate that they're both there. Hell, there are situations in which you can actually optimize things a little by sticking with char, because you know there are going to be no surrogate pairs. That's convenient. They're rare situations, and you've got to be very careful doing it, but you can. Now, that largely covers my reasoning for each of them. Like I said, I think each of them does different things well, and using the .NET runtime on an embedded device is ridiculously impractical. The processing and memory requirements are through the fricking roof, it's super hard to find runtimes, and the way the .NET runtime works, it's so fricking huge that... have fun implementing that entire thing. The Ada runtime, despite being big, is quite a bit smaller, and you can easily get away with not supporting the entire standard library; the runtime itself is actually quite small, owing in large part to there being no virtual machine to support. Each of them does different things well; each has its own purpose. Now, I'm trying to keep this one from getting super long, and luckily we're at a good point right now, so I am going to talk a little bit about the other three: F#, Seed7, and Unicon. For F#, there's not a whole lot I have to say about it; we'll talk about some specific examples later on when we get to them.
One of the big examples is generics and how they're handled: F# shows in a fantastic way that it's possible to write generic code that operates on what is essentially an interface, but not actually an interface. You can write code that works with a broad number of types, just as if they all inherited the same interface, but without requiring them to have an interface definition. And that's significant. It doesn't seem like much, but it has huge implications that are incredibly useful. Similarly, F# does an absolutely fantastic job of showing how different from, yet compatible with, the existing .NET ecosystem you can be, that is, C# and Visual Basic and the entire body of libraries written in them. You can often write code in F# and call it from C#, or conversely write code in C# and call it from F#, and easily do bindings between them if necessary. You can mingle quite a lot, and that's significant for something else I want to talk about. Overall, though, F# is not hugely impactful for me. I use it, and I'm glad it exists, but there's not a huge amount to say about it; it's far more niche than the other two for me. Seed7 is particularly interesting in that it's primarily more of a research language. Its forefather, MASTER, was specifically written as part of Thomas Mertes's master's thesis, I believe, hence the name. Seed7 took a lot of those ideas, a lot of the lessons learned, and refined them, but it's still primarily a research language, just a bit more usable. The thing I want to highlight sounds minor but is absolutely, insanely impactful, in that it completely changes how you work with things: the language is definable in itself. The syntax of the language is definable in itself. You can define new syntactic constructs, new control constructs; say your language doesn't support parallel loops, well, you can add them in.
That is extremely significant, and there are tons of implications there despite it seeming like such a minor thing. It's a big deal. So come future videos where it's relevant, I'll talk about and show some examples of that. I've prototyped how that actually works, using a somewhat different approach than Mertes took, but it works; I have a working method for implementing it, and it's pretty awesome. The other one, Unicon, does something called goal-directed evaluation. Consider that, especially in more recent times, exceptions are getting a bit of a bad rap: they're useful for when the system entirely needs to abort and you want to report why, but for general error handling you generally shouldn't use them, because they're way too heavyweight and there are other, better ways of addressing that. Goal direction, I feel, was a little too early in what it did, because it came out at a time when exceptions were the big thing everybody should be using, and it was saying no, we shouldn't, there's a different way to go about it. In goal-directed systems, you can essentially view it as there being two values returned from every single function; even in a procedure with no defined return type, there's still one return. That common return between all of them is the success-or-failure state. If you're familiar with the C standard library (C, not C++), this kind of sounds like the errno system it has. You don't see that used very often; in fact, even major C programmers wind up returning an error code directly: you'll pass the actual result out through a parameter, but return an error code where zero means success and anything above that is a specific error with some special meaning. It's more similar to the errno.h approach, it just does all of it automatically. And I'll put in a few snippets of some code that can be written in a goal-directed style.
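Since actual Icon/Unicon snippets don't transcribe well here, here is a rough Python analog of the goal-directed idea described above. This is a sketch, not Unicon: in a goal-directed language, every expression either succeeds (producing a value, possibly several when resumed) or fails, and failure is ordinary control flow rather than a thrown exception. Generators model that reasonably well: yielding is success, and a generator simply ending is failure. The function name `find` mirrors Icon's string-scanning flavor but is made up for this example.

```python
def find(needle, haystack):
    """Succeed once per occurrence of needle; fail (stop) when exhausted."""
    start = 0
    while True:
        i = haystack.find(needle, start)
        if i < 0:
            return          # failure: the generator simply ends, no exception
        yield i             # success: produce a result; can be resumed for more
        start = i + 1

# Resuming the expression for every success -- collect all results:
positions = list(find("ab", "abcabcab"))
assert positions == [0, 3, 6]

# Failure is not an error: a search with no match just produces nothing,
# and control flow moves on -- no try/except anywhere.
assert list(find("zz", "abcabc")) == []
```

Compare this with raising and catching an exception on every unmatched search: the failure path here costs essentially nothing, which is the efficiency argument being made for goal direction over exceptions.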
And you can see, even just from a syntactic standpoint, it's very convenient. There's some level of this that I have implemented in my library, Stringier in particular, but it's nowhere near as effective as what an actual goal-directed language could support. And I want that, because it's a big deal: it's way, way more efficient than throwing exceptions for every little issue and then handling them, and it handles the overwhelming majority of cases where exceptions would be used. The only time you need exceptions after that is, like I said, when you actually need to abort the entire execution and report why the program had to crash. For anything that can be handled, goal direction has actually got it covered. So that's it for this one. Future videos are going to cover specific features that each of them has, primarily focused on C# and Ada, but I'll talk a little bit about the others and why I think they do things well. Until then, have a good one, guys.