One of the more distinguishing features of Ada is its very sophisticated type system, which has a lot of unique things about it, like declaring new instances of even base types, something you really don't see very often, and also subtyping, which is used to do an inheritance-like thing with base types. Now, exactly why this would be useful, or how to use it, is not obvious if you're coming from a lot of other languages. Maybe you can come up with a few examples on your own, but there are a few other things about Ada, like attributes, that really warrant an explanation-and-demonstration tutorial on its type system. For this video, we're going to be doing just the basic types, so integers, floating points, modulars, and fixed points; arrays, records, tagged records, and things like that are going to be in later videos.

So let's begin by showing off just the integers. In the Standard package, there are a few predefined types. This is pretty similar to the intrinsic types that would be defined on other platforms, and there is a standard Integer, so you don't need to constantly define your own integer; in fact, in most instances, I would just recommend using the standard types. Assigning a value to an integer is pretty straightforward: we can just declare A as an Integer and assign it one, and if we want to actually make this do something, let's print out "A is...". Now here we get into one of the first attributes, the Image attribute, and this is used to convert any type that supports it into a string. So in a language like, say, C#, where you have a ToString method defined for all objects, this is similar to that. Now if we pull up a terminal, you can see that it does actually print that out. Now, just to show off a few more attributes, since they're pretty useful things to know.
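As a rough sketch of what's being typed on screen (the procedure name and the variable `A` are just what I'm using here, not necessarily what's in the video):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   A : Integer := 1;   -- the predefined Integer from package Standard
begin
   --  'Image converts the value to a String, much like ToString in C#.
   --  Note that Image puts a leading space in front of non-negative numbers.
   Put_Line ("A is" & Integer'Image (A));
end Main;
```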
For those of you who are already familiar with Ada, you'll notice a little problem is about to happen, but I'm doing this just to show it off. It complains about invalid operands here, and this is because Ada's approach to types is strict enough that concatenation can only be done between characters and strings. Given that the Integer'First and Integer'Last attributes return integers themselves, they can't be concatenated this way, so we need to actually call Image on them as well. I personally do not like that this is something you have to do. That being said, Ada really isn't meant for text processing, so I get it. I get it. I just wish it could be a little bit different. But now it actually compiles, and you can see that it works.

Now, these happen to be huge numbers, and for those familiar with the underlying workings of the CPU, you'll notice that this indicates a high probability of it being two's complement, and that it is in fact just using the exact same integer type that's defined on the CPU. The First and Last attributes become quite a bit more useful when you have your own types defined, so let's show off actually defining a type, and then I can show a little bit more about when First and Last are actually useful. Let's do a Percent type. Yeah, let's do this. So that's it, assuming I didn't make any typos. And I did not.

What this does is define an entirely new integer type named Percent, which is incompatible with any other integer type. We can show that off by assigning 50 to it here. So, as I said, it is incompatible. Now, the Image attribute, like basically all attributes, is essentially an intrinsic method, as I mentioned before, so when we defined the Percent type, it was automatically defined for us as well. If we change this over to Percent'Image, this will now work. And this is even the case if, instead of just defining a new type, we actually derive from Integer. Now what is it? Is it a new integer?
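The declaration being typed here looks roughly like this (the 0 .. 100 bounds are my guess at what's on screen, but they match how the range is described later):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type Percent is range 0 .. 100;   -- a brand-new integer type
   P : Percent := 50;
   --  A : Integer := P;             -- would not compile: incompatible types
begin
   --  'First, 'Last, and 'Image are all defined automatically for the
   --  new type, so Percent'Image works where Integer'Image would not.
   Put_Line (Percent'Image (Percent'First) & " .." & Percent'Image (Percent'Last));
end Main;
```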
Yes. Okay. So if we change this again back to Integer'Image, this should fail again, and it does. Now, the difference between the two, deriving with `new` versus just declaring a range, has to do with the range of the base type; at least in this instance, because it's an integer, it's defined by its range, its First and Last. I'll show this failing, and it should be a little bit more obvious. So let's do it this way. Because this is deriving from Percent, and the Percent only goes up to 100, this should fail, because the new range goes all the way up to 500 while we're taking as our base a type that only goes up to 100. And you can see that it does fail the constraint check. If, however, we derive from Integer, this is totally acceptable, and that error goes away. But essentially, if you leave the parent type out, it's just an implicit new integer; if you put anything else in there, then it borrows from the base.

Now, sometimes you don't actually want that, and you may want to keep the new type compatible with the old one. That is done through what's called subtyping. In this instance, you must have the parent type; if we leave it out, you can see that it will complain that a "subtype indication" is expected, which is this whole thing. I don't like that error message, but whenever you see it, it's because you left out the parent type from the subtype. And with these being compatible, you can use either the subtype's Image or even the parent's Image; in both instances, these will compile and will work, unlike my typing. Subtypes can even be compatible with each other, so if we declare a different subtype, you can see that this works too. Now, the thing to keep in mind is that the range checks are still done, and are done automatically. And this is where it becomes useful, where all of this type system really starts to show why you'd want to use it at all.
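Putting the derived-type and subtype declarations side by side (all the names and the 0 .. 250 bounds here are my own, chosen to match the E-is-250 situation described next):

```ada
procedure Main is
   type Percent is range 0 .. 100;
   --  Deriving borrows the parent's range as the base, so this is illegal,
   --  since 500 is outside Percent's 0 .. 100:
   --  type Big is new Percent range 0 .. 500;
   type Big is new Integer range 0 .. 500;   -- fine: Integer is the base

   --  Subtypes stay compatible with Integer and with each other:
   subtype Score is Integer range 0 .. 250;
   subtype Small is Integer range 0 .. 100;
   E : Score := 250;
   S : Small;
begin
   S := E;   -- compiles (both are just constrained Integers), but the
             -- automatic range check fires at run time: 250 is outside
             -- 0 .. 100, so Constraint_Error is raised
end Main;
```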
So you can see that the compiler does notice that this is going to happen. It doesn't cause a failed compilation, because this is a totally valid statement, but the compiler does catch that E is currently 250, which is way outside the range specified here. So even though they are compatible, to the extent that they can be assigned to each other, the range check still occurs, and if we run this, it will throw the appropriate exception.

So this, for the most part, shows off what I wanted to show. There are a few other things I do need to cover, so let's do, ah yes, the Range attribute. So let's clear this off and do a for loop over, ah, Integer'Range. Actually, no, let's not do that large a range; that's going to generate a lot of text. So you can see that it printed off the subtype's range. The Range attribute essentially combines the First and Last attributes in a way that is recognized as a full range declaration. In many other languages, what you'd wind up having to do is hard-code the one-through-100, and if you ever change the type's definition, you'd also have to change this one, and it can be kind of tricky to track all that down.

I'm trying to come up with good real-world examples of when you'd want to use these, beyond some simple numeric things like percent types. Yeah, for a lot of things you'd want a constrained percent that's somewhere between one and 100; obviously you can have greater percents. Say, compression levels... no, not compression levels, but it sort of has to do with compression: there's a quality level, I believe in the JPEG image file format, that is specified somewhere between one and 100, or maybe it starts at zero, though zero wouldn't make a whole lot of sense. And based on how high or low that is, it affects the quality of the image that's generated.
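Tying the Range attribute and that quality-level idea together, a hypothetical JPEG-style quality subtype might look like this (the name `Quality` is mine):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   --  Hypothetical quality level, as in a JPEG encoder; change the bounds
   --  here and every loop and range check below tracks it automatically.
   subtype Quality is Integer range 1 .. 100;
begin
   --  Quality'Range expands to Quality'First .. Quality'Last, so there is
   --  no hard-coded 1 .. 100 to keep in sync with the declaration.
   for Q in Quality'Range loop
      Put_Line (Integer'Image (Q));
   end loop;
end Main;
```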
This is useful for that kind of thing, so that you essentially have all your checks written out everywhere you use that type, just by writing out the subtype, or even a fully incompatible type. More than anything, where this really shines is with hardware. Other than that, I haven't seen too much use for it, but it's there, and it does have its uses. More than anything, what you want to get out of this is the attributes, and for the other types, that's pretty much all we're going to cover.

So if we go on to modulars instead: Standard doesn't have one of these, so we'll have to actually define one ourselves, but the type declaration just looks like this. In this instance, we're defining an 8-bit modular. You could write in your own value here, like 2**16, or even just 14; it doesn't have to be a power of two, but the exponentiation syntax kind of makes it obvious that you're doing an 8-bit modular. And I forgot the Image call there. Hmm, we do, yes, because that one was the operation, not the attribute asking what this is. I'm going to have to look this up, give me a second. I think I remember what it was; it's just that there are two remarkably similarly named attributes. Right, that whole thing. How do we get the result? I wonder if that returns just a generic one. Yes, okay, so that works.

So what's going on here is that modulars are essentially unsigned integers. I really don't like calling them that, because at least from my educational experience, an unsigned integer would just overflow, whereas a modular is something like clock arithmetic, where it wraps around. The way it works inside the processor, typically, is that it wraps around, although it does depend on what processor architecture we're talking about; some of them do catch the situation and throw an overflow exception, treating it just like the standard signed integer.
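A sketch of the modular declaration and the two similarly named things being fumbled with here, the mod-reducing 'Mod attribute versus the 'Modulus attribute (the type name `Byte` and the value 1000 are my own):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type Byte is mod 2**8;   -- modulus 256; wraps instead of overflowing
   B : Byte := 255;
begin
   B := B + 1;              -- clock arithmetic: 255 + 1 wraps around to 0
   Put_Line (Byte'Image (B));

   --  'Modulus gives back the 256 from the declaration:
   Put_Line (Integer'Image (Byte'Modulus));

   --  'Mod reduces any integer into the type's range (1000 mod 256 = 232);
   --  synthesizing the same thing with the mod operator needs a cast:
   Put_Line (Byte'Image (Byte'Mod (1000)));
   Put_Line (Byte'Image (Byte (1000 mod 256)));
end Main;
```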
But this is processor specific. As for actually being able to specify the behavior you want, and rather nicely for the other kind of unsigned behavior, the one I described where it still overflows, Ada has what are known as Natural and Positive, subtypes of Integer. Natural includes zero, whereas Positive does not. So you can use the specific kind of unsigned that you want.

Now, the explanation of the attributes here: First and Last work the exact same way as for the integers, whereas Modulus returns the value we specified here. Now, two raised to the power of eight does come out to 256, so we could just as well write 256 in here and get the same result, but I'll switch back to the exponent form. And the other one I was trying to use in Modulus's place, the Mod attribute, essentially goes like this: let's pass it something like a thousand. So what this does is calculate what's essentially the modulo operation. And I do want to be a little bit careful about that, because there is also a mod operator in Ada. These essentially do the same thing; the attribute just implicitly uses the type's modulus here. We can actually kind of synthesize it with the operator, except we need some casts, so obviously what we wrote before is immensely less writing, and that's basically why you'd want to use it. I think that's everything for mod. You know, let me try something; I actually have no idea if this works at all. We don't need this, but it's a way to find out. So, for... okay, so it has it. Yeah, that should essentially be everything for modulars.

So then let's go on to floating points, since these are really the most interesting for most people nowadays. As with most other languages, there is an implicitly defined floating point type. However, in Ada it gets a little bit more complicated, because, as you'll see as we go through this, there's quite a bit of ability to define the floating point type very specifically.
And so this means that unlike, say, C or C++, where there's a single-precision and a double-precision floating point type, in Ada there are, on most platforms, three different precisions; some will have four, and I'm sure there are exceptions to that as well, because there are a lot of different platforms that Ada targets. But essentially it looks like this, and I may forget this one. Okay, that worked, so let's do a Put_Line of the float's First. Obviously massive, massive numbers, and potentially we can go even larger, depending on whether it can find a bigger one; well, I already know offhand that it will. It is a ridiculously large number, and if you really know your floating point types, you should recognize that it is in fact the 80-bit float that's available on the x86, and on whatever you want to call the 64-bit version of it. So that is an absolutely massive floating point type, considerably larger than what's available on a lot of other systems. And I have noticed, not so much with C and C++, but in a few other languages, that the double-precision float is the 64-bit one even on platforms that have an 80-bit float available to them. So being able to use this is quite nice, actually.
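A sketch of the float declarations being compared here (the type names are mine, and the mapping of digits 18 onto the 80-bit x86 float is how GNAT typically does it, not a guarantee on every platform):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   --  Asking for 18 decimal digits; with GNAT on x86 this typically maps
   --  onto the 80-bit extended-precision hardware float.
   type Big_Float is digits 18;

   --  A range can be added too: the percent idea all over again, but
   --  with an enormous amount of precision.
   type Ratio is digits 18 range 0.0 .. 1.0;
begin
   Put_Line (Float'Image (Float'First));          -- the predefined float
   Put_Line (Big_Float'Image (Big_Float'First));  -- far larger magnitude
   Put_Line (Integer'Image (Big_Float'Digits));   -- 'Digits returns an Integer
end Main;
```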
And just to show off, yeah, you can actually tell: the exponents on these are in fact the same, but if you look especially at some of the smaller digits, you can see that these do not actually have the same precision, that we lost a little bit by excluding one digit, and a little bit more again. And this sort of works the other way around as well; although that's still pretty large all things considered, the precision there is terrible.

And since I didn't show it off for floats: there is something called the Small, which is the smallest change that can happen between each representable float, so let's show that off as well. Now, as you would expect, for a floating point with a precision of one digit, it's going to be very, very bad; there are not actually a lot of representable floats there. Let's try something, because I don't know if this is something we can do. I don't think so. Yeah, no, because a loop range needs to be a discrete type. Ah, it's not important.

One other thing to show for this, I think it's Digits anyway, before we start to compare floating precisions: oh yes, Digits returns an integer, not a float, so we need to do that instead. So as you can see, we went up immensely in the precision this was able to represent, and we could go even further, but that's not really that interesting. The other thing I want to show off, the last part about this actually, is that like with the other types, you can specify a range here as well. So we can do this; in this instance, it's still using the incredibly precise Small that an 18-digit float can represent, but it's ensuring that the float is always between these two values, so essentially we have the percent all over again, but with an incredible amount of precision. One other thing I should mention, just before I forget, is that this, like anything else, does support subtypes. The one thing you want to be
aware of, however, is that with a subtype you can only declare a new range; you cannot declare a new digits precision. The only way to have a float with a different digits precision is to declare an entirely new floating point type, which of course makes them incompatible, though there are conversions between them. Subtypes, with their automatic compatibility, aren't really converted at all; that's kind of the thing. They can only have a different range.

Well, for fixed point types, we have two of them to show off. Let's do it this way; actually, I do forget things like that sometimes. So let's specify a range of negative one hundred... well, I really wish Ada would allow implicit conversions between certain sane things. An integer being implicitly converted to a fixed point type makes perfect sense; floats as well, an integer to a float makes perfect sense. There are a number of implicit conversions that should not exist, that should only be allowed explicitly, but having to add in the .0 for something like this is just silly in my opinion. I think it's obvious to everybody that you're not losing any precision by just writing the two, but yeah, you've got to do this, and I forget it all the time.

Now, the delta is very similar to the Small that exists for floating point types: it's essentially you specifying the smallest interval between each representable fixed point value, at least for the ordinary fixed point. Now, that is "ordinary" as regards the computer, so this is a base-two fixed point type; the other kind is base ten. I'll get into that a little after we cover the ordinary one a bit more, but just know that it is ordinary from the perspective of the computer, not you. So the Delta attribute is essentially the exact same thing as the Small, so let's just print that out. Oh, I know why; you probably don't want to represent the delta that way. There's another way we can do it that should give a
different result here. Yes, okay, but either way ultimately works; I just, for whatever reason, prefer being really explicit with the delta even though I don't have the range. That's just a style thing; either way works the exact same, so do whatever you prefer.

One thing I do want to point out about ordinary fixed point types on modern systems is that you don't really want to use these. Seriously, on older hardware it made sense, given that floating point types were generally slower and pretty computationally intensive, which meant a lot more power consumed, a lot more heat generated, stuff like that. On modern systems, the floating point units have gotten so good that you really aren't going to see a speed win; just use the floating point type. But that's for ordinary fixed point types; there is another one that is actually still extremely useful. What this is, is a base-ten fixed point type. As I said before, that might not seem that special, and in fact for many applications you can completely ignore that this exists, but if you work in finance at all, you've probably at least heard some programmers explain why COBOL is still used a lot in finance, and it has a lot to do with this decimal type that COBOL is able to define; Ada borrows a lot from that. Not so much from COBOL in other regards, but COBOL's decimal type support is really, really good for finance. The added precision is very helpful; for example, if we need a finer delta than just cents, say right down to the mill, I think that's called a mill, we can do that. And it might be tempting to think that the way floating point arithmetic works, everything would just work out to the same value, but there is something called floating point rounding error, as well as some other stuff; basically, within the fractional part, floating point types are not able to accurately represent all of the decimal
fractions, and so you get these slight off-bits. This isn't a huge issue for, say, a video game, but it's a problem for finance, so this is again very important for finance. I can't really give you any great examples; it's just something that, if you've done financial programming at all, you're going to recognize immediately why this is a wonderful thing, and if you haven't, you're going to have no clue why this would even be useful.

Yeah, this covers all of the basic types. As I said, the other stuff is going to get its own videos, so anything like arrays or records, anything that wasn't featured here, is going to be in its own video. I wanted to keep this pretty simple, just because it's covering simple types; I didn't want to just hammer on the details. There are a few attributes that I have not covered here, but this should be enough to get you started for the most part; the attributes I left out are things you really only see in specific instances anyway. These provide anything that you would expect to be predefined methods in other languages.

So yeah, I don't know when I'll be able to do the second part of this; I'm not sure whether I want to cover arrays or records first. But in the meantime, if you found this video helpful, please give it a thumbs up, that actually helps quite a bit. Also, don't be afraid to comment down there if you have any questions, or, I don't know, thank me, call me an idiot, whatever; feedback is nice, even if some of it isn't always constructive. And yeah, until I get the new video out, have a good one.
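For reference, the two kinds of fixed point declarations walked through above look roughly like this (the type names, deltas, and ranges are my own, not taken from the video):

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   --  Ordinary (binary) fixed point: delta is the step between values;
   --  a power-of-two delta maps directly onto the machine representation,
   --  and 'Delta reports the same step that 'Small does here.
   type Sensor is delta 0.125 range -100.0 .. 100.0;

   --  Decimal fixed point: base-ten steps, the COBOL-style money type;
   --  delta 0.001 gives mills, i.e. tenths of a cent, represented exactly.
   type Money is delta 0.001 digits 12;

   S : Sensor := 2.0;    -- note the mandatory .0 on fixed point literals
   M : Money  := 19.999;
begin
   Put_Line (Sensor'Image (S));
   Put_Line (Money'Image (M));
end Main;
```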