I know I haven't done a presentation in a while, but I actually have quite a few of these made up. I've been putting it off because I stopped using Microsoft Office and thought, oh, I need to reformat these, but it's fine. The content itself still presents fine; it's just that LibreOffice doesn't like the background and won't render it perfectly. Not a big deal. So this time we're going to be covering basic and complex types. This is a very general, programmer-oriented video. It's not about Ada specifically; it's for anybody looking to get into programming. First things first, we'll cover the basic types before the complex types. The most fundamental type is the bit, and this is either a zero or a one. That's by convention; you can also refer to them as on/off or true/false. It's just a binary state, and zero-or-one is simply the convention that got adopted. On a computer, fundamentally everything is a bit in one way or another. We use encodings, or group numerous bits together to refer to larger values, but everything is fundamentally bits. Then we have the byte, which is the most common grouping of bits: eight bits together. My slide said it holds zero to eight, which is a typo, I'm sorry about that. Eight bits starting from zero goes up to 255, so a byte holds a total of 256 values. Four bits can be represented in hexadecimal as one digit, so a byte is two hex digits, and it's very common to group two or four bytes together, which on most processors is what's referred to as a word. That slide had numerous errors; if you didn't hear me discuss something specific, it's because there were errors I didn't catch when I reviewed this earlier.
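Since the talk is language-agnostic, here's a quick sketch in Python (any language would do) of the relationships just described: eight bits give 256 values, four bits map to one hex digit, and a byte maps to two.

```python
# A byte is 8 bits: 2**8 = 256 distinct values, numbered 0 through 255.
print(2 ** 8)               # 256

# Four bits (a nibble) fit in one hexadecimal digit...
print(format(0b1010, "x"))  # "a"

# ...so one byte (8 bits) is exactly two hexadecimal digits.
print(format(255, "x"))     # "ff"
```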
I don't know how, but the mind has an interesting way of correcting things when you know the right answer. An integer is a grouping of bytes which represents a whole number, numbers as we normally think of them. I'll discuss a few things before that. The integer has a varying size in how many bytes it consumes. This is typically a word, whatever the word is on your architecture, but there are numerous other types. C# has 8-, 16-, 32-, and 64-bit integer types, which refer to the exact number of bits used to represent the value. Ada has Short_Short_Integer, Short_Integer, Integer, Long_Integer, and Long_Long_Integer. But they all fundamentally work the same: a collection of bytes that specifically represents a whole number as we normally think of it. The range depends on the exact platform you're on. It will be from either negative x to x, or something like negative x to x minus one, so the positive range is just one shorter than the negative range. I have seen one platform where it was negative x minus one to x, but the one you'll mostly see is negative x to x minus one; there are a few ways the range can skew slightly, depending on the underlying platform. A good compiler will catch these things for you anyway. Adding one to the very last value causes an integer overflow, because the computer can't hold any more than that, and so it's an issue. Likewise, subtracting one from the very first value causes an integer underflow, which is essentially the same thing; they're both out-of-bounds errors. So the integer can only hold a finite range, which should make sense. It's like having a limited amount of space on a sheet of paper: you can only write a number up to a certain size. Next we have the modular. You've encountered this, whether you've realized it or not.
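A sketch of those ranges, assuming the common two's-complement representation (the "negative x to x minus one" case above). Python's own integers never overflow, so the overflow condition is checked by hand here purely for illustration:

```python
# Range of an n-bit two's-complement signed integer:
# lowest = -2**(n-1), highest = 2**(n-1) - 1 (positive side one shorter).
def int_range(bits):
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(int_range(8))    # (-128, 127)
print(int_range(32))   # (-2147483648, 2147483647)

# One past the last value is out of bounds - an overflow.
lo, hi = int_range(8)
value = hi + 1
print(lo <= value <= hi)  # False
```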
Unfortunately, math classes tend to be very lacking in ever describing this, or how it's fundamentally different from an integer. So again, a group of bytes which represents a whole number. Superficially, these are remarkably the same as integers, and are in fact represented the same way. Superficially. But the range is always going to be from zero to one under whatever the mod is. So if you have a modular of, say, 12, like a clock, you can get all the way up to the 11th hour, but the moment you reach the 12th hour, it fundamentally resets back to the 0th hour. Some people skew it slightly and start from 1 and go up to 12, but the moment you go one hour past 12, it's not the 13th hour, it's the first hour again. Regardless, that wraparound behavior is what makes a modular. Degrees, or any circular angle, work the same way: 360 degrees is just zero degrees. 359 degrees plus two is one degree, not 361 degrees. So this is definitely not the same as a positive number. A positive number is an integer constrained from one up to the integer's last value; a modular wraps around instead of going out of bounds. This means the very last value plus one equals zero, which was the very first value, and zero minus one equals the very last value. Again, exactly like a clock or any circular angle. Next, the enumeration. It isn't a critical type, there are ways to work around it, but it is a very useful type. It's a grouping of bytes which represents a named number. The underlying implementation of an enumeration is just a number; that's why it can be excluded from a language and you can still program just fine. But that naming is hugely beneficial.
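The clock and degree examples above map directly onto the `%` (modulo) operator, a quick sketch:

```python
# Modular (wraparound) arithmetic, like a 12-hour clock.
MOD = 12
print((11 + 1) % MOD)   # 0  - last value plus one wraps to the first
print((0 - 1) % MOD)    # 11 - first value minus one wraps to the last

# Degrees wrap the same way: 359 + 2 is 1 degree, not 361.
print((359 + 2) % 360)  # 1
```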
Whenever you have something that really should go by a name, like a choice, that should be done with an enumeration, in rare cases with a string, but overwhelmingly with an enumeration, so that it has a name, which is very friendly to the programmer, but is just a number, which is very friendly to the computer. As far as the programmer goes, the focus is really just the name; treat it as an identifying name. These tend to be a bit special-purpose, things like traffic lights. You wouldn't want to refer to the first light, the second light, the third light; you'd want green light, yellow light, red light. For a traffic light you might add another property, a Boolean for whether the active light is also flashing, but the exact light that's lit up would generally be implemented as an enumeration. Next, the character. This isn't quite what comes to mind for most people unfamiliar with programming. It's a group of bytes which represents a glyph or control code, a written symbol, and it is a type of enumeration. It's used for letters, and for the glyphs of numbers: the written symbol "2", not how the integer or modular value two would be defined. There are numerous different types of spaces, and those are still fundamentally characters. And there are control codes, which do things like tabs, new lines, carriage returns, ringing the bell on a terminal, things like that. They're implemented as characters because nothing else really made sense, but they're not anything we'd think of in normal writing. Then we have the Boolean, which is a grouping of bytes representing a Boolean logic value. For those not familiar, Boolean logic is just true or false. It technically could be a single bit, but for performance reasons it's not typically represented that way.
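A sketch of both ideas, the named-number enumeration (the `TrafficLight` names and values here are illustrative, not from the talk) and the character as a kind of enumeration:

```python
from enum import Enum

# An enumeration: a name friendly to the programmer,
# just a number underneath, friendly to the computer.
class TrafficLight(Enum):
    GREEN = 0
    YELLOW = 1
    RED = 2

print(TrafficLight.RED.name, TrafficLight.RED.value)  # RED 2

# A character is itself a kind of enumeration: the glyph "2"
# is not the integer 2 - its underlying code is something else.
print(ord("2"))    # 50
print(chr(50))     # 2
print(ord("\n"))   # 10 - control codes are characters too
```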
It'll be represented using whatever the fastest read for that processor is, often the word, although in some cases it will be done with 8 bits. But in a language that allows you to pack values down, you can do this in Ada, and I'm sure there are others as well, it is possible to represent a Boolean value in a single bit. Just be aware that even though that's space-efficient, the processor isn't really optimized to read that way, so you'll get slower reads. And there are other forms of logic: Kleene's, Łukasiewicz's, or however you pronounce the name, I'm not great with Polish, and then there are tons of other ones. Fuzzy logic, for instance, originally came up by Lotfi Zadeh, an Azerbaijani-born mathematician. There are plenty of other logic systems, but Boolean is the one you'll always see with computers, because it's essentially just a representation of the bit. You use Boolean logic for control statements, if this then that, looping conditions, and similar. You'll see a lot of Booleans. Next, fixed-point numerics. These are essentially an extension of the integer: a group of bytes which represents a rational number. Some programming languages will mistakenly say real number; a computer can never represent a real number, for space reasons. There are both binary and decimal fixed-point types. It's a somewhat advanced topic, but binary cannot always be perfectly converted to decimal and vice versa; there are weird rounding errors that occur when you convert between the two, so both of these exist. Fixed point is really good for accuracy and speed. As long as the number you want to hold is within the range of the fixed-point type you're working with, it will continuously, accurately represent that. You won't get odd truncation errors or anything like that.
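Python has no true fixed-point type, but its standard `decimal` module does exact decimal arithmetic, so it can illustrate why money code prefers decimal types over binary ones. A sketch, not the only way to do this:

```python
from decimal import Decimal

# Decimal arithmetic: 0.10 + 0.20 is exactly 0.30,
# with none of the binary-to-decimal conversion error.
total = Decimal("0.10") + Decimal("0.20")
print(total)                     # 0.30
print(total == Decimal("0.3"))   # True

# Money math stays exact.
price = Decimal("19.99")
print(price * 3)                 # 59.97
```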
And it is good for speed, because fixed-point arithmetic is typically done on the normal arithmetic logic unit and not the floating-point unit. Now, on modern computers the floating-point unit is just as fast, so that point is largely trivialized nowadays, but fixed point still has its uses. In monetary systems especially, you will almost always see decimal fixed-point types for doing all of the money calculations. Sorry, I went through that a little too fast; the last item was just that a fixed point can overflow exactly the same way an integer can. Literally think of a fixed-point number as two integers grouped together, where one of the integers refers to the fractional part. Anything that can apply to an integer can apply to a fixed point, overflow errors included. Then we have the floating point. This is basically scientific notation, or engineering notation for those familiar with that: a grouping of bytes which represents a rational number with an exponent. That exponent is important. There are also both binary and decimal floating-point types. Most languages will not let you work with decimal floating-point types, but some do, so these are a thing that exists. Again, it's basically scientific notation. They're good for very small or very large numbers, and this is because of the exponent. They lose some precision, because the fractional part can have truncation errors and other issues. I'll do a more detailed video on floating point eventually. But floating point is what you want whenever you're working at very small scales, micron-level stuff, or very large scales, planetary-level stuff, or regularly working with massive units, thousands of tons and such. And this is the thing a lot of people aren't aware of, and some people might even get defensive about: they're not as accurate.
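Both points above, the precision loss and the exponent's enormous range, are easy to see with Python's binary floats (IEEE 754 doubles):

```python
import sys

# Binary floating point is scientific notation in base 2.
# 0.1 has no exact binary form, so small errors creep in:
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# The exponent buys huge range before overflow finally happens:
print(sys.float_info.max) # about 1.8e308
print(1e308 * 10)         # inf - the exponent can't grow anymore
```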
Again, I will get into a more detailed video on this later; it doesn't belong here, but floating-point types are not as accurate as other types. There are precision errors that floating points experience. They can also overflow and underflow, though it takes quite a bit of effort because of the exponent. You will eventually reach a point where the exponent cannot grow anymore, and it overflows then. Same with underflow: keep pushing toward a more and more negative exponent and it will eventually underflow. So, on to complex and aggregate types. This isn't going to be complete, because there are a ton of these, but it will cover the most common ones. The array has to be the most common type. It gets used all the time, and it's just a grouping of other types. The type it groups should be the same type. I say "must" here; there are some dynamically typed languages that allow arrays of any type and then use type inference to figure out what they're actually working with, but in the majority of programming languages it must be the same type. And even in languages that allow dynamic typing, I would strongly, strongly recommend keeping the same type throughout the array anyway; it'll save you some headaches. The elements in the array are assigned a number. Some languages allow indexing an array by an enumeration, but if you remember, those are fundamentally still just numbers to the computer. Arrays are very effective at grouping data. In fact, a vector like you'd cover in math class can easily be represented as an array, because it's just a grouping of the points, which are all just numbers. There are plenty of other uses; in fact, the string is quite literally an array of characters, the string being a representation of text. Now, arrays are fixed in size.
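A minimal sketch of the two points just made: an array as numbered elements of one type (a math-class vector), and the string as literally an array of characters. Python lists can mix types, which is exactly the dynamic-typing allowance warned against above:

```python
# A 2-D vector is just an array of numbers; elements are numbered.
point = [3, 4]
print(point[0], point[1])  # 3 4

# A string is literally an array of characters.
text = "hello"
print(text[0])             # h
print(text[4])             # o
print(len(text))           # 5
```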
There are some neat ways of working around that, which some languages use to sort of hide the fact that the array is really fixed in size, but it comes down to some trickery. When you're working with an array, try to think of it as fixed size for performance reasons, even if the language does allow trickery like expanding the size of the array; still try to think of it as fixed size. And there are two ways to actually make an array. The one you'll see in the overwhelming majority of languages, for compatibility with C, is null termination. I don't want to say C introduced this, maybe it did, but it was definitely the one that popularized null-terminating the array. This turns out to be a really bad idea for quite a few reasons: it makes array overflow errors really easy to pull off, and those are a pretty big source of bugs and even hacking approaches. The other way is to store the bound of the array in the type, and then the data of the array afterwards. The bound effectively constrains the array and makes overflowing it quite a bit more difficult. I don't want to say impossible, just quite a bit more difficult. So now we have the string, which as I mentioned is just a special kind of array: an array of characters. And we have the record. This again you'll see a lot, although sometimes under different names. It's a grouping of other types, but unlike the array, where it really should be the same type throughout, the record allows numerous different types, so you could have a number along with a string within the record. And that's fine; the record can handle that properly. Each of the elements in the record is assigned a name. So in that example with a number and a string, maybe the number is an index value, so you call it index, and the string is a name, so you'd refer to it as name.
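The index-and-name example above can be sketched as a record. Python's closest idiom is a dataclass (the `Entry` name here is mine, purely illustrative):

```python
from dataclasses import dataclass

# A record: named elements of differing types, fixed layout.
@dataclass
class Entry:
    index: int   # a number...
    name: str    # ...alongside a string, which an array couldn't mix

e = Entry(index=7, name="widget")
print(e.index, e.name)  # 7 widget
```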
So then the record would have two properties within it, named index and name. These are again very effective at grouping data. It's easily one of the most common complex types because of how useful it is. And in most languages it is a very fixed layout and size. There are some languages that use approaches like prototyping that allow records to actually be modified later on, and it gets confusing, but generally speaking these are a fixed layout and size: once they're defined, they cannot be changed. They're also called a struct or structure. It's the simplest of all structured data. The record name comes from a sort of similarity with the record in a database, but record and struct are all the same thing. And some languages do have special types of records: Ada has the variant record and the tagged record, C has unions. I'm sure there are other examples, but nothing's coming to mind right now. Now we have the set. This is a grouping of values of any type. To some people this is also a type of container; I'm not going to get into that. The container version is sort of a hack representation of what a set should be, but it's still useful in languages that don't have proper implementations of sets. These are very useful in pattern matching and mathematics, for what should be really obvious reasons: set theory is a part of mathematics. So sets are really a lot like types. In a language like Ada, where you have subtypes, you can use the subtypes as constrained sets: Natural, Positive, Negative, Even, and Odd. If you've seen my video on the numerics package, you know these are all defined in there, and you can actually do set tests against them. Something like "if X in Even then", and that's literally a membership test; you don't have to remember exactly how to do modulo division to determine an even number, you just test against that set. So this can also be used with things like character matching. Things like: is this in the set of letters?
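The "X in Even" style of test translates directly to set membership in Python (the `evens` set here is built by hand, a stand-in for Ada's subtype-as-set idea from the talk):

```python
import string

# Membership in a set of even numbers - no modulo math at the call site.
evens = {n for n in range(100) if n % 2 == 0}
print(42 in evens)     # True
print(7 in evens)      # False

# Character classification as set membership.
letters = set(string.ascii_letters)
print("q" in letters)  # True
print("7" in letters)  # False
```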
Is this uppercase, lowercase, a number, a symbol, punctuation? You can define all of those as sets as well. This is often implemented as a subtype of a basic type, although not always; there are some languages with very explicit support for sets, and their definitions will look a little different from a subtype. If the language you're working with doesn't have subtyping or a specific set declaration, that's when you reach for the set container and work around the missing feature. Not every language needs to support these, though; that's important to recognize as well. Next would be dynamic data types. There's a huge number of different dynamic data types, also known as containers, but, oh, I forgot my own video plan: those are actually going to be in a different presentation. It won't be exhaustive by any means either; it'll be pretty limited. I hope this has been helpful. It definitely seems like my presentation videos are among the most widely viewed, and I'm sure that's because, for people trying to get into this, they're far more relevant. There are far more people trying to get into programming than there are people already programming and already Ada developers, yada, yada, yada. But if you found this video helpful, please give me a thumbs up. If you like my videos in general, subscribe; it means a lot more than people realize. Have a good one.