Before we look at another practical example of Go code, we're going to cover almost all the remaining language features, except for goroutines and channels, which we'll get to later because they involve concurrency, and that's a whole big topic unto itself. So first off, I did mention that Go has a C-style syntax, meaning that its syntax follows the basic pattern of the C language. Like a lot of other languages, Java, JavaScript, and several other popular mainstream languages today have a C-like syntax that imitates the basic syntax rules of C. And in C, the rule is that most kinds of statements have to end in semicolons. Well, that's actually the case in Go as well: this var statement here does have to have a semicolon, this assignment statement has to have a semicolon, this return statement has to have a semicolon. And then you can also have a semicolon at the end of your for range loop. I don't know if it's required, but it is allowed. And same here at the end of your function: I don't know if a semicolon is required there, but it is allowed. But the reason we don't normally write these in Go is that, as a convenience, the compiler will actually insert them in certain places according to a simple set of rules. The rule is that the compiler looks at the end of any line that ends with an identifier (a name you define in code, like total here, or sum, the name of the function, or a built-in type name like int or bool, which are considered identifiers), or a line ending with a literal (like the number literal 0 here, or a string literal in double quote marks), or a line ending with one of the reserved words break, continue, fallthrough, or return, or the operators ++ or --, or a line ending with a closing parenthesis, closing bracket, or closing curly brace. If a line ends with any one of these things and there's not already a semicolon there, the compiler will insert one implicitly. So Go is actually what we would call a free-form language, because it doesn't really dictate how you format your code. If we make our semicolons explicit, we can do things like this: you could put this statement immediately after the var statement, and they can both be on the same line, because the semicolon denotes the end of the var statement, so another statement can follow it on the same line. You can do this. I don't recommend it, it's not normal Go style, but it is legal.
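Here's a small sketch of what that looks like (this isn't the code from the video; the names are just made up for illustration). The explicit semicolons and the two statements sharing a line are all legal, just not normal Go style:

```go
package main

import "fmt"

func sum(nums []int) int {
	total := 0;          // explicit semicolon: legal, normally omitted
	for _, n := range nums {
		total += n;
	}
	return total;
}

func main() {
	var x int = 1; x = x + 2 // the semicolon ends the var statement, so a second statement fits on the same line
	fmt.Println(sum([]int{1, 2, 3}), x)
}
```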
So, in another video, I discuss how numbers are represented as bits. As I discussed there, when you have integers or floating-point numbers, the range of possible values you can represent depends on how many bits you use to represent the number. If you have a 64-bit integer, you can represent more values than if you had, say, a 16-bit integer, and likewise with floats. And so Go gives you options. There isn't just an int type and a float type. In fact, there's no type called float: there's float32 and float64, where float32 is 32 bits and float64 is 64 bits, and 64 bits, of course, gives you a larger range of possible values. You may sometimes decide that you don't need the full 64 bits and you're trying to save some memory, so you'll go for float32 instead, because you know for a fact that for your purposes 32 bits is adequate. Otherwise, I would say just default to float64; consider it your default choice, and on a modern system you'll generally be fine. Then for integers we have a lot of options: 8-bit, 16-bit, 32-bit, and 64-bit. We also have a choice of signed versus unsigned. The types that begin with u are the unsigned variants: a uint16 is 16 bits, just like an int16, but the range is not split between positive and negative values. You effectively have twice the range of positive values, but you don't have any negative values. That's the distinction there. We also have these types called rune and byte, but they're really just aliases. rune is just an alias for int32, and the reason we have it is that sometimes in code you want to use integers to represent the individual characters of a Unicode string, and in that context we call them runes. So we have this rune type, but it's really just an int32; they're exactly equivalent, and whether you call your int32s runes or not is purely a stylistic thing. Same for byte: byte is just an alias for uint8, an 8-bit unsigned integer. There's no distinction to the compiler.

Then, most perplexing of all, the int and uint types, the types we actually use most commonly, are either 32 bits or 64 bits depending upon the target we're compiling for. If you compile and run your program on a 32-bit platform, your ints and uints will be 32 bits in size; if you compile and run your program on a 64-bit platform, your ints and uints will be 64 bits in size. This is a bit perplexing: why would you not just specify exactly what the size is? Well, on a 32-bit system, the CPU and memory more naturally deal with data in chunks of 32 bits, so with 32-bit integers the code will generally run more efficiently than with 64-bit integers. Likewise, a 64-bit system deals more naturally with integers in chunks of 64 bits, that is, eight bytes, so there it's generally more efficient to work with 64-bit integers. The problem that arises, though, is that with these ints and uints, which are 32 bits on some platforms and 64 bits on others, your code might work correctly on one platform but incorrectly on another. Say you assume your ints are going to be 64 bits in size and you create values that use the full 64-bit range. On a 32-bit platform that's going to create problems, because you don't have the full range there, and so you effectively have a bug: different behavior on different platforms. One solution is to just compile for 32-bit and not worry about 64-bit, or vice versa. But if you want the same code to work on both systems, the solution is to treat your ints and uints as if they were 32 bits and not use the full 64-bit range. That sounds a bit odd, because it seems wasteful: why am I using all these extra bits if I don't need them? Yes, you're using more storage, but again, as I explained, a 64-bit system deals more naturally with 64-bit integers, so even though you're effectively wasting bits you never use, it will in general still make the code more efficient. So in practice, when it comes to floats, I would just default to float64 until you have a good reason to use float32, like maybe you're storing a whole bunch of floats in a big array and you know they don't need to be 64-bit, that they can all be 32-bit, so maybe in that case you go with float32s.
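A small sketch of those type choices (again, not the code from the video; the variable names are just for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Two float types only: float32 and float64 (there is no plain "float").
	var ratio32 float32 = 4.7
	var ratio64 float64 = 4.7

	// Unsigned types give up negative values for twice the positive range.
	var maxSigned int16 = 32767    // largest int16
	var maxUnsigned uint16 = 65535 // largest uint16

	// rune is just another name for int32, and byte for uint8,
	// so no conversion is needed between an alias and its underlying type.
	var r rune = 'A'
	var i int32 = r
	var b byte = 0xFF
	var u uint8 = b

	fmt.Println(ratio32, ratio64, maxSigned, maxUnsigned, r, i, b, u)

	// int and uint are 32 or 64 bits wide depending on the target platform.
	fmt.Println("int is", strconv.IntSize, "bits on this platform")
}
```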
When it comes to integers, well, we use bytes quite often because we're dealing with binary data that we read from files and so forth, so you'll be using byte quite often. Otherwise, almost always just use int. Sometimes, in a case where you know for a fact that you don't need negative numbers, you might go for uint. The other types only really come into play in specific cases where you happen to know that, for this purpose, you're storing a lot of data, say a big array of integers, and you know they don't need to be more than 16 bits in size, so you make them all int16s. Outside those niche cases: when in doubt, make them ints; make them bytes when you know you're dealing with bytes; and all the others only arise in basically niche circumstances.

In Go Pigeon, when we write number literals, literals that are integers, like 500 here, are considered to be integer values rather than floating-point, and values with a decimal point in them are considered to be floating-point. So 4.7 here will be a floating-point value, not an integer. In actual Go, in real Go, number literals aren't considered to have any type. They're called constants, and they're not considered to be typed. So 500 here, for example, is not specifically an int; it's not an int8, it's not an int16 or an int32 or any of those. It's just the number itself. But the compiler does know that here, for x, which we defined to be a uint16, 500 is a valid value, and so it knows this is a valid assignment. Likewise, here for an int8 we can assign the value -60, because that is a valid value in the range of an int8. And likewise, when we assign 4.7 to this float32 variable, that again is a valid value, so the compiler says this is all OK. However, here, when we assign x the value one million, well, x is defined to be a uint16, and the max value for a uint16 is 65,535, so obviously one million is out of that range and not valid. The compiler looks at this and says: that's invalid, that's not a valid uint16 value. Likewise -200 here for y, an int8: the smallest value of an int8 is -128, so this is again outside the range. And if we write -50 times 4, that's an operation on two constants, so the compiler computes it at compile time, gets -200, and again knows that -200 is not a valid value to assign to y, because y is an int8 and -200 is outside its range. To y, of course, we also can't assign 4.7, because 4.7 is not an integer value, so that too is illegal. But then here, when we assign z the value 9, well, 9 doesn't look like a floating-point number, it's an integer, but it's in the range of a float32, so it's still valid. We can also use e notation to express an exponent, so this is actually 9.2 times 10 to the 4th, but it's all one number constant, and again the compiler knows that's valid, that's within the range of z. But then here, the compiler knows that for a float32 an exponent of 50 goes outside its range, so this isn't valid. And lastly, 9.324987598 and so on: this actually is not a value that can be represented fully accurately in a float32. The significand simply has more precision than a float32 can hold. But the compiler won't object to this; it'll just approximate, rounding it to something that can be represented as a float32. I don't know exactly what that rounded value would be, but it'll be something with about that much precision, I would guess.
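A sketch of those checks (not the video's code): the valid lines compile, and the invalid ones are commented out because the compiler would reject them.

```go
package main

import "fmt"

func main() {
	var x uint16 = 500          // 500 is an untyped constant that fits in uint16
	var y int8 = -60            // -60 fits in int8
	var z float32 = 4.7         // 4.7 fits in float32
	var w float32 = 9           // an integer constant is fine for a float type
	var e float32 = 9.2e4       // e notation: 9.2 times 10 to the 4th
	var p float32 = 9.324987598 // more precision than float32 holds: compiles, but gets rounded

	// Each of these would be rejected at compile time:
	// var x2 uint16 = 1000000  // out of range for uint16 (max 65535)
	// var y2 int8 = -200       // out of range for int8 (min -128)
	// var y3 int8 = -50 * 4    // constant expression evaluates to -200, still out of range
	// var y4 int8 = 4.7        // not an integer value
	// var e2 float32 = 9.2e50  // exponent too large for float32

	fmt.Println(x, y, z, w, e, p)
}
```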
So again, number literals are constants and they don't have a type, but the compiler does check whether they're valid for the type that's expected. With all these different number types, very often what we need to do is convert values of one number type to another number type, and we can do that as we do here. We have this uint16 variable x, which has the initial value 500, and the int8 variable y, which has the initial value -60. First off, what we can't do is just assign x to y, because they're different types as far as the compiler is concerned. Yes, they're both numbers, they're both integers, but they're different kinds of integers, so the compiler doesn't like this assignment. If we want to assign to y, what we need is an int8 value, so we need to take x, our uint16, and make it an int8 value. We do so with this syntax: you write the type name like it's a function, and then you provide the value you want to convert into an int8. We want to convert x, to get its equivalent as an int8. We're not actually modifying x, of course; x stays as it is, but we're getting its int8 equivalent. Now, a uint16 has more bits than an int8, twice as many, and you of course can't fully accurately represent a 16-bit value with only 8 bits; we're going to lose information. So what happens here is that we simply truncate the highest 8 bits. The value 500 as a uint16 looks like this in bits, 0000 0001 1111 0100, and we end up cutting off, just truncating, those highest 8 bits, leaving just the lowest byte. And what those 8 bits are as an int8, as a signed integer value, is just the value -12. So strangely, when we converted the value 500 from a uint16 into an int8, we didn't get the value 500, because you can't represent 500 as an int8. You just can't have that; it doesn't make any sense. But we decided we wanted to do it anyway, and what we ended up with is -12. Going in the other direction, if we take our int8 value y, which now holds -12 after that assignment, and get its uint16 equivalent, what happens is that the 8 bits get padded out to 16 bits. The highest bit of y's current value is a one, so the whole new upper byte gets filled with ones, and this value, represented as a uint16, is 65,524. So in these two cases, when we converted from a uint16 value to an int8 and vice versa, the results we got admittedly are probably not very useful, because the result bears no obvious relationship to the original value. There's so much distortion that it's seemingly unrelated to the original value, so it's kind of questionable whether the conversion makes any sense in this case. But in the same scenario, if say x started with a value that's in range of both an int8 and a uint16, as it does here with 20, well, now when we get the int8 equivalent of x, we're getting the value 20, just expressed as a different type. What happens here is that we're truncating the top 8 bits, and here's what 20 looks like: hack off the top 8 bits and you're left with the lower bits, which, expressed as an int8, are still 20. It's the same value, just in a different form. And if we then take the value of y, which is now 20, and convert it to uint16, we're padding it out to 16 bits by extending the highest bit, which is zero, so we just get back the 16-bit value we started with; it's still 20.
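Both scenarios, sketched in code (not the code from the video):

```go
package main

import "fmt"

func main() {
	var x uint16 = 500
	var y int8 = -60

	// y = x            // compile error: uint16 and int8 are different types
	y = int8(x)         // keeps only the low 8 bits: 500 is 0000_0001_1111_0100, so we keep 1111_0100
	fmt.Println(y)      // -12

	back := uint16(y)   // pads back out to 16 bits by extending the high bit (a 1), so the upper byte is all ones
	fmt.Println(back)   // 65524

	// When the value is in range of both types, nothing is distorted.
	x = 20
	y = int8(x)
	fmt.Println(y)         // 20
	fmt.Println(uint16(y)) // 20
}
```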
So now we have a conversion between types that seems meaningful and useful, because we happen to know the value is within range of the target type, and so we can do a non-distorting conversion. The distorting kind of conversion, I'm sure there are niche circumstances where it's useful, though I can't think of one off the top of my head, but it's generally not meaningful or useful.

Here we have a variable x, which is a uint8, a byte value, with the initial value 244. When you do arithmetic with a typed value and a constant, which has no type, the result of the operation has the type of the typed value. So because we're adding a constant to a uint8, the result is a uint8 value: we add 244 and 6, and we get back, as a uint8 value, 250. And of course, here we can just use the convenience syntax to make this more compact; it's the same thing as adding 6 to x and assigning the result back to x. The question arises, though, of what happens when we do an arithmetic operation and the mathematically correct result doesn't fit in the range of the result type. So here x again is a uint8, a byte value, and we add 6 to it. Mathematically we should get the result 256, right? But 256 is outside the range of a uint8; the max value for a uint8 is 255, and we're one above it. So what we actually get back here is the value 0. We get overflow, where the value rolls over from the top of the range back down to the bottom and proceeds from there: where you should get 256, you actually get 0. And here, when we add x and 10, instead of getting 260 we get 4, because for every value beyond 255 we roll back to 0 and count up from there. That's called overflow. And then we also have what's called underflow. Let's say x is 0 now: we assign it the value 0 and we subtract 1. Well, it rolls back from the minimum value around to the highest value, 255. And if we subtract 6, then we go from 0 down to 255, then 254, 253, 252, 251, and then 250. Now, this behavior may seem wrong, and of course mathematically it is wrong, but the hardware operates on numbers represented with a fixed number of bits, and this is just baked into that cake: overflow and underflow are a natural consequence. The CPU actually does detect when overflow and underflow occur, and it sets a little flag that you can test in machine instructions. In Go, you can instead just apply the logic of: I had two positive numbers, I added them together, and I got something smaller than either of them, so overflow must have happened. You can try to detect overflow that way. There are various ways to account for this problem, but generally we don't really need to. Generally we're dealing with integers that are sufficiently sized that it's going to be very rare for our arithmetic to overflow the size of integer we're using. But it can happen, and you need to account for it: if you don't properly account for the possibility of overflow and underflow, then yeah, you're going to have a bug.
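A sketch of that wraparound behavior, plus one way to detect it after the fact (not the video's code):

```go
package main

import "fmt"

func main() {
	var x uint8 = 244
	x = x + 6           // the untyped constant 6 takes on x's type; 244 + 6 = 250
	fmt.Println(x)      // 250

	x += 6              // mathematically 256, but that overflows uint8, so it wraps to 0
	fmt.Println(x)      // 0

	x = 250
	fmt.Println(x + 10) // 4: wraps past 255 and keeps counting from 0

	x = 0
	x -= 6              // underflow: wraps from 0 back around to 250
	fmt.Println(x)      // 250

	// One way to notice unsigned overflow after the fact: the sum of two
	// values should never be smaller than either operand.
	a, b := uint8(200), uint8(100)
	sum := a + b
	if sum < a {
		fmt.Println("overflow: 200 + 100 wrapped to", sum) // 44
	}
}
```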
There are, though, cases in code where we do need arbitrary-precision arithmetic: we need to be able to add or multiply numbers of any size and get mathematically accurate results. For that purpose, Go's standard library has arbitrary-precision number types (in the math/big package) which we can use. They do all that business for us of accounting for overflow and underflow and using as many bits as necessary to store the results, so the results are mathematically accurate. But it turns out that most code deals with integer numbers that are very comfortably within the range of a 32-bit or 64-bit integer. Yes, there are cases where overflow and underflow sneak up on us when we didn't anticipate them, but they're relatively rare. For most purposes we get by, even though our computers don't actually do arithmetic with full mathematical precision in every case.
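A minimal sketch of those arbitrary-precision integers from math/big:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// 2^100 is far beyond any fixed-size integer type, but big.Int
	// simply grows to however many bits it needs.
	x := new(big.Int).Exp(big.NewInt(2), big.NewInt(100), nil)
	y := new(big.Int).Add(x, big.NewInt(1))
	fmt.Println(y) // 1267650600228229401496703205377
}
```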