To begin our discussion, even before we get into the whole world of coding, we want to start by looking at the idea of a programming language. If we think about what I'm doing right now, I'm talking to you with words and they mean something to you. Well, that's no different than what we do with the computer. We write a sort of dialogue to make the computer understand what's going on, because here's where things get a little funky. You see, there's people, and people can understand each other even when I don't talk in complete sentences and things like that. But then we've got the computer, and while you might think it's incredibly smart, the reality is the computer is very dumb. It only operates in ones and zeros. And so that's where we start to get into this idea of data. When we think about how the computer operates, there are only two things the computer thinks about: zeros and ones, binary data. Just to see this in action: this is an Arduino with a little LED light attached to it. And right now, nothing's going on with it, because there is no power. There's no electricity going into this little circuit that then feeds into this little LED. But if I come in and take my little plug and plug it in, it turns on, and it turns off, and it turns on, and it turns off. And this is all this application is meant to do. But the entire idea I want you to think about is that what's really happening is electricity being routed to the LED. So when we think about our binary digits of zero and one, it's basically: one is electricity, and zero is no electricity. And that's it. That's all we're doing when we think about the computer. And really, what we're looking at is an accumulation of these electricity / no-electricity states.
And that's where we get into all of those big fancy $5 computer terms that you hear everywhere, like a kilobyte is just a thousand bytes. If we think about this for a second: take eight of those ones and zeros together, and we now create a byte. If I then have a thousand bytes, or 8,000 bits, that's where I get my kilobyte. If I have a thousand of those, that's how I get my megabyte. A thousand of those, my gigabyte. A thousand of those, my terabyte. And guess what? A thousand of those is one petabyte, as we're now entering the world where petabytes are beginning to exist. And there's more after that, but the point is, it all starts with a single question: do I have electricity or not? A way to think about this, just like I was showing you with the Arduino: when I operate an LED light, really I'm asking a simple question. What are the possible states that that LED light can be operating in? Well, if there's no electricity, then it is off. The light doesn't turn on, so that's technically one state, when the light is off, and we can consider that the zero. But what happens if I do give it electricity? Oh, well, the light is on, and therefore we can count that as the one. That tells us we have two possible states: one when the light is off, one when the light is on. So what happens if I add a second LED light to this? Well, now we're thinking about the different states that two lights could operate within. And the first one, as we can clearly see, is both of them off. That is a state. But what happens if one of the lights turns on while the other one stays off? So in this case, our left light is off, our right light is on. That's technically a second state. And we could do the exact opposite: the left light is on, and the right light is off.
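The unit ladder above can be sketched in a few lines of Python. Note the lecture uses decimal prefixes (multiplying by 1,000 each step); some contexts use 1,024 instead:

```python
BITS_PER_BYTE = 8

# Decimal (SI) units, as in the lecture; each step is a factor of 1000
units = ["kilobyte", "megabyte", "gigabyte", "terabyte", "petabyte"]

size_in_bytes = 1000  # start at one kilobyte
for name in units:
    print(f"1 {name} = {size_in_bytes} bytes = {size_in_bytes * BITS_PER_BYTE} bits")
    size_in_bytes *= 1000
```

The first line printed confirms the arithmetic in the lecture: a kilobyte is 1,000 bytes, or 8,000 bits.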
That is not the same state. That's a different state entirely. And as you can guess from the process of elimination, the last state, the fourth state, would be when both lights are on. OK, we're working with two light bulbs. But that's where we start to think about what happens if I continue to expand this. What happens if, instead of dealing with two lights, we look at a single byte? A single byte, we said, was eight bits. And what we're really saying is: if I have eight LEDs, and they can all be on or off in some combination, that means a single byte can represent 256 possible combinations. And it is through these combinations that we start to get what we think about as data. Another way to think about this is how we start to represent this data on our computer, which has more structure to it than just a circuit board routing electricity to my LED. A decision had to get made to make that LED turn on. And the way we can think about this is something known as the basic machine architecture. These symbols or graphics you can see, they're really just those hardware pieces that we know and love, or have at best seen but never touch. Memory, for example: this is an old stick, but it is still memory. And then the central brain of the computer, the processor. This is, again, an old processor, but this thing right here is the brain making decisions when we're working with our computer. The way to think about it is, again, memory: memory sticks, hard drives, CD-ROMs, all of those different things. They're just storing some kind of information. Maybe, for example, in this little square, I am storing the representation of a five. We'll talk about that in a little bit, but the entire idea is, again, I'm storing this five in memory, just like in my brain. My processor is going to want to do something with that five.
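The state-counting idea sketches nicely in Python: `itertools.product` enumerates every on/off combination for a given number of lights, and the count is always 2 raised to that number:

```python
from itertools import product

# Every state two LEDs can be in (0 = off, 1 = on)
two_lights = list(product([0, 1], repeat=2))
print(two_lights)       # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len(two_lights))  # 4 states

# A byte is eight bits, so eight "lights":
print(2 ** 8)           # 256 possible combinations
```

Swap `repeat=2` for `repeat=8` and you get all 256 combinations a single byte can hold.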
Maybe, for example, I have a two in this second slot. And what I want to do is get the value of five times two and store it in this square. At a very basic level, here is how the computer is going to operate. My processor has a certain part of it that we like to call the control logic. Again, I want to make the decision of five times two. All right, well, there's a lot to unpack there if we think about it. I need to first know what this first value is, I need to know what this second value is, and then I need to know what this operation is. Each one of these blocks is an instruction that the computer needs to know how to do. So in this case, the first one is: oh, well, I need to get the five. So my central processing unit goes up to the memory and says, let me look in this memory slot that I already know of and grab the first value. Oh, it was five. Well, I'm going to take that and pass it into something known as the arithmetic logic unit, the ALU. Now, we won't find ourselves needing to know these commands here, but these are effectively the circuit-level commands that our CPU is going to perform with that five. And in this case, we're just going to say: I want to take that five, not do anything with it yet, and load it into memory. So there's a five just sitting in memory right now. Well, then what? Then the processor just goes to the next step, repeating the cycle. I went and put that five here, and now I look for my next step. Technically, it would be the multiplying by the two, just to shorthand this a little bit. So it grabs the value of the two, and then inside the processor, it does the fancy math to produce, in this case, our 10. And if we wanted to store that 10, this is where the ALU would say: oh, take you and just put you here in memory.
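The fetch, compute, store cycle described above can be sketched in Python, with a plain list standing in for memory. The slot numbers here are made up purely for illustration:

```python
# Toy model: a list standing in for memory slots
memory = [5, 2, None]   # slot 0 holds 5, slot 1 holds 2, slot 2 is empty

a = memory[0]           # fetch the first value
b = memory[1]           # fetch the second value
result = a * b          # the "ALU" does the multiplication
memory[2] = result      # store the result back into memory

print(memory)           # [5, 2, 10]
```

A real CPU works with registers and machine instructions rather than Python lists, but the rhythm is the same: fetch the operands, compute, store the result.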
And so it's doing this over and over and over again with any kind of number, any number of ones and zeros. In fact, we don't even think about everything as just numbers. That's where we start to get into things like letters. All those ones and zeros, they get added together, they get subtracted, they get messed around with, but instead of representing numbers, they're also able to represent letters. And so if you go look up an ASCII binary conversion table online, you will see that 01001000 represents, and everyone has just accepted this, a capital H. And 01100101 represents a lowercase e. In fact, if you're just looking for something to randomly do in your free time, go look up the Unicode table at Unicode-table.com. You're going to see that they're still adding in more symbols and deciding what the binary representations mean for those symbols. Something as simple as the poop emoji has to have a binary representation, and there are people actually discussing poop emojis and how they fit into binary. Either way, it is through this representation of data that we now get into the idea of a programming language. And realistically, the best way I can think to describe a programming language is that it's no different than how we have different languages for different countries. Yes, I'm using the American flag here, but it's mostly because we're using Python. That's not saying Python is American or anything like that; the idea is that Python is our English, but there are other languages out there such as Java. And if we look at English to Spanish for just a second, there are many things that are similar and different between the two languages. Not everything is going to be the same word.
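You can check these ASCII codes yourself in Python: `ord()` gives the agreed-upon number for a character, and `format(..., "08b")` shows that number as eight bits:

```python
# Print each character, its agreed-upon code, and its eight bits
for ch in "He":
    print(ch, ord(ch), format(ord(ch), "08b"))
# H 72 01001000
# e 101 01100101
```

Going the other direction, `chr(72)` gives back `"H"`, which is exactly the "everyone has accepted this" convention the table encodes.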
Hello, Hola: two different words, but the meaning behind each one is the same. Very similar to how, say, I need to represent the number five. That's not how you say five in Spanish. You say cinco, but we still know that five means five. It means this many fingers are standing up in the air. And so it is through these different ways of talking to the computer that we take that code, as we call it, and run it through some compiler or interpreter. What that is going to do is produce those machine codes that tell the computer to move something from a memory slot, or literally do multiplication, or any of that stuff. It's all through our programming language. And so that's where we start to get into: we're effectively learning one language to do this task. You could be doing it in Java, you could be doing it in JavaScript or Haskell. There's one literally called Piet, where you draw pictures, and that is a programming language. Either way, very similar to this idea that five is five no matter what language you're referring to, this gets us into different kinds of knowledge. And this is much more of a philosophical kind of statement, but the idea is that we have two types: declarative knowledge and imperative knowledge. Declarative knowledge, just like the slide says, is statements of fact. When I say that x equals five, I'm making a declarative statement of fact. Five is x, period. There is no question about it. Imperative knowledge, though, is when we do something with that x. In our case, say, for example, the area of a circle. Oh, now we're talking about a formula. Well, the area of a circle is pi r squared. So just to go off of this for a second, if we made, for example, a variable r and we said that r was 10, again, that is a declarative statement.
When we create this algorithm of area is pi r squared, we're not saying the area will always be 3.14 times 10 squared, or about 314. We're instead saying it is simply the mathematical operation, the formula, if you will, of how to calculate the area of any circle. And so we start to build out these different approaches. In this class, we're obviously going to learn declarative knowledge. You have to learn declarative knowledge. But what we're really trying to figure out is how to write algorithms, how to write these different steps, so that we can appropriately compute the area of a circle. The area of a circle is super simple; someone in math centuries ago figured out that algorithm, and we're just trying to figure out new algorithms all the time. So as a quick rule of thumb, don't think about it in exact code, but in pseudocode, asking yourself: what are the things I want to do? What do I want to get out of this? Because the reality is, what we're doing is baking a cake. A crazy kind of concept going on here, but baking is a science. I have to measure all of my ingredients to a T, and if I don't, I get this weird concoction. But more specifically, just like with baking and cooking, there is a set of steps that have to be followed in a particular order. Let's think again about this idea of cake. I have a step here that says put the beaten eggs into a mixing bowl. Well, if I skip this step and instead put the beaten eggs straight in the oven, that doesn't make a cake. That makes a frittata, or an omelette, or cloud eggs, or whatever. Basically, I did my steps in the wrong order.
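The declarative/imperative split shows up directly in Python: assigning `r` is a declarative statement of fact, while the formula is the imperative recipe that works for any `r`:

```python
import math

r = 10                    # declarative: r is 10, period
area = math.pi * r ** 2   # imperative: the recipe, pi times r squared
print(area)               # about 314.16, not 31.4
```

Change `r` to any other value and the same line of imperative knowledge still computes the right area; that reusability is exactly why we write the formula instead of the answer.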
And so again, this is the idea of making sure that we can build our own recipes. Now, the big thing about this is they are going to start very small, very basic, very primitive. But the idea is that we build on them over time. If we're thinking about it like cake, we'll start with just the most basic single layer: no icing, no frosting, no chocolate, none of that craziness. We'll just have a basic vanilla sheet cake, very boring. But it serves as our foundation, so that I can then start to try different things. Maybe I cream the butter with the sugar, and then I temper the eggs with some hot milk. These are new steps that we build on top of our foundation. So when we start to think about the programming language and its primitive constructs, the first thing we have to understand is: what are the primitive constructs in Python? And it's very simple, again, it's primitive, but it's something like the number 10. Very simple for the computer to understand; the binary representation for 10 is known and well understood. Then we start to get into those weird, crazy things we call strings. Well, if we remember from the earlier slide, letters are just a combination of binary ones and zeros, each stored sequentially. So they're very similar to numbers in that light. We know what they can do. And then finally, our simple operations. When I give you something like the asterisk or the plus sign or the minus sign, each one of those is very simple: I'm taking the ones and zeros of this number and the ones and zeros of that number, and I'm merging them together. And that is what forms Python's syntax. Now is where things get interesting, because, well, we have to have valid syntax, right? Ground control to Major Tom.
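Those primitive constructs look like this in Python: a number, a string, and the simple operations that combine them:

```python
print(10)         # a number, a primitive the computer knows directly
print("Hello")    # a string: characters stored sequentially
print(3 + 4)      # addition: 7
print(7 - 2)      # subtraction: 5
print(3 * 4)      # multiplication: 12
```

Everything we build in this course is ultimately layered on top of these few primitives, the same way the fancy cake is layered on the plain sheet cake.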
Great song by David Bowie, but we can all look at that sentence, see the letters, the spaces, and all the characteristics of it, and understand that, yes, that is in fact valid English. But if I came in and started using some crazy symbols that are not part of the English alphabet, we could once again look at that and say: that is not English. That is not a valid English word. And the same thing happens inside of Python. The asterisk is how Python understands what multiplication is. But think about every single instance of math that you've learned. When you learned multiplication, you wrote something like 2 x 2. Then in algebra you couldn't use the x anymore, because x meant something, and that's where the dot came in, as in 3 · 4. But unfortunately, if you look at your keyboard, what's the difference between a dot and a decimal point? So we don't use the dot at all. We use the asterisk. That is the symbol we've all just accepted is going to be the multiplication symbol we work off of when we're dealing with multiplication in a programming setting. Now, once we know which symbols are valid syntax, we get into the idea of static semantics. And the issue there is making sure that we use the symbols in the right way. If I say "I am Spartacus," we can all agree that that is a correct English sentence. It is both syntactically correct, all those words are made of English letters, and semantically correct, I'm using the correct wording in the right spots. If I said something like "I are Spartacus" or "I are baboon," that is syntactically valid, those are English words, but we know that that is not the correct way to talk.
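You can watch Python enforce its syntax rules directly. The asterisk works, while the "math class" x notation doesn't even compile; `compile()` is used here just so we can catch the error instead of crashing:

```python
print(3 * 4)  # 12 -- the asterisk is the accepted multiplication symbol

# "3 x 4" is not valid Python syntax; trying to compile it raises SyntaxError
try:
    compile("3 x 4", "<example>", "eval")
except SyntaxError:
    print("'3 x 4' is not valid Python syntax")
```

Note that Python rejects this before running anything at all, the same way you can spot a non-English word without needing to know what the sentence was trying to say.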
And in the same kind of vein, if I'm doing something like 1.8 divided by a string, well, that's not semantically valid. These are syntactically valid Python components: decimals, the slash for division, "cat" in string format with quotes. All of that works perfectly fine on its own. But the problem is, you can't divide by cats in this world, because, again, that is not semantically valid. And this is where, when we think about the idea of programming: in English, or in informal languages generally, English, Spanish, things like that, a statement can be ambiguous. "Python is so awesome." I just said it as a flat statement of fact; there was no emotion to it. But I could have also said "Python is SO awesome!", and because you heard the rise in my voice, you could tell that was more of an enthusiastic statement, that I really enjoy Python. Or again, you're the student, you don't know why you have to be learning this, and you say "Python is sooo awesome," dropping your voice and dragging out the word awesome, so now it's a sarcastic tone. All of that can happen in English, and that can happen in spoken language. In Python, though, it can't. In fact, there's only one representation for this statement. We have something called a variable, and the only thing it represents is the sequence of characters that spell out the statement. That's it. It does not try to make an interpretation of what the word awesome means. It doesn't make an interpretation of what the word Python means. It doesn't care what Python means. It just knows: hey, that is a value we're going to be working off of, and it is storing a-w-e-s-o-m-e. And that is it, end of statement.
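Both points can be seen in a few lines. Every piece of `1.8 / "cat"` is syntactically fine, but Python rejects the combination with a TypeError because it's semantically meaningless, and a string variable just stores characters with no tone or interpretation attached:

```python
# Syntactically valid pieces, semantically meaningless together
try:
    1.8 / "cat"
except TypeError as err:
    print("TypeError:", err)

# A variable just stores the sequence of characters, nothing more
statement = "Python is so awesome"
print(statement)  # no enthusiasm, no sarcasm -- just the characters
```

Unlike the syntax error earlier, this one only surfaces when the line actually runs, which is the difference between Python checking its grammar and checking whether the operation makes sense.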