Now, let's review all the qualities of CPUs which concern programmers. The biggest concern, obviously, is the ISA. If we're going to program a CPU, we simply can't start until we know what instructions the CPU accepts and what registers the processor has. The size of a byte on a system is also obviously important, though, as I've previously mentioned, basically all modern systems use 8 bits per byte, so it's not really an issue anymore.

Another concern is what's called the word size. A word is a unit of bits which is the so-called natural size for the processor, meaning it's the size the processor deals with most efficiently. Usually the word size corresponds to the size of a general-purpose register in the processor. So, for example, on a processor with 32-bit general-purpose registers, the word size is usually 32 bits, and so when we copy data in and out of memory, we're usually doing so in chunks of 4 bytes; that's usually the most efficient size of data for that processor to deal with, hence it's the word size.

Also important is the address size, that is, when we specify an address, how many bits do we use? On a system that uses 16 bits per address, that only gives us 65,536 possible addresses, but on a system with 32-bit addresses, that's 4 billion and change, which is a lot, lot more, of course. 4 gigabytes, though, isn't really that big anymore, and so we've moved to systems with more than 32 bits per address. You might assume that the 64-bit x86 processors use 64 bits per address, but in fact they usually implement just 48, because 2 to the 48th is still much, much bigger than 2 to the 32nd, and 2 to the 64th is just overkill. With 48 bits, we effectively get around 280,000 gigabytes of address space, which should serve us for the foreseeable future. (A short sketch after this overview shows how to check these sizes on a real machine.)

You may be wondering, then, why the modern 64-bit processors are called 64-bit processors, and what, in fact, designates a processor as 8-bit, or 32-bit, or 64-bit. Well, in truth, the notion of the bitness of a processor is really kind of a nebulous red-herring concept. When we talk about, say, a 32-bit processor, that generally just implies the processor has 32-bit general registers and also uses 32-bit addresses. There's no law, however, that says the address size in a processor and the register sizes have to be exactly the same. The Intel 64-bit processors, for example, as I just mentioned, use 48-bit addresses while the registers are 64 bits in size, so there's a discrepancy there. In truth, there's simply no hard-and-fast determinant of what officially makes a processor an n-bit machine.

Another important aspect of a processor is how much cache it has, and how fast that cache is. Again, the cache is not something we can explicitly control as programmers, but generally, the more cache and the faster the cache, the faster the processor.

Another important thing to be aware of is whether the system you're programming is big-endian or little-endian, because that affects how data gets read from and written into memory. (The second sketch after this overview shows how to check this at runtime.) A programmer also wants to know how to do I/O on the system, and so they're going to need to know if the system uses ports or memory-mapped I/O.
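To make the word-size and address-size ideas concrete, here's a minimal C sketch, assuming nothing beyond a standard compiler: it prints the sizes of a few types and of a pointer on whatever machine runs it. The exact results depend on the compiler and operating system as much as on the CPU, so treat this as a rough probe, not a definitive answer.

    #include <stdio.h>

    int main(void) {
        /* The pointer size shows how many bits this program uses per address. */
        printf("pointer: %zu bytes (%zu-bit addresses)\n",
               sizeof(void *), sizeof(void *) * 8);
        /* On many Unix-like systems 'long' matches the machine word, but
           that is a convention, not a guarantee. */
        printf("long:    %zu bytes\n", sizeof(long));
        printf("int:     %zu bytes\n", sizeof(int));
        /* The 48-bit address space mentioned above, in decimal gigabytes: */
        printf("2^48 bytes = %llu GB\n", (1ULL << 48) / 1000000000ULL);
        return 0;
    }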
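And here's the classic endianness check referred to above, again just a sketch: we store a value whose bytes we know, then look at which byte sits first in memory.

    #include <stdio.h>

    int main(void) {
        unsigned int x = 1;                     /* bytes: 00 00 00 01 */
        unsigned char *p = (unsigned char *)&x; /* view the raw bytes */
        /* A little-endian machine puts the least significant byte first,
           so the first byte in memory is 1; a big-endian machine puts
           the most significant byte first, so it's 0. */
        printf("%s-endian\n", *p == 1 ? "little" : "big");
        return 0;
    }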
And finally, one last very important aspect of a system: how many processors does it have, and how many so-called cores do those processors have? It's very possible for a system to have multiple CPUs, and when it does, each CPU can be working on its own code independently of the others. So effectively, you can have multiple pieces of code running in parallel, running simultaneously.

Until about 2005 or 2006, PCs with multiple CPUs were quite rare. Virtually every PC had just one processor, and so when the user on that system wanted to run multiple programs at the same time, like, say, have a media player playing in the background while they use the web browser, what was really going on is that the single processor had to switch between processing the media player and the web browser, going back and forth between them many times a second. This provides the illusion to a human user that the programs are running simultaneously, when in fact the processing of each program never happens at the same time, because you only have one CPU, which can only execute one sequence of instructions at a time. The CPU is simply switching back and forth so fast that it gives the illusion they're running at the same time. When a system has multiple CPUs, you really can have those two programs running on different CPUs at precisely the same moment, and as you should expect, that can greatly enhance performance.

The distinction between processors and cores is that when we talk about a processor, we usually mean a CPU and its full die package, the whole ceramic thing with pins on it that we stick into a motherboard. Starting around 2005 or 2006, Intel started selling processors with multiple cores in them, and the idea is that each core in a processor is effectively its own CPU: the multiple cores of a processor can each execute its own piece of code at the same time. Because these cores all live on the same die package, there are advantages in efficiency in how they can communicate with each other. There are also advantages in terms of cost in producing these things, because when you have multiple separate packages, the resulting system ends up being much more elaborate and expensive, with all sorts of other downsides, like, say, each separate processor package having to have its own heat sink and fan on top of it, which just adds to the cost and complexity of the system. It's especially bad if we want to put multiple processors into some small device, like, say, a smartphone; you don't want to put multiple processor dies into such a small package, so it's much better to have multiple cores if you want multiple processors. So today, here in 2010, it's very typical for new PCs to be sold with at minimum two cores; four is becoming more common, and there are some systems with eight. And the expectation for the next couple of decades is that this is primarily how CPUs will get faster: they'll simply have more and more cores.
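To make this parallelism concrete, here's a minimal sketch using POSIX threads; it's just one common way to get two pieces of code that the operating system may schedule on different cores at the same moment. Compile with something like cc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function independently, like two separate programs. */
    static void *count(void *arg) {
        const char *name = arg;
        for (long i = 0; i < 3; i++)
            printf("%s: %ld\n", name, i);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        /* On a multi-core machine these two threads can truly run at once;
           on a single core the OS just switches between them, as described above. */
        pthread_create(&a, NULL, count, "thread A");
        pthread_create(&b, NULL, count, "thread B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }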
Another important question with system design is what happens when the system starts, namely: what does the CPU do first? Where does it get its first instructions? Well, when a CPU goes from power off to power on, the usual arrangement is that the CPU is hardwired to start executing code from a special storage device that retains its contents even without power. Such a device is usually called a ROM, a read-only memory. It's basically just a chip that, unlike RAM, isn't volatile, so it doesn't lose its contents when the power goes off. (The small battery on the motherboard, incidentally, isn't what preserves this code; it keeps alive a separate little memory that holds the system's settings and clock while the computer is off.)

In any case, on the PC platform, this ROM chip is called the BIOS, the basic input/output system. It's just a ROM chip that, most importantly, contains some code which the CPU needs to execute when the system turns on. This code is usually very small and simple, and all it does is direct the CPU to go look at a particular hard drive and read a special first sector of that drive called the master boot record. The master boot record contains some code, which the CPU then executes, and which then gets into the whole process of launching an operating system. So when a system starts, everything proceeds from this boot firmware sitting in the special BIOS chip. The ultimate aim, though, of course, is to run an operating system, and it's through the operating system that we launch our other programs.
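As a small illustration of that last step: the master boot record is simply the first 512-byte sector of the drive, and by convention its last two bytes are the signature 0x55, 0xAA, which is how the firmware recognizes it. Here's a hedged sketch that checks for that signature in a raw disk image; the filename disk.img is just a made-up example.

    #include <stdio.h>

    int main(void) {
        unsigned char sector[512];
        /* "disk.img" is a hypothetical raw dump of a drive's first sector(s). */
        FILE *f = fopen("disk.img", "rb");
        if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
            fprintf(stderr, "couldn't read a full first sector\n");
            return 1;
        }
        fclose(f);
        /* A valid MBR ends with the boot signature bytes 0x55 0xAA. */
        if (sector[510] == 0x55 && sector[511] == 0xAA)
            printf("found an MBR boot signature\n");
        else
            printf("no MBR boot signature\n");
        return 0;
    }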