Welcome, my name is Patrick Edgington, and this series of videos will give you an introduction to how computers are designed and structured. We're going to look at how the simple pieces of a computer are organized to provide the behavior that allows us to solve complex problems. These videos are designed to cover the material from a standard junior-level university course in computer science and computer engineering that covers computer architecture, computer organization, and assembly language programming.

We can think of a computer at several levels of abstraction. Each layer builds upon the layer below it and usually doesn't interact with the deeper layers. This means that at each level, we only need to know a little bit about how the computer works to produce useful solutions to our problems. However, having a greater understanding of the entire structure allows computer programmers and engineers to produce faster and more efficient solutions to the problems they face. It's also just really cool to see how the whole thing works.

On the software side, most students get to this point in their education after having written programs in a couple of different programming languages. Hardware-focused students will also have learned how we can build the individual transistors that make up a computer. One of the primary goals of this course is to show how these disparate topics interact and complement each other.

So we will expect that you're familiar with programming a computer in a high-level language, and there are lots of high-level languages that you may have used. The category includes popular compiled languages such as C++ or Java, scripting languages like Python or JavaScript, as well as problem-focused languages such as Lisp or SQL. For the purposes of this course, we won't focus on any particular language, but we will see how the code from procedural languages like C actually runs on hardware. We will also extend this to support some of the features of object-oriented languages at the end.

On the hardware side, we'll assume you know that we can build digital logic gates from transistors. We're not going to worry about how those logic gates are implemented. Instead, we'll start with the abstraction of logic gates and show how we can build functional hardware using those simple pieces. We will be using four basic logic gates, AND, OR, XOR, and NOT, as the basis of the computer that we'll build in this course.

The goal of this course is to take that knowledge about hardware and software and show how they connect. So we're going to be filling out these intermediate layers of abstraction to see how each one works, how it's built from the underlying layer, and how it contributes to the layer above it.

These videos begin with some required background knowledge and some general topics that are related to multiple areas. Power estimation will give you an idea of how pieces of hardware consume electricity. Execution time, performance, parallelization, and Amdahl's Law combine to give you some really great insight into how to optimize programs and hardware to get the most performance out of them with the least amount of effort. We also need a solid grasp of number systems in order to understand how we can build a computer to do arithmetic.
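As a quick preview of that kind of analysis, here's the usual statement of Amdahl's Law along with a worked example; the specific numbers are made up purely for illustration.

```latex
% Amdahl's Law: the overall speedup when a fraction p of the
% original execution time is improved by a factor of s.
\[
  \text{speedup} = \frac{1}{(1 - p) + \dfrac{p}{s}}
\]
% Illustrative example: if 80% of the run time (p = 0.8) is made
% four times faster (s = 4), the overall program speedup is only
\[
  \frac{1}{(1 - 0.8) + \dfrac{0.8}{4}} = \frac{1}{0.2 + 0.2} = 2.5
\]
```

The untouched 20% quickly comes to dominate the run time, which is exactly why this law tells us where optimization effort pays off and where it's wasted.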
These videos will focus on the MIPS architecture and its corresponding assembly language. MIPS is a popular architecture for undergraduate courses because it's simple enough to cover in one semester and yet still contains all the parts required for a modern computer. The MIPS architecture is still actively maintained and produced, though it's generally overshadowed by the x86 and ARM architectures.

We'll begin with simple arithmetic instructions and show how to read input from and print results to the console (there's a small taste of what that looks like at the end of this introduction). From there, we'll start building all of the structures that we're familiar with: if statements, loops, and functions. Then we'll move on to memory. We'll get to see how to allocate, deallocate, and access memory on both the stack and the heap. We'll also see how we can build complex data structures, including multi-dimensional arrays. Finally, we'll conclude by looking at how floating-point instructions are implemented in the MIPS architecture, and we'll see how we can even implement polymorphism at the assembly language level.

Building up from the bottom, we'll review logic gates and the standard logical equivalences. From there, we'll begin by building a one-bit multiplexer, then extend it to 32 bits. Once we have a multiplexer, we'll use it as part of the arithmetic logic unit that we build. The bulk of the ALU will be simple logic gates, but we'll use the multiplexer to select the information that we're interested in. We'll also look at the basic memory structures, latches and flip-flops, and see how we can assemble logic gates to build a structure that can hold information and allow us to change or retrieve it on demand. In the process, we'll also get to see how the clock cycle affects this memory access. Finally, we'll see how we can assemble some of these parts to perform the more complex arithmetic of multiplication and division.

To look at the MIPS architecture, we'll start by building datapaths that can each implement one assembly language instruction. We'll build a few of these and then, noting the commonalities between them, assemble them into a single implementation. Of course, we'll also want to control what the combined hardware does, so we'll add a number of signals to control how information flows through our hardware and which pieces we use and keep.

While this will produce a working computer, it won't be terribly efficient. Pipelining will give us a way to improve our performance by running multiple instructions concurrently. This will, of course, produce some problems when one instruction requires the result of a previous instruction, but we'll see how we can mitigate or remove those problems.

Finally, we'll look at some complex modern architectures. Tomasulo's algorithm is a popular way to run multiple instructions simultaneously, extending the performance increase we got with pipelining. We'll also look at how our processor can interact with the outside world through a bus, and we'll see the general memory hierarchy. Put together, all of these will show you how we get complex computer programs to run on the simple transistors that we physically build.
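And here's that promised taste of MIPS assembly: a minimal sketch of a complete program that reads an integer from the console, doubles it, and prints the result. Note that the syscall numbers in the comments are the conventions of simulators like SPIM and MARS rather than part of the MIPS architecture itself, and the label names are just illustrative.

```asm
        .data
prompt: .asciiz "Enter a number: "

        .text
        .globl main
main:
        li   $v0, 4           # syscall 4: print the string in $a0
        la   $a0, prompt
        syscall

        li   $v0, 5           # syscall 5: read an integer into $v0
        syscall

        add  $a0, $v0, $v0    # double it with a simple arithmetic instruction
        li   $v0, 1           # syscall 1: print the integer in $a0
        syscall

        li   $v0, 10          # syscall 10: exit the program
        syscall
```

Even this tiny program shows the pattern we'll use throughout the course: load a service number into $v0, set up any arguments, and hand control to the system with syscall.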