Some of my nerd friends ask me what my favorite mathematical operation is, and I keep on coming back to powers, times after times after times. If you're one of the many people who suffers from some sort of math anxiety, this may bring back some bad memories, but let's review some basic-ass arithmetic: multiplication.

To multiply two numbers, like 63 and 42, you start by stacking the numbers on top of each other. Start at the lower rightmost digit and multiply it by each numeral of the top number, from right to left. Two times three is six. Two times six is 12. We then drop down to the line below, add a zero over here, and do the same process. Four times three is 12, which means we have two digits in our answer, so we write down the two and carry the one. Four times six is 24; add the carried one and we get 25. Add everything up down here, 126 plus 2520, and we get our answer: 2,646.

That's probably how you learned to multiply numbers too big to do in your head, but unless you had a pretty good teacher, you learned it by rote. Nobody explained what you're actually doing with this method. You just learned to follow the steps and get the answer, all while your teacher assured you that you wouldn't have a calculator on your person at all times. No wonder so many people hate math. But this algorithm isn't the only way to multiply numbers together.

In the US, the Common Core educational standard has been criticized for political and social reasons, but also because many parents don't like that their kids are learning weird new techniques for multiplication. In the Common Core method, you take the numbers you're trying to multiply and break them into large chunks that are easy to work with: 63 becomes 60 and 3, 42 becomes 40 and 2. Draw a small table and lay the number chunks into it like this. Then fill in the result in each cell of the table. 60 times 40 is 2400. 60 times 2 is 120. 40 times 3 is 120. 2 times 3 is 6. Add across, and across, and then down: 2520 plus 126 is 2646.

That's cool, but it's also illustrative of one way to think about multiplying numbers: area. If you have a cornfield that's 63 meters by 42 meters, you can divide it up exactly how this method illustrates: 2400 square meters plus 120 square meters plus 120 square meters plus 6 square meters. I've been using the stacked method my whole life, but I have to say, that's very intuitive, albeit a little slower with all the line drawing.

But those aren't the only options. There are all sorts of algorithms for multiplying numbers, many you've probably never heard of. For example, there's a method that was discovered in ancient Egypt that's pretty nifty, now referred to as peasant multiplication, which only requires doubling and halving numbers. Check this out. I'll write 42 here and 63 here. Now I'm going to move a factor of 2 from 42 over to 63. 42 divided by 2 is 21; 63 times 2 is 126. Let's do it again. 21 divided by 2 is, well, it's 10.5, but let's just round down and call it 10; 126 times 2 is 252. 10 over 2 is 5; 252 times 2 is 504. 5 over 2 is 2.5, round down to 2; 504 times 2 is 1008. Half of 2 is 1, and twice 1008 is 2016. Now, this bit's important: I'm going to go through and cross out each line where the number on the left-hand side was even. That's 42 gone, 10 gone, 2 gone. Now add up everything on the right side that isn't crossed out: 126 plus 504 plus 2016 is, well, I'll be damned, 2646.
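If you'd rather see those last two tricks spelled out in code, here's a minimal Python sketch of the box method and peasant multiplication. The function names are mine, and the place-value chunking is just one reasonable way to do it:

```python
def box_multiply(x, y):
    """Common Core 'box' method: split each number into place-value
    chunks (63 -> 60 + 3), multiply every pair of chunks, add them up."""
    chunks_x = [int(d) * 10**i for i, d in enumerate(reversed(str(x)))]
    chunks_y = [int(d) * 10**i for i, d in enumerate(reversed(str(y)))]
    return sum(cx * cy for cx in chunks_x for cy in chunks_y)


def peasant_multiply(a, b):
    """Egyptian/peasant multiplication: halve a (rounding down), double b,
    then add up the b values from the rows where a was odd."""
    total = 0
    while a >= 1:
        if a % 2 == 1:   # odd rows survive the crossing-out step
            total += b
        a //= 2          # halve, dropping any remainder
        b *= 2           # double
    return total


print(box_multiply(63, 42))      # 2646
print(peasant_multiply(42, 63))  # 2646, crossing out the 42, 10, and 2 rows
```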
It's nice to only deal with 2s, but that's ungodly slow for larger numbers, certainly much slower than the box method or the stack algorithm. I guess learning your multiplication tables higher than 2 and being better educated than ancient Egyptian peasants has its perks. But that raises an interesting question: what's the fastest way to multiply two numbers? Surely there's some technique with the fewest steps possible that will get you the answer quicker than any other method. Well, mathematicians and computer scientists have been working on that problem for a while now, and a recent paper by David Harvey and Joris van der Hoeven has a categorical answer. But odds are you're not going to be using it to figure out how much to tip anytime soon.

In computer science, big O notation is a way of expressing the theoretical limits of how long it will take a program to calculate something. Unless you code like I do, you can feed small numbers into almost any program and get answers instantly, but as those numbers get bigger and bigger, the computer has to work harder and harder to work everything out. An expression like big O of n squared means that the time the program takes to calculate an answer grows proportionally to the square of the size of the input. So if you feed it numbers that are twice as big, it'll take four times as long to spit out a solution.

That's actually the complexity of the grade school stacked numbers algorithm we started out with: big O of n squared, where n is the number of digits that you're working with. If you sit down and count everything out, you can prove for yourself that the number of operations grows roughly as the square of the number of digits. Five times 66? Three operations. 55 times 66? Nine operations. That's not great when you start working with larger numbers or numbers with more decimal places. You can probably come up with a more exact formula to define precisely how many steps you'd need in each case, but big O notation assumes that the numbers being processed are going to get almost infinitely large, so the algorithm's speed will end up being dominated by whatever term is slowing it down the most.

For a long time, the stacked numbers grade school algorithm was the fastest known multiplication method. Mathematicians theorized that it might be optimal, as good as we were ever going to get, until the mid-20th century. Just a week after he'd heard that it might be impossible to multiply faster than n squared, Anatoly Karatsuba came up with a technique that was faster: big O of n to the 1.585th power (that exponent is log base 2 of 3). In a classic divide-and-conquer approach, Karatsuba decomposed the numbers into smaller chunks and replaced some of the multiplication steps with adding and subtracting, which are comparatively cheap from a computation perspective. For 63 times 42, it looks a little like this, which may look like a lot of work, but when you're dealing with numbers that are around a thousand digits long, this method is 17 times faster than the one that you learned in school.

Karatsuba's method was incrementally improved over the next few decades, until the next big breakthrough in 1971, when two German researchers, Schönhage and Strassen, figured out a way to multiply even faster. The fast Fourier transform is a reversible operation you can do to polynomials, expressions like x squared plus 2x plus 3.
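Since the video only shows Karatsuba's trick on a whiteboard, here's a bare-bones Python sketch of the same idea. It works on Python integers rather than digit arrays, which keeps it readable but hides some bookkeeping a serious implementation would need:

```python
def karatsuba(x, y):
    """Multiply x and y using three recursive multiplications on half-size
    numbers instead of four, for O(n^1.585) instead of O(n^2)."""
    if x < 10 or y < 10:           # single digit: just multiply directly
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10**half)     # x = a * 10**half + b
    c, d = divmod(y, 10**half)     # y = c * 10**half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # The trick: (a + b)(c + d) - ac - bd = ad + bc, so one recursive
    # multiplication plus some cheap additions stands in for two.
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * 10**(2 * half) + ad_plus_bc * 10**half + bd


print(karatsuba(63, 42))  # 2646
```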
It happens to be the case that if you do a Fourier transform on two different polynomials, multiply your transformed quantities together point by point, and then do an inverse transform, the result is the same as if you'd just multiplied the polynomials together. I'll bet you can see where we're going with this. For very, very large numbers, Schönhage and Strassen figured out that you could represent them as polynomials, do a fast Fourier transform on both, multiply the results, do an inverse Fourier transform, and recompile the resulting polynomial into a number, all much, much faster than Karatsuba's method. I've linked a video in the description of someone doing it by hand for two two-digit numbers. And again, you're not going to be using it to figure out how many square feet your apartment is, but rather than completing in n squared time, it's down to, are you ready for this, n times log n times log log n time. It's actually the most commonly used algorithm in the world for accurately multiplying numbers with more than 33,000 digits, because some bright coders figured out a highly optimized way to implement it.

But even having built this impressively speedy method, Schönhage and Strassen weren't happy with it, for the most mathematician-ass reason I can think of: it wasn't pretty enough. They conjectured that sometime in the future, someone else would beat their algorithm with one that could solve the multiplication problem in n log n time, what they imagined must be the speed limit of the function, because it's so much nicer to look at. Fellow theorist and rapid multiplication researcher Martin Fürer described the attitude persisting even after their record stood for 40 years: "It was kind of a general consensus that multiplication is such an important basic operation that, just from an aesthetic point of view, such an important operation requires a nice complexity bound." Mathematicians never change.

Fürer discovered a method that was incrementally better, ushering in a number of new algorithms that crept closer and closer to that speculative n log n time, until March 18th of this year, when David Harvey and Joris van der Hoeven published a method for integer multiplication in n log n time, what most people in the know consider to be the optimal speed. I've linked the paper in the description. It uses a smattering of techniques that we've talked about: it breaks the numbers into smaller chunks, it uses fast Fourier transforms recursively, all sorts of wacky stuff. But although the theoretical number of steps required to execute the Harvey-van der Hoeven multiplication algorithm doesn't grow as quickly as other techniques discovered to date, there's an awful lot of upfront bookkeeping and shuffling that needs to happen in order to make it work. At low values, it's much, much slower than even peasant multiplication, which is why the authors hard-coded a cutoff of two to the power of 1729 to the power of 12. The beastly method defaults to other fast multiplication algorithms for numbers below this threshold, which is many orders of magnitude larger than the number of electrons in the observable universe. Mathematicians never change.

We don't often spare a lot of thought for all the weird little number shuffling techniques we learned in our math classes, and it's easy to imagine why some crotchety people would complain about new methods. Learning how to do it reliably at all is actually a pretty impressive testament to how smart the average person is.
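To make the transform, multiply, transform-back pipeline concrete, here's a toy Python version. To be clear, this is not Schönhage and Strassen's actual algorithm, which works in modular arithmetic to keep everything exact; this sketch runs a floating-point FFT over the digits, so the rounding is only trustworthy for modestly sized numbers:

```python
import cmath

def fft(coeffs, invert=False):
    """Recursive Cooley-Tukey FFT; the input length must be a power of two."""
    n = len(coeffs)
    if n == 1:
        return coeffs
    even = fft(coeffs[0::2], invert)
    odd = fft(coeffs[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def fft_multiply(x, y):
    """Treat each number's digits as polynomial coefficients, transform,
    multiply pointwise, transform back, and recompile into a number."""
    a = [int(d) for d in reversed(str(x))]    # least significant digit first
    b = [int(d) for d in reversed(str(y))]
    size = 1
    while size < len(a) + len(b):             # power of two with room to spare
        size *= 2
    fa = fft(a + [0] * (size - len(a)))
    fb = fft(b + [0] * (size - len(b)))
    fc = [u * v for u, v in zip(fa, fb)]      # pointwise multiplication
    coeffs = fft(fc, invert=True)
    digits = [round(c.real / size) for c in coeffs]   # undo the FFT's scaling
    # "Digits" can exceed 9 here; summing with place values handles the carries.
    return sum(d * 10**i for i, d in enumerate(digits))

print(fft_multiply(63, 42))  # 2646
```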
It's a little comforting to think that even at the highest levels of mathematics and computer science, some of the smartest people in the world are still struggling to figure out how to do their times tables faster. Do you know any cool techniques for making arithmetic a little easier? Please leave a comment below and let me know what you think.

Quick bit of housekeeping. One, if you haven't joined the THUNK Discord channel, please, please join it. It's so, so cool. Two, I'm going to be moving cross-country in the next week, with Thanksgiving and all sorts of other stuff on top, so the next episode happens when I find a place to hang these up. Yep. Anyways, thank you very much for watching. Don't forget to blah, blah, subscribe, blah, share, and don't stop thunking.