In the book, you've read about floating-point data values. What are they, and where does that name come from? In general, a floating-point number is one that has a fractional part, a decimal point, in contrast with integers, which are whole numbers.

As to the origin of the term, a bit of history. Back in the old days, when computers were primarily used in the business world, common business-oriented programming languages let you specify that a value had a fixed number of decimal places to the right of the decimal point. So a value fixed to two decimal places would always be represented with exactly two decimal places, no more, no fewer. The business world was happy with this. Scientists, not so much. Doing scientific calculations accurately requires varying numbers of decimal places. In other words, the decimal point can't be fixed in place. It has to float according to the demands of the calculation, and that's where the term comes from.

In many programming languages, the float data type specifically means a representation that takes up 32 bits, can represent numbers from roughly 10⁻³⁸ to 10³⁸, and is accurate to about 6 to 9 significant digits. However, that's not a large enough range or precision for some scientific calculations, so there's another data type called double, which uses twice the space and gives you a phenomenally greater range and almost twice the precision. In fact, double is often called double precision for that very reason. In most Python implementations, the float class, which floating-point values belong to, uses a double-precision format internally to give you the best range and accuracy for your calculations.
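If you'd like to see this for yourself, here is a minimal sketch, assuming a CPython-style interpreter where float is backed by an IEEE 754 double. It inspects sys.float_info, which reports the range and precision of the underlying representation:

```python
import sys

# sys.float_info describes the double-precision format behind Python's float.
info = sys.float_info

print(info.max)       # largest finite float, about 1.8e308 (double-precision range)
print(info.min)       # smallest positive normalized float, about 2.2e-308
print(info.dig)       # decimal digits represented faithfully, typically 15
print(info.mant_dig)  # bits in the mantissa, typically 53

# Even with 15-17 significant digits, binary floating point can't represent
# every decimal fraction exactly, so small rounding effects still appear.
print(0.1 + 0.2)      # 0.30000000000000004, not exactly 0.3
```

Running this on a typical machine shows range and precision far beyond a 32-bit float, which is why Python's single float type is generally enough for everyday calculations.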