As we learned in the first chapter, computers work with binary digits, that is, with numbers, at the hardware level. If all we have is numbers, how do we represent characters such as letters and symbols in numeric form? We use encoding. Remember in grade school when you had a simple code for passing secret messages where A became 1, B became 2, C became 3, and so on? That's the idea. We set up a numeric correspondence for each character. The grade school approach is the right concept, but it's too simplistic for our needs. One standard encoding for characters that includes uppercase and lowercase letters, numerals, and punctuation marks is ASCII, the American Standard Code for Information Interchange. It uses one byte per character, which limits it to at most 256 different values (standard ASCII itself defines only 128). In ASCII, the capital A is encoded as 65, the lowercase a is 97, an exclamation point has the numeric value 33, and the numeral 5 has the value 53.

ASCII is great, but 256 numbers aren't enough to encode languages other than English, and definitely not enough to encode all of the Korean syllabary or the Japanese and Chinese characters. You need at least two bytes for those. In the past, each country came up with its own standard for encoding, and it was pretty much a mess. Computer manufacturers and computer scientists got together to develop a single encoding, Unicode, to handle all the world's languages. That is the system Java uses for encoding characters as numeric values.
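To see this correspondence for yourself in Java, you can cast a char to an int and print the result. The short sketch below (the class name CharCodes is just a placeholder for this example) prints the numeric values of the characters mentioned above; because Unicode kept the original ASCII values for these characters, the output matches the numbers given.

    public class CharCodes {
        public static void main(String[] args) {
            // Casting a char to an int reveals its numeric (Unicode) value.
            // For these characters, the Unicode values match the ASCII values.
            System.out.println((int) 'A');   // prints 65
            System.out.println((int) 'a');   // prints 97
            System.out.println((int) '!');   // prints 33
            System.out.println((int) '5');   // prints 53
        }
    }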