UNICODE Basics: What's Character Set, Character Encoding, UTF-8, and All That?


What's Character Encoding?

Any text has to go thru encoding/decoding in order to be properly stored as a file or displayed on screen. Suppose your language is Chinese (or Japanese, Russian, Arabic, or even English). Your computer needs a way to translate the character set of your language's writing system into a sequence of 1s and 0s. This transformation is called character encoding.

There are many encoding systems. The most popular one used today, by far, is Unicode's UTF-8.

What's a Character Set?

A character set is a fixed collection of symbols. For example, the English alphabet “A” to “Z” and “a” to “z” can be a character set, with a total of 52 symbols.

One of the simplest standardized character sets is “ASCII”, started in the 1960s, and almost the only one used in the USA up to the 1990s. (ASCII = American Standard Code for Information Interchange.) ASCII contains 128 symbols. It includes all the {letters, digits, punctuation} you see on a PC keyboard.

ASCII is designed for Latin alphabets only. ASCII cannot be used for the Arabic alphabet (أبجدية عربية‎), the Russian alphabet (русский алфавит), Chinese characters (漢字), etc. Also, ASCII does not contain symbols such as { © α β « » …}. Nor can ASCII be used for some European languages that have characters such as è é å ø ü.

Character Set and Encoding System

Character Set and Encoding System are different concepts, but often confused with each other. A char set is just a standardized set of chars. An encoding system is a standardized way to transform each char in a char set into a number.

In the early days of computing, these two concepts were not clearly distinguished; a standard was just called a char set or an encoding system. For example, ASCII does not really separate the concepts, since it's very simple, dealing with only 128 chars (including invisible “control characters”). Another example: HTML has <meta http-equiv="Content-Type" content="text/html;charset=utf-8">; the syntax contains the word “charset”, but it's actually about encoding, not charset. 〔☛ HTML: Character Sets and Encoding〕

An encoding system defines a character set implicitly, because it needs to define which characters (symbols) it is designed to handle.
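For example, here's a Python sketch showing ASCII's implicit character set in action: encoding a char inside the set works, and encoding a char outside it fails.

```python
# "A" is inside ASCII's implicit character set of 128 chars; encoding works.
print("A".encode("ascii"))  # b'A'

# Greek alpha α is outside ASCII's character set; encoding fails.
try:
    "α".encode("ascii")
except UnicodeEncodeError as err:
    print("not in ASCII's character set:", err)
```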

Unicode's Character Set and Encoding Systems

Unicode's Character Set

Unicode's character set includes ALL human languages' written symbols. It includes the tens of thousands of Chinese characters, math symbols, as well as characters of dead languages, such as Egyptian hieroglyphs. 〔☛ Sample Characters of Unicode〕


Unicode Character's Code Point

Each character in Unicode is given a unique ID. This ID is a number (integer), called the char's code point.

For example, the code point for the Greek letter alpha α is 945. In hexadecimal it's “3B1”. In standard Unicode notation it is written as “U+03B1”.
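In Python, for instance, ord gives a character's code point and chr goes the other way (a minimal sketch):

```python
cp = ord("α")         # code point of Greek alpha, as an integer
print(cp)             # 945
print(hex(cp))        # 0x3b1
print(f"U+{cp:04X}")  # U+03B1  (standard Unicode notation)
print(chr(0x3B1))     # α  (back from code point to character)
```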

Unicode's Encoding System: UTF-8, UTF-16, …

Then, Unicode defines several encoding systems. UTF-8 and UTF-16 are the two most popular Unicode encoding systems. Each encoding system has advantages and disadvantages.

UTF-8 is suitable for text that is mostly Latin alphabet letters, for example English, Spanish, French. Most Linux files are in UTF-8 by default. The UTF-8 encoding system is backwards compatible with ASCII. (Meaning: if a file contains only characters in ASCII, then encoding the file using UTF-8 results in the same byte sequence as encoding it using ASCII.)
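This backwards compatibility is easy to check, e.g. in Python (a sketch using a sample ASCII-only string):

```python
s = "hello"  # contains ASCII characters only
utf8_bytes = s.encode("utf-8")
ascii_bytes = s.encode("ascii")
print(utf8_bytes == ascii_bytes)  # True: identical byte sequences
print(utf8_bytes)                 # b'hello'
```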

UTF-16 is a more modern encoding system designed for Unicode. With UTF-16, every char is encoded into at least 2 bytes, and the commonly used characters in Unicode are exactly 2 bytes. For Asian languages, UTF-16 is more efficient: smaller file size and less complexity in processing.

There's also UTF-32, which always uses 4 bytes per character. It creates larger files, but is simpler to parse. Currently, UTF-32 is not used much.
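The size trade-offs above can be seen by encoding sample text with each system. A Python sketch (the “-le” variants are used so no byte-order mark is added to the count):

```python
cjk = "漢字"  # two Chinese characters
print(len(cjk.encode("utf-8")))        # 6 bytes: 3 per character
print(len(cjk.encode("utf-16-le")))    # 4 bytes: 2 per character
print(len(cjk.encode("utf-32-le")))    # 8 bytes: 4 per character

latin = "hello"  # five Latin letters
print(len(latin.encode("utf-8")))      # 5 bytes: 1 per character
print(len(latin.encode("utf-16-le")))  # 10 bytes: 2 per character
```

So for Chinese text UTF-16 beats UTF-8, while for Latin text UTF-8 beats UTF-16, and UTF-32 is the largest either way.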

Decoding

When an editor opens a file, it needs to know the encoding system used, in order to decode the byte stream and map the characters to fonts, so the original characters display properly. In general, the info about which encoding system a file uses is not bundled with the file.

Before the internet, this wasn't much of a problem, because the English-speaking world mostly used ASCII, and non-English regions used encoding schemes particular to their regions.

With the internet, files in different languages started being exchanged a lot. When opening a file, Windows applications may try to guess the encoding system used, by some heuristics. When an app opens a file assuming a wrong encoding, typically the result is gibberish. Usually, you can explicitly tell an app to use a particular encoding to open the file. (⁖ in web browsers, there's usually a menu. In Firefox, it's under View ▸ Character Encoding.) Similarly, when saving a file, there's usually an option for you to specify what encoding to use. For example, in Microsoft Notepad, when you save a file, there's an “Encoding” menu at the bottom of the Save dialog.
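A wrong-guess decode can be simulated, e.g. in Python (a sketch, with latin-1 standing in for the wrong guess):

```python
data = "漢字".encode("utf-8")  # the file's actual bytes, written as UTF-8
print(data.decode("utf-8"))    # 漢字 — decoded with the correct encoding
# Decoded with a wrong guess, each byte becomes an unrelated Latin-1
# character: gibberish (often called “mojibake”).
print(data.decode("latin-1"))
```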

Fonts

When a computer has decoded a file, it then needs to display the characters as glyphs on the screen. For our purposes, this set of glyphs is a font. So, your computer now needs to map the Unicode code points to a font.

For Asian languages, such as Chinese, Japanese, Korean, or languages using the Arabic alphabet as their writing system (Arabic, Persian), you also need a proper font to display the file correctly.

See: Best Unicode Fonts for Programing.

Input Method

For languages that are not based on an alphabet, such as Chinese, you need an input method to type them. For an example, see: Emacs Chinese Input for Studying Chinese.

What's the Most Popular Encoding?

It's Unicode's UTF-8, by far. Unicode Popularity on Web by Google.

The ones likely to remain widely used in the future are:

See also: Intro to Chinese Encoding; What Character Encoding Does Chinese Sites Use?.

For more detail, see: 〔General questions, relating to UTF or Encoding Form, by the Unicode Consortium @ http://www.unicode.org/faq/utf_bom.html〕
