UTF-7

UTF-7 (7-bit Unicode Transformation Format) is a variable-length character encoding that was proposed for representing Unicode text using a stream of ASCII characters. It was originally intended to provide a means of encoding Unicode text for use in Internet e-mail messages that was more efficient than the combination of UTF-8 with quoted-printable.

Motivation

MIME, the modern standard for e-mail formatting, forbids encoding headers using byte values above the ASCII range. Although MIME allows encoding the message body in various character sets (broader than ASCII), the underlying transmission infrastructure (SMTP, the main e-mail transfer standard) is still not guaranteed to be 8-bit clean. Therefore, a non-trivial content transfer encoding has to be applied in case of doubt. Unfortunately, base64 has the disadvantage of making even US-ASCII characters unreadable in non-MIME clients. On the other hand, UTF-8 combined with quoted-printable produces a very size-inefficient format, requiring 6–9 bytes for non-ASCII characters from the BMP and 12 bytes for characters outside the BMP, since each UTF-8 byte expands to a three-byte =XX sequence under quoted-printable.

Provided certain rules are followed during encoding, UTF-7 can be sent in e-mail without using an underlying MIME transfer encoding, but it must still be explicitly identified as the text character set. In addition, if used within e-mail headers such as "Subject:", UTF-7 must be contained in MIME encoded words identifying the character set. Since encoded words force the use of either quoted-printable or base64, UTF-7 was designed to avoid using the = sign as an escape character, thereby preventing double escaping when it is combined with quoted-printable (or its variant, the RFC 2047/1522 ?Q? encoding of headers).

UTF-7 is generally not used as a native representation within applications as it is very awkward to process. Despite its size advantage over the combination of UTF-8 with either quoted-printable or base64, the Internet Mail Consortium recommends against its use.[1]

The 8BITMIME SMTP extension has also been introduced, which reduces the need to encode message bodies in a 7-bit format.

A modified form of UTF-7 is currently used in the IMAP e-mail retrieval protocol for mailbox names.[2]

Description

UTF-7 was first proposed as an experimental protocol in RFC 1642, A Mail-Safe Transformation Format of Unicode. This RFC has been made obsolete by RFC 2152, an informational RFC which never became a standard. As RFC 2152 clearly states, the RFC "does not specify an Internet standard of any kind". Despite this, RFC 2152 is quoted as the definition of UTF-7 in the IANA's list of charsets. UTF-7 is not a Unicode standard either: The Unicode Standard 5.0 lists only UTF-8, UTF-16 and UTF-32. There is also a modified version, specified in RFC 2060 (IMAP4rev1), which is sometimes identified as UTF-7.

Some characters can be represented directly as single ASCII bytes. The first group is known as "direct characters" and contains 62 alphanumeric characters and 9 symbols: ' ( ) , - . / : ?. The direct characters are considered very safe to include literally. The other main group, known as "optional direct characters", contains all other printable characters in the range U+0020–U+007E except ~ \ + and space. Using the optional direct characters reduces size and enhances human readability but also increases the chance of breakage by things like badly designed mail gateways and may require extra escaping when used in encoded words for header fields.
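
For illustration, the two sets can be derived in Python directly from the description above (a minimal sketch; the variable names are ours):

    import string

    # Set D ("direct characters"): 62 alphanumerics plus nine symbols.
    direct = set(string.ascii_letters + string.digits + "'(),-./:?")

    # Set O ("optional direct characters"): every other printable ASCII
    # character except "+", "\" and "~" (space is handled separately).
    optional = {chr(c) for c in range(0x21, 0x7F)} - direct - set('+\\~')

    print(''.join(sorted(optional)))  # !"#$%&*;<=>@[]^_`{|}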

Space, tab, carriage return and line feed may also be represented directly as single ASCII bytes. However, if the encoded text is to be used in e-mail, care is needed to ensure that these characters are used in ways that do not require further content transfer encoding to be suitable for e-mail. The plus sign (+) may be encoded as +-.

Other characters must be encoded in UTF-16 (hence code points U+10000 and higher are encoded as surrogate pairs) and then in modified Base64. The start of such a block of modified-Base64-encoded UTF-16 is indicated by a + sign. The end is indicated by any character not in the modified Base64 set. If the character after the modified Base64 is a - (ASCII hyphen-minus), it is consumed by the decoder and decoding resumes with the next character; otherwise, decoding resumes with the terminating character itself.
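
These rules can be observed with Python's built-in utf-7 codec (an implementation of RFC 2152); the expected outputs are shown as comments:

    # "+" opens a modified Base64 block; "-" closes it and is consumed.
    print(b'+AKM-1'.decode('utf-7'))   # £1  (the "-" is absorbed)

    # Any other non-Base64 character also ends a block but is kept as text.
    print(b'+AKM.'.decode('utf-7'))    # £.  (the "." terminates and remains)

    # A literal plus sign is escaped as "+-".
    print(b'+-'.decode('utf-7'))       # +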

Confusingly, Microsoft's .NET documentation calls its LEB128 string-length encoding UTF-7: "A length-prefixed string represents the string length by prefixing to the string a single byte or word that contains the length of that string. This method first writes the length of the string as a UTF-7 encoded unsigned integer, and then writes that many characters to the stream by using the BinaryWriter instance's current encoding." The accompanying example code, however, shows that instead of UTF-7, a little-endian variable-length quantity identical to LEB128 is used, and that the count is in fact a byte count, not a character count.[3]
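
For contrast, here is a minimal sketch of what the .NET example actually does: a little-endian variable-length quantity identical to LEB128 (the function name is ours, not .NET's):

    def write_7bit_encoded_int(n):
        # Emit the integer seven bits at a time, low-order group first;
        # the high bit of each byte signals that more bytes follow.
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n == 0:
                out.append(byte)
                return bytes(out)
            out.append(byte | 0x80)

    print(write_7bit_encoded_int(300))  # b'\xac\x02' -- nothing like UTF-7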

Examples

  • "Hello, World!" is encoded as "Hello, World!"
  • "1 + 1 = 2" is encoded as "1 +- 1 = 2"
  • "£1" is encoded as "+AKM-1". The Unicode code point for the pound sign is U+00A3 (which is 00A316 in UTF-16), which converts into modified Base64 as in the table below. There are two bits left over, which are padded to 0.
    Hex digit        0    0    A    3
    Bit pattern      0000 0000 1010 0011 00   (last two bits are padding)
    Six-bit groups   000000  001010  001100
    Index            0       10      12
    Base64-encoded   A       K       M
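
All three examples can be reproduced with Python's built-in utf-7 codec:

    print('Hello, World!'.encode('utf-7'))  # b'Hello, World!'
    print('1 + 1 = 2'.encode('utf-7'))      # b'1 +- 1 = 2'
    print('£1'.encode('utf-7'))             # b'+AKM-1'
    print(b'+AKM-1'.decode('utf-7'))        # £1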

Algorithm for encoding and decoding

Encoding

First, an encoder must decide which characters to represent directly in ASCII form (with a literal + escaped as +-) and which to place in blocks of Unicode characters. A simple encoder may directly encode every character it considers safe for direct encoding. However, the cost of ending a Unicode block to represent a single character directly and then starting a new block is 3 to 3⅔ bytes; this is more than the 2⅔ bytes needed to represent the character as part of a Unicode sequence. Each Unicode sequence must be encoded using the following procedure, then surrounded by the appropriate delimiters; a code sketch follows the numbered steps.

Using the £† (U+00A3 U+2020) character sequence as an example:

  1. Express each character's Unicode (UTF-16) code unit in binary:
    0x00A3 → 0000 0000 1010 0011
    0x2020 → 0010 0000 0010 0000
  2. Concatenate the binary sequences
    0000 0000 1010 0011 and 0010 0000 0010 0000 → 0000 0000 1010 0011 0010 0000 0010 0000
  3. Regroup the binary into groups of six bits, starting from the left:
    0000 0000 1010 0011 0010 0000 0010 0000 → 000000 001010 001100 100000 001000 00
  4. If the last group has less than six bits, add trailing zeros:
    000000 001010 001100 100000 001000 00 → 000000 001010 001100 100000 001000 000000
  5. Replace each group of six bits with a respective Base64 code:
    000000 001010 001100 100000 001000 000000 → AKMgIA
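
The five steps amount to Base64-encoding the UTF-16 big-endian byte sequence and dropping the = padding, so they can be sketched with Python's standard library alone (the helper name is ours):

    import base64

    def encode_utf7_block(s):
        # Steps 1-2: express the characters as UTF-16 big-endian bytes.
        raw = s.encode('utf-16-be')
        # Steps 3-5: standard Base64 already regroups the bits into
        # six-bit units, pads with zeros, and maps each unit to a code
        # letter; modified Base64 merely omits the "=" padding.
        b64 = base64.b64encode(raw).decode('ascii').rstrip('=')
        return '+' + b64 + '-'

    print(encode_utf7_block('£†'))  # +AKMgIA-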

Decoding

First, the encoded data must be separated into plain ASCII text chunks (including + signs followed by a dash) and nonempty Unicode blocks, as described above. Each Unicode block is then decoded with the following procedure, using the result of the encoding example above as our example (a code sketch follows the steps):

  1. Express each Base64 code as the bit sequence it represents:
    AKMgIA → 000000 001010 001100 100000 001000 000000
  2. Regroup the binary into groups of sixteen bits, starting from the left:
    000000 001010 001100 100000 001000 000000 → 0000000010100011 0010000000100000 0000
  3. If there is an incomplete group at the end, discard it (If the incomplete group contains more than four bits or contains any ones, the code is invalid):
    0000000010100011 0010000000100000
  4. Each group of 16 bits is a character's Unicode (UTF-16) number and can be expressed in other forms:
    0000 0000 1010 0011 ≡ 0x00A3 ≡ 163 in decimal
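
The inverse can likewise be sketched with the standard library: restore the padding, decode the Base64, and read the resulting bytes as UTF-16 code units (the helper name is ours):

    import base64

    def decode_utf7_block(b64):
        # Step 1: re-add the "=" padding that modified Base64 omits.
        raw = base64.b64decode(b64 + '=' * (-len(b64) % 4))
        # Steps 2-4: each 16-bit group is one UTF-16 code unit; the
        # big-endian decode turns them back into characters.
        return raw.decode('utf-16-be')

    print(decode_utf7_block('AKMgIA'))  # £†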

Security

UTF-7 allows multiple representations of the same source string; in particular, ASCII characters can be represented as parts of Unicode blocks. As such, if standard ASCII-based escaping or validation processes are used on UTF-7 text, Unicode blocks may be used to slip malicious strings past them.

Older versions of Internet Explorer can be tricked into interpreting a page as UTF-7. This can be exploited for a cross-site scripting attack, since the < and > marks can be encoded as +ADw- and +AD4- in UTF-7, which most validators let through as simple text.[4]
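
A sketch of the attack surface: an ASCII-level validator sees no angle brackets in the byte string below, but a UTF-7 decoder reconstructs live markup (illustrative payload only):

    # No literal "<" or ">" appears in these bytes...
    payload = b'+ADw-script+AD4-alert(1)+ADw-/script+AD4-'
    # ...yet decoding them as UTF-7 yields an executable script tag.
    print(payload.decode('utf-7'))  # <script>alert(1)</script>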

References

  1. ^ Internet Mail Consortium, Using International Characters in Internet Mail, 1 August 1998. Retrieved 8 January 2009.
  2. ^ RFC 3501, section 5.1.3.
  3. ^ MSDN, http://msdn.microsoft.com/en-us/library/yzxa6408.aspx. Retrieved 11 August 2011.
  4. ^ http://code.google.com/p/doctype/wiki/ArticleUtf7
