Conversion between numbers and strings

This is a placeholder for a general discussion of the fundamental algorithms that are used to convert numbers to strings and strings to numbers. These algorithms are used by nearly all programming language compilers and assemblers in order to convert a numeric literal such as:

  • 0x2A (hexadecimal)
  • 052 (octal)
  • %101010 (binary)
  • 42 (decimal)

into a number that will be converted (if necessary) into a two's-complement binary value and stored inside the computer.
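
As a concrete sketch of this string-to-number step, the C function below accepts a literal written in any of the four notations above and accumulates its value with the usual multiply-and-add loop. The name parse_literal is hypothetical, the prefix conventions it recognizes (0x, %, a leading 0, or none) are assumptions chosen to match the examples, and error checking is omitted.

  #include <ctype.h>

  /* Sketch only: parse an unsigned literal in one of the notations above.
     Prefix handling and digit ranges are assumptions; invalid input is
     not detected. */
  unsigned long parse_literal(const char *s)
  {
      unsigned base = 10;
      if (s[0] == '0' && (s[1] == 'x' || s[1] == 'X')) { base = 16; s += 2; }
      else if (s[0] == '%')                            { base = 2;  s += 1; }
      else if (s[0] == '0' && s[1] != '\0')            { base = 8;  s += 1; }
      unsigned long value = 0;
      for (; *s != '\0'; ++s) {
          unsigned digit;
          if (isdigit((unsigned char)*s))
              digit = (unsigned)(*s - '0');
          else
              digit = (unsigned)(tolower((unsigned char)*s) - 'a' + 10);
          /* shift the accumulated value one digit position to the left
             in the target base, then add the new digit */
          value = value * base + digit;
      }
      return value;
  }

Called on "0x2A", "052", "%101010", or "42", this returns the same value, 42.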

Most programming languages also provide functions that transparently convert from the internal two's-complement form to binary, octal, hexadecimal, decimal, or character form for printing. For example, in the statements:

1: int i = 42;
2: print i;

line 2 causes a string conversion function to be called. That function converts the 32-bit two's-complement binary value 00000000000000000000000000101010 into the decimal number 42, and the digits of that number into the characters '4' and '2', which are then printed.
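
A minimal C sketch of what such a conversion function might do is shown below; the name to_decimal_string is hypothetical. It repeatedly divides the value by 10, maps each remainder to its ASCII digit by adding '0', and fills the buffer from the end, because the digits are produced least significant first.

  #include <stdio.h>

  /* Sketch only: convert an unsigned value to its decimal digit characters.
     The buffer is filled from the end; the returned pointer marks the
     first digit. The caller must supply a large enough buffer. */
  char *to_decimal_string(unsigned value, char buf[], int buflen)
  {
      int i = buflen - 1;
      buf[i] = '\0';
      do {
          buf[--i] = (char)('0' + value % 10);  /* 42 % 10 = 2 -> '2', then 4 % 10 = 4 -> '4' */
          value /= 10;                          /* drop the digit just emitted */
      } while (value != 0);
      return &buf[i];
  }

  int main(void)
  {
      char buf[12];  /* 12 bytes cover any 32-bit value plus the terminator */
      printf("%s\n", to_decimal_string(42, buf, (int)sizeof buf));  /* prints 42 */
      return 0;
  }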

Understanding the algorithms behind these conversions is fundamental to understanding how computers represent numeric quantities, and is fairly instructive in its own right. The algorithms themselves can be adapted into a general library for converting and displaying numbers in any base, as sketched below.
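
As a sketch of how the same divide-and-remainder loop generalizes, the hypothetical helper below formats an unsigned value in any base from 2 to 16; only the divisor and the digit alphabet change. Signed values and larger bases are left out.

  /* Sketch only: format an unsigned value in the given base (2..16).
     Digit characters are drawn from a fixed alphabet, so hexadecimal
     digits above 9 come out as 'A'..'F'. The caller supplies a buffer
     large enough for the chosen base. */
  static const char DIGITS[] = "0123456789ABCDEF";

  char *to_string_in_base(unsigned value, unsigned base, char buf[], int buflen)
  {
      int i = buflen - 1;
      buf[i] = '\0';
      do {
          buf[--i] = DIGITS[value % base];  /* pick the character for this digit */
          value /= base;                    /* move to the next digit position */
      } while (value != 0);
      return &buf[i];
  }

With a buffer of at least 33 bytes (32 binary digits plus the terminator for a 32-bit value), to_string_in_base(42, 16, buf, (int)sizeof buf) yields "2A", base 2 yields "101010", and base 8 yields "52".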
