Abstract
This paper compares and contrasts two numeric formats: Binary Coded Decimal (BCD) and floating point. In addition, it examines whether the floating point format is memory efficient, recommends a format for processing efficiency and explains why, and finally distinguishes among the exponential format, processing speed, and accuracy.
Numeric Precision
The BCD format and the floating point format are two common ways of representing numerical data. How do they differ? They differ in several features: precision, performance in calculations, and memory usage.
Binary Coded Decimal (BCD), also known as packed decimal, represents each decimal digit from zero through nine as a four-bit binary group. For example, zero is encoded as 0000 and nine as 1001. The number 18 therefore has the BCD representation 0001 1000, or 00011000, whereas in plain binary 18 is represented as 10010. According to Hyde (2009), BCD values are a sequence of nibbles, with each nibble representing a value in the range zero through nine (see Figure 1). BCD storage is not particularly memory efficient. For example, an eight-bit BCD variable can represent values in the range 0 to 99, while the same eight bits holding a binary value can represent values in the range 0 to 255. Similarly, a 16-bit binary value can represent values in the range 0 to 65,535, while a 16-bit BCD value can represent only about one sixth of those values (0 to 9,999). BCD has problems not only with storage but also with speed. Simple devices nevertheless use the BCD format because each decimal digit maps directly onto a binary pattern. A single BCD digit pair is stored as an unsigned 8-bit integer.
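The digit-by-digit encoding described above can be illustrated with a short sketch. The function name `to_bcd` is our own for illustration; it simply converts each decimal digit to its four-bit nibble:

```python
def to_bcd(n: int) -> str:
    """Encode a non-negative integer as packed BCD:
    one 4-bit nibble per decimal digit."""
    return " ".join(format(int(digit), "04b") for digit in str(n))

# 18 in BCD: one nibble for '1', one for '8'
print(to_bcd(18))        # 0001 1000
# 18 in plain binary, for comparison
print(format(18, "b"))   # 10010
```

Note how the two representations of 18 differ: BCD needs eight bits (two nibbles) where plain binary needs only five, which is exactly the storage inefficiency discussed above.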
On the other hand, computers store all data and program instructions in binary form. However, when complex data such as graphic images, video, and audio are stored in binary, the result is large and consumes more memory. That is one reason programmers use floating point numbers. According to Englander (2009), floating point numbers allow the computer to maintain a limited, fixed number of digits of accuracy together with a power that shifts the point left or right within the number to make the number larger or smaller. In Englander's decimal illustration, the format consists of a sign, a two-digit exponent, and a five-digit mantissa (e.g., 05324680). According to Hollasch (2005), IEEE-754 single-precision floating point numbers have three basic components: the sign, the exponent, and the mantissa. The mantissa combines the fraction and an implicit leading digit, and the exponent base (2) is implicit and need not be stored.
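The three IEEE-754 fields named by Hollasch (2005) can be extracted directly from the 32 bits of a single-precision value. This sketch (the helper name `decompose` is ours) pulls out the 1-bit sign, 8-bit biased exponent, and 23-bit mantissa:

```python
import struct

def decompose(x: float):
    """Split an IEEE-754 single-precision value into its
    sign (1 bit), biased exponent (8 bits), and mantissa (23 bits)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

# For 1.0: sign 0, biased exponent 127 (i.e., 2^0), mantissa 0
# because the leading 1 digit is implicit and not stored.
print(decompose(1.0))    # (0, 127, 0)
```

The mantissa of 1.0 is all zeros precisely because of the implicit leading digit the text describes: the stored fraction represents only the digits after that implied 1.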
In accuracy, the floating point format fares worse than the BCD format for decimal values. While BCD and binary fractions work the same way in principle, binary fractions differ from decimal fractions in which numbers they can represent exactly with a given number of digits, and therefore in which numbers produce rounding errors. Specifically, binary can represent exactly only those fractions whose denominator is a power of 2. Unfortunately, this excludes most numbers that have a finite fractional representation in base 10, such as 0.1. Given the format of the numbers, the IEEE Standard 754 determines the range of real numbers representable in this format: single precision in the case of 32 bits, and double precision in the case of 64 bits. These representation errors look innocuous in computer calculations, and many programmers pay no attention to them.
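The rounding error described above is easy to observe. Because 0.1 has no finite binary expansion, a binary floating point value can only store a close approximation of it, and the error accumulates under repeated addition:

```python
from decimal import Decimal

# 0.1 cannot be represented exactly in binary floating point,
# so summing it ten times does not yield exactly 1.0.
total = sum([0.1] * 10)
print(total == 1.0)   # False

# Decimal reveals the value actually stored for the literal 0.1
print(Decimal(0.1))   # 0.1000000000000000055511151231257827021181583404541015625
```

This is precisely the kind of "innocuous" error the text warns about, and why exact-decimal representations such as BCD are preferred where rounding errors are intolerable.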
BCD arithmetic is considerably slower and takes more memory than binary arithmetic. It is used primarily in financial, business, and other situations where rounding errors are intolerable; pocket calculators use BCD (All-business.com). However, using floating point can reduce memory consumption for large arrays. Before using floating-point data in