Computer numbers

Computer numbers are the basic units used and manipulated in a digital computer. The term binary describes the raw data as it is stored in a computer. The smallest unit of the binary system is the "bit", which can hold only a 1 or a 0; multiple bits are combined to represent larger numbers.

Overview

Even if the computer data represents symbols or characters, the computer hardware still treats it as a binary number. It is up to the software program to interpret the information correctly.

Computers use the binary numbering system, shown below alongside the decimal system we use every day. Each binary digit is called a 'bit' in computer parlance. A bit can only have a value of 1 or 0 (on or off), so counting past 1 takes more than one bit.

Table

decimal   binary   power of 2
0         0
1         1        2^0
2         10       2^1
3         11
4         100      2^2
5         101
6         110
7         111
8         1000     2^3
9         1001
 
hexadecimal binary decimal
0 0000 0
1 0001 1
2 0010 2
3 0011 3
4 0100 4
5 0101 5
6 0110 6
7 0111 7
8 1000 8
9 1001 9
A 1010 10
B 1011 11
C 1100 12
D 1101 13
E 1110 14
F 1111 15

Hexadecimal

Hexadecimal (also called base-16, or abbreviated to hex) is a number system used to simplify the display and manipulation of numbers used in computers. As the table shows, it can take up to 4 binary digits to represent the symbols used in the decimal system. Working with long strings of binary digits is cumbersome, so computer folks often work in hexadecimal instead. Hexadecimal is similar to decimal but adds 6 more characters (A, B, C, D, E, F) to fill out all 16 combinations possible with 4 binary digits (bits). Two hexadecimal digits can represent 8 bits, called a byte, which can count from 0 to 255 in decimal.

It is up to the program and computer to understand what a byte represents. It could be a binary number, a character, or even a logical value such as true or false. However, it can sometimes be confusing which numbering system is in use. Hexadecimal numbers are therefore usually written with a small x at the beginning to indicate the base, so x10 is equal to decimal 16. In this notation the letters A-F can be in either upper or lower case, and the x is often preceded by a 0 (0x10).
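
A minimal sketch of this notation in Python, which happens to use the same 0x prefix for hexadecimal literals and accepts the letters in either case:

 print(0x10)            # the 0x prefix marks hexadecimal: prints 16
 print(int("2A", 16))   # parse a hex string: 42
 print(int("2a", 16))   # same value in lower case: 42
 print(hex(255))        # format a number as hex: '0xff'
 print(bin(0x2A))       # the same value in binary: '0b101010'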

Implications of binary

In decimal we use powers of ten, giving the progression 10, 100, 1000, 10000, etc. In binary we use powers of two, giving the progression 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, etc. This is why a "thousand" bytes in computer terminology is often really 1024 bytes instead of 1000.

It is easy to translate decimal integers into the equivalent binary, but not so for fractions: many decimal fractions have no exact binary equivalent. Binary fractions are built from negative powers of two, so fractions like 1/2, 1/4 and 1/8 are exact while most others are not. Binary numbers also get long quickly and become harder to deal with; one choice is to group the bits in 4's and work with bytes. For really big or really small numbers in decimal we typically resort to scientific notation; the binary answer is to convert them to a floating-point number, which may also lose some accuracy.
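
The inexact fractions are easy to see in practice; a small Python sketch showing which decimal fractions survive the trip into binary:

 from fractions import Fraction

 # Fractions built from powers of two convert exactly...
 print(Fraction(0.5))      # 1/2
 print(Fraction(0.25))     # 1/4
 # ...but 1/10 has no finite binary expansion, so 0.1 is only an approximation.
 print(Fraction(0.1))      # 3602879701896397/36028797018963968
 print(0.1 + 0.2 == 0.3)   # False, because of that rounding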

Note that computer numbers are often used to identify the binary value of characters (such as ASCII codes) or any other binary information, since values are easier to read in hexadecimal than in pure binary. Binary can also be grouped 3 bits at a time using octal notation, which uses only the digits 0 to 7 (a short example follows below).
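
Octal is just as easy to demonstrate; in Python (which marks octal literals with an 0o prefix):

 print(oct(255))        # '0o377': 255 is 11 111 111 in 3-bit groups
 print(int("377", 8))   # 255
 print(0o777)           # 511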

Note that we have referred to this binary data as numbers, but it may also be interpreted as printable characters or any other form of data. The computer treats it as numbers, but the program or user can interpret it differently.

Byte

The term byte is used for a group of binary bits, typically one character. Generally a byte is 8 bits, or two hexadecimal digits. Thus a single byte (2^8) has 256 possible values, ranging from 0 to 255. When abbreviated, bit is written in lower case while Byte is in upper case, so a transfer rate of 300 b/s means bits per second while 300 B/s would be 8 times faster. Memory and disk capacity is normally specified in bytes.
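
A quick check of these figures, here as a small Python sketch:

 print(2 ** 8)     # 256 possible values in one byte
 print(0xFF)       # 255, the largest of them
 print(300 * 8)    # 2400: a 300 B/s link carries 2400 bits per second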

Larger Bytes

Multiple bytes can be given a prefix that specifies multiples of 1,000. These prefixes are K for Kilo, M for Mega, G for Giga and T for Tera. For example, 1,000 bytes would be 1 KB (note that 1 Kb would be 1,000 bits). Similarly 1,000 KB would be 1 MB (1 Megabyte), while 1,000 MB would be 1 GB (1 Gigabyte), that is 1,000,000,000 bytes.

Technically, in binary or hexadecimal terms the prefixes are multiples of 1024, so 1024 KB make 1 MB. Thus the hexadecimal number 400 is 1 KB, and F FFFF is just under 1 MB. This accounts for differences in how the capacity of a disk drive or memory module might be specified.
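
The two interpretations of the prefixes are easy to compare; a small Python sketch:

 print(0x400)       # 1024, one "binary" kilobyte
 print(0xFFFFF)     # 1048575, just under a "binary" megabyte (1024 * 1024)
 print(1000 ** 3)   # 1 GB as a disk maker counts it: 1,000,000,000
 print(1024 ** 3)   # 1 GB counted in binary multiples: 1,073,741,824 (about 7% more)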

Word

To carry the analogy of one byte representing a character a bit further, multiple bytes can represent a word. A computer word is generally the size of an instruction or a meaningful memory location, and computers are often identified by the size of their word. A native 32-bit computer has a word size of 32 bits, or 4 bytes. A 32-bit word can hold a number up to about 4 Gig (2 Gig for a signed number). To represent a larger number you might use a double word, or perhaps a 64-bit computer.

There are implications to the word size, as it affects limits. For example, a 32-bit word limits the storage a computer can address directly with one word to 4 GB. Normally numbers in a computer are signed, meaning they can be positive or negative; one bit is reserved for the sign, leaving 31 bits for the number itself. This effectively halves the size of the number that can be stored in a word, to 2 GB. This, for example, is why the format of SD cards had to change when their capacity exceeded 2 GB. It is also the cause of a future problem where dates stored in computer files will overflow. Time in a computer is generally stored as a total count of seconds in one word, measured from a fixed starting date; in Unix this is Jan 1 1970 UT. Dates are computed from the number of seconds in that word, and a signed 32-bit count will overflow in January 2038 (the "Year 2038" problem). Negative time values are assigned to dates prior to 1970.
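
The overflow date follows directly from the size of a signed 32-bit word; a minimal sketch in Python:

 from datetime import datetime, timezone

 max_seconds = 2 ** 31 - 1   # largest positive value in a signed 32-bit word
 print(max_seconds)          # 2147483647
 # Counting that many seconds forward from the Unix epoch (1 Jan 1970 UTC):
 print(datetime.fromtimestamp(max_seconds, tz=timezone.utc))
 # 2038-01-19 03:14:07+00:00 - the moment a signed 32-bit clock rolls over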

Converting Hexadecimal to decimal

In the discussion that follows, use the table above to translate individual characters to their decimal equivalents, or see the tables below for translations of full 2-digit hex values.

In our decimal system each digit is multiplied by a power of 10 to build larger numbers. For hexadecimal you multiply by powers of 16 instead. Working right to left, take the decimal equivalent of each digit, multiply it by the appropriate value, and add the results together: the rightmost digit is multiplied by 1, the second by 16. For example, hex 2A is 10 x 1 + 2 x 16 = 42. Each successive digit is multiplied by 256, 4096, 65536, 1048576, etc., using successive powers of 16. Two digits, as shown below, can count from 0 to 255; three digits can go to 4095, while four can reach 65535.
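
The same right-to-left procedure written out as a short Python sketch (Python's built-in int(s, 16) does the whole job in one step):

 def hex_to_decimal(s):
     """Convert a hexadecimal string to decimal using positional weights of 16."""
     digits = "0123456789ABCDEF"
     total = 0
     for position, ch in enumerate(reversed(s.upper())):
         total += digits.index(ch) * 16 ** position
     return total

 print(hex_to_decimal("2A"))    # 10 x 1 + 2 x 16 = 42
 print(hex_to_decimal("FFFF"))  # 65535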

Hexadecimal translation tables

Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec
00 00 10 16 20 32 30 48 40 64 50 80 60 96 70 112
01 01 11 17 21 33 31 49 41 65 51 81 61 97 71 113
02 02 12 18 22 34 32 50 42 66 52 82 62 98 72 114
03 03 13 19 23 35 33 51 43 67 53 83 63 99 73 115
04 04 14 20 24 36 34 52 44 68 54 84 64 100 74 116
05 05 15 21 25 37 35 53 45 69 55 85 65 101 75 117
06 06 16 22 26 38 36 54 46 70 56 86 66 102 76 118
07 07 17 23 27 39 37 55 47 71 57 87 67 103 77 119
08 08 18 24 28 40 38 56 48 72 58 88 68 104 78 120
09 09 19 25 29 41 39 57 49 73 59 89 69 105 79 121
0A 10 1A 26 2A 42 3A 58 4A 74 5A 90 6A 106 7A 122
0B 11 1B 27 2B 43 3B 59 4B 75 5B 91 6B 107 7B 123
0C 12 1C 28 2C 44 3C 60 4C 76 5C 92 6C 108 7C 124
0D 13 1D 29 2D 45 3D 61 4D 77 5D 93 6D 109 7D 125
0E 14 1E 30 2E 46 3E 62 4E 78 5E 94 6E 110 7E 126
0F 15 1F 31 2F 47 3F 63 4F 79 5F 95 6F 111 7F 127
Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec Hex Dec
80 128 90 144 A0 160 B0 176 C0 192 D0 208 E0 224 F0 240
81 129 91 145 A1 161 B1 177 C1 193 D1 209 E1 225 F1 241
82 130 92 146 A2 162 B2 178 C2 194 D2 210 E2 226 F2 242
83 131 93 147 A3 163 B3 179 C3 195 D3 211 E3 227 F3 243
84 132 94 148 A4 164 B4 180 C4 196 D4 212 E4 228 F4 244
85 133 95 149 A5 165 B5 181 C5 197 D5 213 E5 229 F5 245
86 134 96 150 A6 166 B6 182 C6 198 D6 214 E6 230 F6 246
87 135 97 151 A7 167 B7 183 C7 199 D7 215 E7 231 F7 247
88 136 98 152 A8 168 B8 184 C8 200 D8 216 E8 232 F8 248
89 137 99 153 A9 169 B9 185 C9 201 D9 217 E9 233 F9 249
8A 138 9A 154 AA 170 BA 186 CA 202 DA 218 EA 234 FA 250
8B 139 9B 155 AB 171 BB 187 CB 203 DB 219 EB 235 FB 251
8C 140 9C 156 AC 172 BC 188 CC 204 DC 220 EC 236 FC 252
8D 141 9D 157 AD 173 BD 189 CD 205 DD 221 ED 237 FD 253
8E 142 9E 158 AE 174 BE 190 CE 206 DE 222 EE 238 FE 254
8F 143 9F 159 AF 175 BF 191 CF 207 DF 223 EF 239 FF 255
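
The translation tables above can also be generated with a few lines of Python:

 # Print hex and decimal side by side: 16 rows, each column a block of
 # 16 consecutive values (the tables above split this into two halves).
 for row in range(16):
     line = ""
     for col in range(16):
         value = 16 * col + row
         line += f"{value:02X} {value:>3}  "
     print(line)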

Floating-point

Floating-point is the computer method of handling very large and very small numbers. It can deal with both positive and negative values, and there is often hardware built into the computer specifically to handle these numbers. To see how floating-point works, consider the example 1024 = 2^10. In hex this is 400; in binary it is 100 0000 0000, a 1 with 10 zeros. If we stored the 1 and the 10 separately we would need only 5 bits: one for the 1 and four for the 10. A floating-point number is stored exactly this way. If it were stored in a word (32 bits) we could devote 24 bits to the value and 8 bits to the exponent. In practice 7 bits are used for the exponent (0-127) and one bit for its sign (a negative exponent is for very small numbers). Thus we can represent numbers all the way up to about 3.4 times 10^38, with about 7 decimal digits of accuracy from the 24 bits. You might ask, what about leaving a bit for negative numbers? Since leading zeros are not needed in floating-point, the first digit can be assumed to always be a 1; it does not need to be stored, and that bit can be used for the sign. For double precision floating-point, 64 bits are used.
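
The layout actually used by most hardware today is the IEEE 754 single-precision format (1 sign bit, 8 exponent bits stored with a bias, and 23 fraction bits plus the implied leading 1), which differs slightly in detail from the split described above. A minimal Python sketch that pulls a 32-bit float apart into those fields:

 import struct

 def float_fields(x):
     """Return the sign, unbiased exponent and fraction bits of a 32-bit float."""
     (raw,) = struct.unpack(">I", struct.pack(">f", x))
     sign     = raw >> 31
     exponent = ((raw >> 23) & 0xFF) - 127   # stored exponent minus the bias
     fraction = raw & 0x7FFFFF               # the implied leading 1 is not stored
     return sign, exponent, fraction

 print(float_fields(1024.0))   # (0, 10, 0): exactly 2^10, so the fraction is empty
 print(float_fields(-0.5))     # (1, -1, 0)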

Some of the image formats used today can represent each color with a floating-point number for HDR use, but even single precision is overkill here. A half-precision format has been developed using only 16 bits (saving half the space): 11 bits are reserved for the value and 4 bits for the exponent, with 1 bit for the exponent sign. Using 11 bits of precision is still more than the 10 bits per color (30 bits total) now being touted for high-definition use, and the range, thanks to the exponent, extends to over 65,000, providing brightness recording as good as the eye can see. Image formats supporting floating-point can also use half-precision floating-point.
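
Python can pack this 16-bit format directly through the struct module; a small sketch (the standard IEEE 754 half format uses 1 sign bit, 5 exponent bits and 10 stored fraction bits):

 import struct

 # Round-trip a value through the 16-bit half-precision format.
 packed = struct.pack("<e", 3.14159)
 print(struct.unpack("<e", packed)[0])        # 3.140625: only ~3 decimal digits survive
 print(struct.unpack("<e", b"\xff\x7b")[0])   # 65504.0, the largest finite half value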

See https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers on Wikipedia for more information.
