Even if the computer data represents symbols or characters, the computer hardware will still treat it as a binary number. It is up to the software to interpret the information correctly.
The computer uses a binary numbering system, shown below on the left, as compared to the decimal system we use every day. Each binary digit is called a 'bit' in computer parlance. A bit can only have a value of 1 or 0 (on or off), so to count past 1 you need more than one bit.
Hexadecimal (also called base-16, or abbreviated to hex) is a number system used to simplify the display and manipulation of numbers used in computers. As can be seen in the table, it can take up to 4 binary digits to represent the symbols used in the decimal system. Working with binary digits can be really cumbersome, so computer folks often work in hexadecimal. Hexadecimal is similar to decimal but adds 6 more characters to the number system (A, B, C, D, E, F) to fill out all the 16 combinations possible with 4 binary digits (bits). Two hexadecimal digits can represent 8 bits, called a byte, which can count from 0 to 255 in decimal.
It is up to the program and computer to understand what a byte represents. It could be a binary number or a character, or even be interpreted as a logical representation of data such as true or false. However, it can sometimes be confusing as to which numbering system is in use. Hexadecimal numbers include a small x at the beginning to indicate that the number is being shown in hexadecimal; thus x10 is equal to decimal 16. In this notation the letters A-F can be in either upper or lower case. The x may be preceded with a 0, as in 0x10.
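As a quick illustration of this notation, Python happens to use the same 0x convention for hexadecimal literals, so the equivalences above can be checked directly:

```python
# A byte holds 8 bits, i.e. two hex digits: values 0 to 255.
value = 0x10            # hex 10 = decimal 16
print(value)            # 16
print(bin(0xFF))        # 0b11111111  (8 bits, decimal 255)
print(hex(255))         # 0xff
# Case does not matter for the letters A-F:
print(0x2a == 0x2A)     # True
```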
 Implications of binary
In decimal we use powers of ten, thus we see progressions as 10, 100, 1000, 10000, etc. In binary we see powers of two, thus the progression is 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, etc. This is why, in computer terminology, a "thousand" bytes is really 1024 bytes instead of 1000.
It is easy to translate decimal integers into the equivalent binary, but not so for fractions. There are many decimal fractions that have no exact binary equivalent. Binary fractions are built from negative powers of 2, so fractions like 1/2, 1/4, and 1/8 are exact, but others, such as 1/10, are not. Binary numbers also get long quickly and become more difficult to deal with. One choice is to group them in 4s as hexadecimal digits (or in 8s as bytes). For really big numbers or really small numbers in decimal we typically resort to scientific notation. For binary the answer is to convert them to a floating-point number, which may also lose some accuracy. (See below)
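A small Python sketch makes the fraction problem visible; the fractions module can show the exact binary value the computer actually stores:

```python
from fractions import Fraction

# 1/2 and 1/4 are exact negative powers of 2, so their sum is exact:
print(0.5 + 0.25)       # 0.75
# 1/10 has no exact binary equivalent, so the result is slightly off:
print(0.1 + 0.2)        # 0.30000000000000004, not 0.3
# The exact binary value stored for 0.1:
print(Fraction(0.1))
```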
Note that hexadecimal numbers are often used to identify the binary values of characters, such as ASCII codes, or any other binary information, since it is easier to read values in hexadecimal than in pure binary. Sometimes binary is also represented in groups of 3 bits using octal notation, which simply uses the digits 0 to 7 as shown in the table.
Note that we have referred to this binary data as numbers, but it may also be interpreted as printable characters or any other form of data. The computer handles it as numbers, but the program or user can interpret it differently.
The term byte is used to collect a number of binary bits into a character. Generally the term byte is used to represent 8 bits, or two hexadecimal digits. Thus a single byte (2^8) has 256 possible values ranging from 0 to 255. When abbreviated, the term bit is in lower case while byte is in upper case, so a transfer rate of 300 b/s would indicate bits per second while 300 B/s would be 8 times faster. Memory and disk capacity is normally specified in bytes.
To carry the analogy of one byte representing a character a bit further, multiple bytes can represent a word. A computer word is generally the size of an instruction or meaningful memory location. Computers are often identified by the size of their word. A native 32-bit computer means the word size is 32 bits, or 4 bytes. A 32-bit word can hold a number up to 4 Gig (2 Gig for a signed number). To represent a larger number you might use a double word, or perhaps a 64-bit computer.
There are implications to the word size as it affects limits. For example, a 32-bit word limits the capacity of a computer to access storage directly, using one word, to 4 GB. Normally numbers in a computer are signed. This means that they can be positive or negative, and one bit is reserved for the sign, leaving 31 bits for the number itself. This effectively halves the size of the number that can be stored in a word, to 2 GB. This, for example, is why the format of an SD card had to change when the capacity exceeded 2 GB. It is also the cause of a future problem where the date stored in computer files will overflow. Time in a computer is generally stored as total seconds in one word, counted from a fixed starting date. In Unix this is Jan 1 1970 UTC. Dates are computed from the number of seconds in the computer clock word, and a signed 32-bit count will overflow in January 2038. Negative time is assigned to dates prior to 1970.
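The arithmetic behind that overflow date can be checked in a few lines of Python (a sketch using the standard datetime module):

```python
import datetime

# Largest positive value a signed 32-bit word can hold:
max_signed = 2**31 - 1              # 2147483647 seconds

# Count that many seconds forward from the Unix starting date:
epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
overflow = epoch + datetime.timedelta(seconds=max_signed)
print(overflow)                     # 2038-01-19 03:14:07+00:00
```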
 Converting Hexadecimal to decimal
In the discussion to follow, use the table above to translate the individual characters to their decimal equivalents.
In our decimal system each digit is multiplied by powers of 10 to achieve larger numbers. For hexadecimal you would need to multiply by powers of 16 instead.
Working from right to left, take the decimal equivalent of each digit, multiply it by the appropriate value, and add the results together. For the rightmost digit multiply by 1. For the second digit multiply by 16. For each successive digit multiply by 256, 4096, 65536, 1048576, etc., using successive powers of 16. Thus hex 2A would be 10 x 1 + 2 x 16 = 42.
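The procedure above can be written out directly in Python (hex_to_decimal is a hypothetical helper name for illustration, not a library function):

```python
def hex_to_decimal(s):
    """Convert a hex string to decimal, one digit at a time, right to left."""
    digits = "0123456789ABCDEF"
    total = 0
    power = 1                       # 1, then 16, 256, 4096, ...
    for ch in reversed(s.upper()):
        total += digits.index(ch) * power
        power *= 16
    return total

print(hex_to_decimal("2A"))         # 10 x 1 + 2 x 16 = 42
print(hex_to_decimal("FF"))         # 255, the largest byte value
```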
Floating-point is the computer method of handling very large and very small numbers. It can deal with both positive and negative values. There is often hardware built into the computer specifically to deal with these numbers. To see how floating-point works, consider the example 1024 = 2^10. In hex this would be 400, and in binary it would be 100 0000 0000, a 1 with 10 zeros. If we stored the 1 and the 10 separately we would need only 5 bits: one for the 1 and four for the 10. A floating-point number is stored exactly this way. If it were stored in a word (32 bits) we could devote 24 bits to the value and 8 bits to the exponent. We would actually use 7 bits for the exponent (0-127) and one bit for its sign (a negative exponent is for very small numbers). Thus we can represent numbers clear up to 3.4 times 10^38 with about 7 digits of decimal accuracy using 24 bits. You might ask, what about leaving a bit for negative numbers? Since we don't need leading zeros in floating-point we can assume the first digit is always a 1. Thus we don't need to store it and we can use that bit for the sign. (The actual IEEE 754 single-precision format arranges things slightly differently, storing the exponent as a biased 8-bit value, but the idea is the same.) For double precision floating-point we use 64 bits.
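Python's struct module can expose the raw bits of a 32-bit float to show this storage scheme at work. The field layout below is the standard IEEE 754 single-precision one (1 sign bit, 8 biased exponent bits, 23 stored fraction bits):

```python
import struct

# Pack 1024.0 as a 32-bit float and reinterpret the same bytes as an integer:
bits = struct.unpack(">I", struct.pack(">f", 1024.0))[0]

sign     = bits >> 31               # 0 means positive
exponent = (bits >> 23) & 0xFF      # stored with a bias of 127
fraction = bits & 0x7FFFFF          # the leading 1 is implied, not stored

print(exponent - 127)               # 10, since 1024 = 2**10
print(fraction)                     # 0: nothing beyond the implied leading 1
```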
Some of the image formats used today can represent each color with a floating-point number for HDR use, but even single precision is overkill for this. A half-precision number has been developed using 16 bits (saving half the space): 11 bits of precision for the value (10 stored plus an implied leading 1), 5 bits for the exponent, and 1 bit for the sign. Eleven bits of precision is still more than the 10 bits per color (30 bits total) now being touted for high-definition use. In addition, the range of over 65,000 distinct values provides brightness recording as good as the eye can see. Image formats supporting floating-point can also use half precision.
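Python's struct module also supports this 16-bit half format (format code 'e'), which makes the space saving and the reduced precision easy to see:

```python
import struct

# A half-precision float occupies 2 bytes instead of 4:
packed = struct.pack("<e", 1.0)
print(len(packed))                  # 2

# Round-tripping 0.1 through half precision shows the limited accuracy,
# roughly 3 decimal digits:
value = struct.unpack("<e", struct.pack("<e", 0.1))[0]
print(value)                        # close to, but not exactly, 0.1
```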
See https://en.wikipedia.org/wiki/Floating-point_arithmetic#Floating-point_numbers on Wikipedia for more information.