Bit & Byte
The bit is the simplest unit: a signal that is either true or false, written as TRUE or FALSE, or simply as 1 or 0. There is no digit 2, because in the binary number system the value two is written as 10, which here reads not as “ten” but as “one zero.” To keep the notations apart, a number in the decimal system is written plainly, for example 10, while the same digits meant as a binary number are written 2#10.
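The 2# prefix above is the binary-literal notation of IEC 61131-3 Structured Text, so, assuming that environment, the same values could be written as in the sketch below (the program and variable names are only for illustration):

PROGRAM LiteralDemo
VAR
    x : BYTE;
END_VAR

x := 2;          (* decimal literal: two *)
x := 2#10;       (* binary literal: "one zero", also two *)
x := 10;         (* decimal literal: ten *)
x := 2#1010;     (* binary literal: ten *)
END_PROGRAM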
The decimal number system comes from the fact that we have ten fingers, and historically we did all our counting on them. If we had, say, three fingers on each hand, six in all, we would probably be using the base-6 number system today. Computing, however, rests on the yes-or-no signal described above, that is, on the binary number system, and that is also why the hexadecimal number system is used so often (16 = 2^4, so one hexadecimal digit stands for exactly four bits). More on that later. Let's first look at how the binary number system works, using a byte as the example.
A byte is a variable type consisting of 8 bits. Depending on which of those bits are set, the value stored in it can range from 0 to 255. The example below may help make this clearer:

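As a rough sketch of how the bit positions add up, again assuming an IEC 61131-3 Structured Text environment (the program and variable names are only for illustration): each of the eight bits carries a weight that is a power of two, from 1 up to 128, and the stored value is the sum of the weights of the bits that are set.

PROGRAM ByteDemo
VAR
    b : BYTE;
END_VAR

(* bit position:   7    6    5    4    3    2    1    0 *)
(* bit weight:    128   64   32   16   8    4    2    1 *)

b := 2#0000_0000;   (* no bit set         -> 0   *)
b := 2#0000_0001;   (* only bit 0         -> 1   *)
b := 2#1010_1010;   (* 128 + 32 + 8 + 2   -> 170 *)
b := 2#1111_1111;   (* all eight bits set -> 255 *)
END_PROGRAM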