Basic Logical Functions and Gates |
---|
While each logical element or condition must always have a logic value of either "0" or "1", we also need to have ways to combine different logical signals or conditions to provide a logical result.
For example, consider the logical statement: "If I move the switch on the wall up, the light will turn on." At first glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there's more to it than this. In this example, a more complete statement would be: "If I move the switch on the wall up and the light bulb is good and the power is on, the light will turn on."
If we look at these two statements as logical expressions and use logical terminology, we can reduce the first statement to:
Light = Switch
This means nothing more than that the light will follow the action of the switch, so that when the switch is up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be off/false/0.
Looking at the second version of the statement, we have a slightly more complex expression:
Light = Switch and Bulb and Power
Normally, we use symbols rather than words to designate the and function that we're using to combine the separate variables of Switch, Bulb, and Power in this expression. The symbol normally used is a dot, which is the same symbol used for multiplication in some mathematical expressions. Using this symbol, our three-variable expression becomes:
Light = Switch · Bulb · Power
When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need some special symbols to denote these functions in a logical diagram. There are three fundamental logical operations, from which all other functions, no matter how complex, can be derived. These functions are named AND, OR, and NOT. Each has its own gate symbol and a clearly defined behavior:

- AND: the output is 1 only when every input is 1.
- OR: the output is 1 when any input is 1.
- NOT: the single output is the inverse of the single input; 1 becomes 0, and 0 becomes 1.

These three logic gates are used in various combinations to perform tasks of any level of complexity. Some functions are so commonly used that they have been given symbols of their own, and are often packaged so as to provide that specific function directly. On the next page, we'll begin our coverage of these functions.
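As a concrete illustration of these three basic functions and of the light-switch example above, here is a minimal Python sketch (the function and variable names are illustrative choices, not part of any standard library):

```python
def AND(*inputs):
    # Output is 1 only when every input is 1.
    return int(all(inputs))

def OR(*inputs):
    # Output is 1 when any input is 1.
    return int(any(inputs))

def NOT(a):
    # Output is the inverse of the single input.
    return int(not a)

# The light example: Light = Switch . Bulb . Power
switch, bulb, power = 1, 1, 0          # the power happens to be off
light = AND(switch, bulb, power)
print(light)                           # prints 0: the light stays off
```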
Derived Logical Functions and Gates |
---|
While the three basic functions AND, OR, and NOT are sufficient to accomplish all possible logical functions and operations, some combinations are used so commonly that they have been given names and logic symbols of their own.
We will discuss three of these on this page. The first is called NAND, and consists of an AND function followed by a NOT function. The second, as you might expect, is called NOR. This is an OR function followed by NOT. The third is a variation of the OR function, called the Exclusive-OR, or XOR function. As with the three basic logic functions, each of these derived functions has a specific logic symbol and behavior, which we can summarize as follows:
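The behavior of these three derived functions can also be sketched directly from the basic functions; the following minimal Python example (with illustrative names) prints their combined truth table:

```python
def NOT(a): return int(not a)
def AND(a, b): return int(a and b)
def OR(a, b): return int(a or b)

def NAND(a, b): return NOT(AND(a, b))   # AND followed by NOT
def NOR(a, b):  return NOT(OR(a, b))    # OR followed by NOT
def XOR(a, b):  return int(a != b)      # either A or B, but not both

# Print a small truth table for the three derived functions.
print(" A B | NAND NOR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} {b} |   {NAND(a, b)}    {NOR(a, b)}   {XOR(a, b)}")
```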
The three derived functions shown above are by no means the only ones, but these form the basis of all the others. On the next page we will look at how the XOR function is derived. Then we will begin our look at practical applications for logic gates in various combinations, to see just how these simple gates can be combined to perform every possible operation in a computer.
Deriving the XOR Function |
---|
On the previous page we stated that the Exclusive-OR, or XOR, function can be described verbally as "either A or B, but not both." In the realm of digital logic there are several ways of stating this in a more detailed and precise format. We won't go into truth tables or graphic representations here; we will stick with the more complete verbal statement, "(NOT A and B) or (A and NOT B)."
The circuit required to implement this description is shown below:
[Figure: XOR circuit built from NOT, AND, and OR gates, implementing (NOT A AND B) OR (A AND NOT B)]
The practical problem with the circuit above is that it contains three different kinds of gates: AND, OR, and NOT. While this illustrates a practical application using all three of the basic gate types, it is cumbersome to construct on a printed circuit board.
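To confirm that this arrangement really matches the verbal statement, here is a minimal Python check (purely illustrative) that evaluates (NOT A AND B) OR (A AND NOT B) for all four input combinations:

```python
def xor_from_basics(a, b):
    # (NOT A AND B) OR (A AND NOT B), built only from AND, OR, and NOT.
    return int(((not a) and b) or (a and (not b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_from_basics(a, b))
# Output: 0 0 -> 0,  0 1 -> 1,  1 0 -> 1,  1 1 -> 0
```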
There are commercial packages which contain four XOR gates, but often only a single XOR function is needed in a given application, and it would be handy to create that one function using a single, common IC package.
This can easily be done with a single quad two-input NAND gate package (four NAND gates in one IC), as shown in the circuit below:
[Figure: the XOR function implemented with the four gates of a quad two-input NAND package]
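The four-NAND construction can be checked the same way; this illustrative sketch wires the gates as described above:

```python
def NAND(a, b):
    return int(not (a and b))

def xor_from_nands(a, b):
    # Four two-input NAND gates, wired as in the circuit above.
    n1 = NAND(a, b)      # first gate feeds the other three
    n2 = NAND(a, n1)
    n3 = NAND(b, n1)
    return NAND(n2, n3)  # final gate produces A XOR B

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == (a ^ b)
print("Four NAND gates reproduce the XOR truth table.")
```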
There are many ways in which the simple logic gates we have examined can be combined to perform useful functions. Some of these circuits produce outputs which are only dependent upon the current logic states of all inputs. These are called combinational logic circuits. Other circuits are designed to actually remember the past states of their inputs, and to produce outputs based on those past signals as well as the current states of their inputs. These circuits can act in accordance with a sequence of input signals, and are therefore known as sequential logic circuits.
In these pages, we will look first at combinational circuits, and then move on to sequential circuits.
Adding Binary Numbers |
---|
A key requirement of digital computers is the ability to use logical functions to perform arithmetic operations. The basis of this is addition; if we can add two binary numbers, we can just as easily subtract them, or get a little fancier and perform multiplication and division. How, then, do we add two binary numbers?
Let's start by adding two binary bits. Since each bit has only two possible values, 0 or 1, there are only four possible combinations of inputs. These four possibilities, and the resulting sums, are:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
Whoops! That fourth line indicates that we have to account for two output bits when we add two input bits: the sum and a possible carry. Let's set this up as a truth table with two inputs and two outputs, and see where we can go from there.
| A (input) | B (input) | CARRY (output) | SUM (output) |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 0 |

Well, this looks familiar, doesn't it? The Carry output is a simple AND function, and the Sum is an Exclusive-OR. Thus, we can use two gates to add these two bits together: an AND gate for the Carry and an XOR gate for the Sum. This two-gate circuit is known as a half adder.
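As an illustrative sketch of the half adder's behavior (not any particular device's API), in Python:

```python
def half_adder(a, b):
    # SUM is A XOR B; CARRY is A AND B.
    total = a ^ b
    carry = a & b
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # (sum, carry)
```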
To construct a full adder circuit, we'll need three inputs and two outputs. Since we'll have both an input carry and an output carry, we'll designate them as CIN and COUT. At the same time, we'll use S to designate the final Sum output. The resulting truth table is shown below.

| A (input) | B (input) | CIN (input) | COUT (output) | S (output) |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 |
| 0 | 1 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 | 0 |
| 1 | 1 | 0 | 1 | 0 |
| 1 | 1 | 1 | 1 | 1 |

Hmm. This is looking a bit messy. It looks as if COUT may be either an AND or an OR function, depending on the value of A, and S is either an XOR or an XNOR, again depending on the value of A. Looking a little more closely, however, we can note that the S output is actually an XOR between the A input and the half-adder SUM of the B and CIN inputs. Also, the output carry will be true whenever any two, or all three, of the inputs are logic 1.

What this suggests is also intuitively logical: we can use two half-adder circuits. The first will add A and B to produce a partial Sum, while the second will add CIN to that Sum to produce the final S output. If either half adder produces a carry, there will be an output carry; thus, COUT will be an OR function of the two half-adder Carry outputs. The resulting full adder circuit is shown below.
[Figure: full adder built from two half adders, with an OR gate combining their Carry outputs]
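The same two-half-adder structure can be written as a short, illustrative Python sketch:

```python
def half_adder(a, b):
    return a ^ b, a & b              # (sum, carry)

def full_adder(a, b, cin):
    # Two half adders, plus an OR gate on their carries.
    s1, c1 = half_adder(a, b)        # partial sum of A and B
    s, c2 = half_adder(s1, cin)      # add in the incoming carry
    cout = c1 | c2                   # carry out if either stage carried
    return s, cout

print(full_adder(1, 1, 1))           # prints (1, 1): 1 + 1 + 1 = 11 in binary
```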
The circuit above is really too complicated to be used in larger logic diagrams, so a separate symbol is used to represent a one-bit full adder. In fact, it is common practice in logic diagrams to represent any complex function as a "black box" with its input and output signals designated. It is, after all, the logical function that is important, not the exact method of performing that function.
Now we can add two binary bits together, accounting for a possible carry from the next lower order of magnitude, and sending a carry to the next higher order of magnitude. To perform multi-bit addition the way a computer would, a full adder must be allocated for each bit to be added simultaneously. Thus, to add two 4-bit numbers to produce a 4-bit sum (with a possible carry), you would need four full adders with their carry lines cascaded from one stage to the next. For two 8-bit numbers, you would need eight full adders, which can be formed by cascading two such 4-bit blocks. By extension, two binary numbers of any size may be added in this manner.
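Here is an illustrative Python sketch of such a cascade for two 4-bit numbers, each given as a list of bits from least significant to most significant (the function names and bit ordering are my own choices):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)   # carry if any two inputs are 1
    return s, cout

def ripple_add(a_bits, b_bits):
    # a_bits and b_bits are lists of bits, least-significant bit first.
    carry = 0
    sum_bits = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)   # carry ripples to the next stage
        sum_bits.append(s)
    return sum_bits, carry

# 0101 (5) + 0110 (6) = 1011 (11), with no carry out
print(ripple_add([1, 0, 1, 0], [0, 1, 1, 0]))
```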
It is also quite possible to use this circuit for binary subtraction. If a negative number is applied to the B inputs, the resulting sum will actually be the difference between the two numbers. We'll look at this subject in more detail in the page on Negative Numbers and Binary Subtraction.
In a modern computer, the adder circuitry will include the means of negating one of the input numbers directly, so the circuit can perform either addition or subtraction on demand. Other functions are commonly included in modern implementations of the adder circuit, especially in modern microprocessors.
Negative Numbers and Binary Subtraction |
---|
We have seen how simple logic gates can perform the process of binary addition. It is only logical to assume that a similar circuit could perform binary subtraction.
If we look at the possibilities involved in subtracting one 1-bit number from another, we can quickly see that three of the four possible combinations are easy and straightforward. The fourth one involves a bit more:
0 - 0 = 0
1 - 0 = 1
1 - 1 = 0
0 - 1 = 1, with a borrow bit.
That borrow bit is just like a borrow in decimal subtraction: it subtracts from the next higher order of magnitude in the overall number. Let's see what the truth table looks like.
| A (input) | B (input) | BORROW (output) | A - B (output) |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 1 |
| 1 | 1 | 0 | 0 |

This is an interesting result. The difference, A - B, is still an Exclusive-OR function, just as the sum was for addition. The borrow is still an AND function, but it is A'·B instead of A·B. What we'd like to do now is find an easy way to use the binary adder to perform subtraction as well. We already have half of it working: the difference output. Can we simply invert the A input so the AND gate will have the right signals? No, we can't, because that would invert the sense of the Exclusive-OR function. What would be really nice is to convert B to the negative equivalent of its value and then use the basic adder just as it stands. To see whether we can do that, let's consider negative binary numbers below.
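As an illustrative sketch of this one-bit subtraction, borrowing the adder terminology and calling it a half subtractor:

```python
def half_subtractor(a, b):
    # Difference is A XOR B; borrow is (NOT A) AND B.
    difference = a ^ b
    borrow = (1 - a) & b
    return difference, borrow

for a in (0, 1):
    for b in (0, 1):
        print(a, "-", b, "->", half_subtractor(a, b))   # (difference, borrow)
```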
As we discovered when looking at binary counters, once a full count is obtained, the next clock pulse will cause the counter to read zero again. Likewise, if we set up a counter to count backwards, the first clock pulse will cause the count to go from all zeroes to all ones. Thinking along these lines, we can see that the binary number 1111 might represent the decimal number 15, or it could represent the number -1. The table below shows the counting sequence for a 4-bit binary number, with decimal equivalents expressed in two ways. First we have the unsigned counting sequence, where all numbers are assumed to be positive. Then we see the signed sequence, which includes both positive and negative numbers. Looking at the two decimal counting sequences, we note two factors right away:

- The positive numbers (0 through 7) are the same in both sequences.
- In the signed sequence, every negative number has a binary MSB (most significant bit) of 1, while every positive number has an MSB of 0.
| Binary | Unsigned Decimal | Signed Decimal |
|---|---|---|
| 0000 | 0 | 0 |
| 0001 | 1 | 1 |
| 0010 | 2 | 2 |
| 0011 | 3 | 3 |
| 0100 | 4 | 4 |
| 0101 | 5 | 5 |
| 0110 | 6 | 6 |
| 0111 | 7 | 7 |
| 1000 | 8 | -8 |
| 1001 | 9 | -7 |
| 1010 | 10 | -6 |
| 1011 | 11 | -5 |
| 1100 | 12 | -4 |
| 1101 | 13 | -3 |
| 1110 | 14 | -2 |
| 1111 | 15 | -1 |

Because the positive numbers are the same in both sequences, they can be used together without difficulty; we only need to keep track of how we want to define the system. And the fact that negative numbers all have a binary MSB of 1 is helpful, because the MSB can immediately be used to identify the sign of the number. Indeed, the binary MSB is commonly known as the sign bit. The use of this bit to distinguish between positive and negative numbers also allows us to divide the counting sequence evenly between positive and negative numbers.

Now we need to look at the relationship between the binary numbers for positive and negative versions of the same magnitude. Suppose we invert each bit of 0001 (+1) to get 1110 (-2). If we then increment the result, we get 1111 (-1), which is what we wanted. Will this relationship hold for all negative numbers? In fact, it does, as you can determine for yourself. To form the negative of any number, first complement all bits of the number; the result is the one's complement of the original number. Then add 1 to the result, forming the two's complement of the original number. Arithmetic involving such signed numbers is known generally as two's complement arithmetic.

To check the validity of this process, let's take the two's complement of 0. We should logically get a result of 0. So we start with 0000 and form the one's complement (1111). Now add 1 to the result (10000). But this won't fit in a 4-bit number, so the extra 1 bit is lost, leaving a result of 0000. Sure enough, -0 = 0, as it should be. Remember to discard the carry from the highest-order bit; two's complement arithmetic always works this way. (Note: it is not possible to represent +8 as a 4-bit signed number, so it is not possible to correctly take the two's complement of -8; it simply comes back as -8.)
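The two's-complement rule just described can be sketched in a few illustrative lines of Python, restricted here to 4-bit numbers:

```python
def twos_complement_4bit(value):
    # Complement all 4 bits (one's complement), add 1, and keep only 4 bits.
    ones_complement = value ^ 0b1111
    return (ones_complement + 1) & 0b1111

print(bin(twos_complement_4bit(0b0001)))   # 0b1111, i.e. -1
print(bin(twos_complement_4bit(0b0000)))   # 0b0:    -0 = 0, carry discarded
print(bin(twos_complement_4bit(0b1000)))   # 0b1000: -8 comes back as -8
```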
Now that we have an easy way to obtain the negative of any number, we can convert our original 4-bit adder circuit to an adder/subtractor. By leaving the inputs unchanged, we get the result of A + B. But if we invert B and add 1 with the low-order Cin, we get the result of A - B.
We can use Exclusive-OR gates to control whether we will add or subtract on any given occasion. With a control input of 0, the XOR gates will leave the B input number unchanged, and will also apply a logic 0 as the initial input carry. This is exactly what we want in order to add the two numbers. However, if we apply a logic 1 to the control input, the XOR gates will invert the B input number to form its one's complement, and will also add 1 through the initial input carry. This changes B to its two's complement, so the output result will actually be A - B. (Note that in two's complement addition, the output carry is ignored. You can also think of it as an inverted "borrow" bit rather than as a carry, so that a carry of 1 corresponds to a borrow of 0. The same logic holds for the input carry, which likewise represents an input borrow of 0.)
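An illustrative sketch of that control scheme, reusing the ripple-adder idea from above (bit lists are least-significant bit first; the names are my own):

```python
def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

def add_sub(a_bits, b_bits, subtract):
    # subtract = 0: plain addition.  subtract = 1: each B bit is XORed with 1
    # (one's complement) and the initial carry-in is 1, forming -B overall.
    carry = subtract
    result = []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b ^ subtract, carry)
        result.append(s)
    return result        # the final carry is ignored in two's complement work

# 5 - 3 = 2:  0101 - 0011 -> 0010 (bits listed LSB first below)
print(add_sub([1, 0, 1, 0], [1, 1, 0, 0], subtract=1))
```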
When we add or subtract signed numbers, we need to introduce a new concept: overflow. Overflow occurs when the result has the wrong sign bit for the operation that was performed. For example, if we add two positive numbers (7 and 6), we should get a positive result (13). However, using 4-bit binary numbers, we would add 0111 to 0110 and get 1101 as the result. In signed notation, this is a result of -3, not +13. Therefore, an overflow has occurred, where the result would have to have more bits than the original two numbers.
This is not as much of a problem as you might think. An 8-bit number can have signed values in the range -128 to +127. A 16-bit signed number may hold any value from -32,768 to +32,767. These ranges are sufficient for most practical applications. Where they are not, modern computers can easily use 32-bit numbers (±2.14 × 10⁹) or 64-bit numbers (±9.22 × 10¹⁸) for the purpose.
If we add a positive number to a negative number, overflow cannot occur. Likewise, if we are subtracting two numbers of the same sign, overflow is impossible. But if we add like-signed numbers or subtract unlike-signed numbers, we must be aware of the possibility of overflow, and recognize when it occurs.
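One common way to detect this condition, sketched here for 4-bit two's-complement addition (an illustrative helper, not code from any particular processor):

```python
def add_4bit_with_overflow(a, b):
    # a and b are signed values in the range -8..+7.
    raw = (a + b) & 0b1111                       # keep only 4 result bits
    result = raw - 16 if raw & 0b1000 else raw   # reinterpret as a signed value
    # Overflow: both operands share a sign, but the result's sign differs.
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

print(add_4bit_with_overflow(7, 6))    # (-3, True)  -- the example above
print(add_4bit_with_overflow(7, -6))   # (1, False)  -- unlike signs: no overflow
```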
Modern microprocessors are designed to recognize and report when overflow occurs in any arithmetic operation.
The Two-Input Multiplexer |
---|
One circuit I've received a number of requests for is the multiplexer circuit. This is a digital circuit with multiple signal inputs, one of which is selected by separate address inputs to be sent to the single output. It's not easy to describe without the logic diagram, but is easy to understand when the diagram is available.
A two-input multiplexer routes one of its two data inputs, A or B, to the single output, according to the state of one address input.
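Here is a minimal gate-level sketch of that selection behavior in Python, using the usual AND-OR arrangement (the names are illustrative):

```python
def NOT(a): return int(not a)
def AND(a, b): return a & b
def OR(a, b): return a | b

def mux2(a, b, address):
    # address = 0 routes input A to the output; address = 1 routes input B.
    return OR(AND(a, NOT(address)), AND(b, address))

print(mux2(a=0, b=1, address=0))   # 0 -- input A selected
print(mux2(a=0, b=1, address=1))   # 1 -- input B selected
```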
A very common application for this type of circuit is found in computers, where dynamic memory uses the same address lines for both row and column addressing. A set of multiplexers is used to first select the row address to the memory, then switch to the column address. This scheme allows large amounts of memory to be incorporated into the computer while limiting the number of copper traces required to connect that memory to the rest of the computer circuitry. In such an application, this circuit is commonly called a data selector.
Multiplexers are not limited to two data inputs. If we use two addressing inputs, we can multiplex up to four data signals; with three addressing inputs, we can multiplex eight signals. In general, n address inputs can select any one of 2^n data inputs.
The 1-to-2 Line Decoder/Demultiplexer |
---|
The opposite of the multiplexer circuit, logically enough, is the demultiplexer. This circuit takes a single data input and one or more address inputs, and selects which of multiple outputs will receive the input signal. The same circuit can also be used as a decoder, by using the address inputs as a binary number and producing an output signal on the single output that matches the binary address input. In this application, the data input line functions as a circuit enabler — if the circuit is disabled, no output will show activity regardless of the binary input number.
A one-line to two-line decoder/demultiplexer routes its single data input to whichever of its two outputs is selected by the single address input.
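An illustrative Python sketch of this behavior (names are my own):

```python
def NOT(a): return int(not a)
def AND(a, b): return a & b

def demux_1_to_2(data, address):
    # The data input is routed to output 0 or output 1 by the address bit.
    out0 = AND(data, NOT(address))
    out1 = AND(data, address)
    return out0, out1

print(demux_1_to_2(data=1, address=0))   # (1, 0) -- data appears on output 0
print(demux_1_to_2(data=1, address=1))   # (0, 1) -- data appears on output 1
print(demux_1_to_2(data=0, address=1))   # (0, 0) -- circuit disabled, no activity
```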
Like multiplexers, demultiplexers are not limited to two outputs. With two addressing inputs, the single data input can be routed to any one of four outputs; with three addressing inputs, to any one of eight.
Boolean Algebra |
---|
One of the primary requirements when dealing with digital circuits is to find ways to make them as simple as possible. This constantly requires that complex logical expressions be reduced to simpler expressions that nevertheless produce the same results under all possible conditions. The simpler expression can then be implemented with a smaller, simpler circuit, which in turn reduces the number of gates needed, and with it the cost, the power consumption, and the board space those gates require.
One tool for reducing logical expressions is the mathematics of logical expressions, introduced by George Boole in 1854 and known today as Boolean Algebra. The rules of Boolean Algebra are simple and straightforward, and can be applied to any logical expression. The resulting reduced expression can then be readily tested with a truth table to verify that the reduction was valid.
The rules of Boolean Algebra are:
- AND Operations (·)
  - 0·0 = 0    A·0 = 0
  - 1·0 = 0    A·1 = A
  - 0·1 = 0    A·A = A
  - 1·1 = 1    A·A' = 0
- OR Operations (+)
  - 0+0 = 0    A+0 = A
  - 1+0 = 1    A+1 = 1
  - 0+1 = 1    A+A = A
  - 1+1 = 1    A+A' = 1
- NOT Operations (')
  - 0' = 1     A'' = A
  - 1' = 0
- Associative Law
  - (A·B)·C = A·(B·C) = A·B·C
  - (A+B)+C = A+(B+C) = A+B+C
- Distributive Law
  - A·(B+C) = (A·B) + (A·C)
  - A+(B·C) = (A+B) · (A+C)
- Commutative Law
  - A·B = B·A
  - A+B = B+A
- Precedence
  - AB = A·B
  - A·B+C = (A·B) + C
  - A+B·C = A + (B·C)
- DeMorgan's Theorem
  - (A·B)' = A' + B' (NAND)
  - (A+B)' = A' · B' (NOR)
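As an illustrative example of the kind of truth-table verification described above, this short Python loop confirms DeMorgan's Theorem for every combination of A and B:

```python
def NOT(a): return 1 - a

for A in (0, 1):
    for B in (0, 1):
        # (A.B)' = A' + B'   and   (A+B)' = A'.B'
        assert NOT(A & B) == (NOT(A) | NOT(B))
        assert NOT(A | B) == (NOT(A) & NOT(B))
print("DeMorgan's Theorem holds for all input combinations.")
```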