With simple gate and combinational logic circuits, there is a definite output state for any given input state. Take the truth table of an OR gate, for instance: For each of the four possible combinations of input states (0-0, 0-1, 1-0, and 1-1), there is one, definite, unambiguous output state. Whether we’re dealing with a multitude of cascaded gates or a single gate, that output state is determined by the truth table(s) for the gate(s) in the circuit, and nothing else. However, if we alter this gate circuit so as to give signal feedback from the output to one of the inputs, strange things begin to happen: We know that if A is 1, the output must be 1 as well. Such is the nature of an OR gate: any “high” (1) input forces the output “high” (1). If A is “low” (0), however, we cannot guarantee the logic level or state of the output in our truth table. Since the output feeds back to one of the OR gate’s inputs, and we know that any 1 input to an OR gate makes the output 1, this circuit will “latch” in the 1 output state after any time that A has been 1. When A is 0, the output could be either 0 or 1, depending on the circuit’s prior state! The proper way to complete the above truth table would be to insert the word latch in place of the question mark, showing that the output maintains its last state when A is 0. Any digital circuit employing feedback is called a multivibrator. The example we just explored with the OR gate was a very simple example of what is called a bistable multivibrator. It is called “bistable” because it can hold stable in one of two possible output states, either 0 or 1. There are also monostable multivibrators, which have only one stable output state (that other state being momentary), which we’ll explore later; and astable multivibrators, which have no stable state (oscillating back and forth between an output of 0 and 1).
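The “latch” behavior just described can be sketched in a few lines of Python (a hypothetical model of an ideal gate, not part of the original circuit analysis), treating one evaluation of the OR gate as a function of input A and the fed-back output:

```python
# Sketch of an OR gate whose output feeds back to one of its own inputs.
def or_feedback(a, prior_output):
    """One evaluation of the OR gate, with input B tied to the old output."""
    return 1 if (a or prior_output) else 0

out = 0                    # assume the circuit powers up with a 0 output
out = or_feedback(0, out)  # A=0 and no fed-back 1: output stays 0
assert out == 0
out = or_feedback(1, out)  # any "high" input forces the output high
assert out == 1
out = or_feedback(0, out)  # A returns to 0, but the fed-back 1 "latches"
assert out == 1
```

Once a 1 has appeared at A, no value of A alone can force the output back to 0, which is exactly why the truth table entry for A=0 must read latch.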
A very simple astable multivibrator is an inverter with the output fed directly back to the input: When the input is 0, the output switches to 1. That 1 output gets fed back to the input as a 1. When the input is 1, the output switches to 0. That 0 output gets fed back to the input as a 0, and the cycle repeats itself. The result is a high frequency (several megahertz) oscillator, if implemented with a solid-state (semiconductor) inverter gate: If implemented with relay logic, the resulting oscillator will be considerably slower, cycling at a frequency well within the audio range. The buzzer or vibrator circuit thus formed was used extensively in early radio circuitry, as a way to convert steady, low-voltage DC power into pulsating DC power which could then be stepped up in voltage through a transformer to produce the high voltage necessary for operating the vacuum tube amplifiers. Henry Ford’s engineers also employed the buzzer/transformer circuit to create continuous high voltage for operating the spark plugs on Model T automobile engines: Borrowing terminology from the old mechanical buzzer (vibrator) circuits, solid-state circuit engineers referred to any circuit with two or more vibrators linked together as a multivibrator. The astable multivibrator mentioned previously, with only one “vibrator,” is more commonly implemented with multiple gates, as we’ll see later. The most interesting and widely used multivibrators are of the bistable variety, so we’ll explore them in detail now.
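The astable behavior of the fed-back inverter can be modeled the same way (again a sketch, assuming an ideal gate evaluated once per propagation delay):

```python
def inverter(x):
    return 0 if x else 1   # ideal logic inverter

# Feed the output straight back to the input and step the gate repeatedly:
x = 0
history = []
for _ in range(6):
    x = inverter(x)        # each propagation delay flips the signal
    history.append(x)
assert history == [1, 0, 1, 0, 1, 0]   # no stable state: it oscillates
```

In a real semiconductor gate each flip takes only nanoseconds, which is why the resulting oscillation runs at several megahertz.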
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/10%3A_Multivibrators/10.01%3A_Digital_Logic_With_Feedback.txt
A bistable multivibrator has two stable states, as indicated by the prefix bi in its name. Typically, one state is referred to as set and the other as reset. The simplest bistable device, therefore, is known as a set-reset, or S-R, latch. To create an S-R latch, we can wire two NOR gates in such a way that the output of one feeds back to the input of another, and vice versa, like this: The Q and not-Q outputs are supposed to be in opposite states. I say “supposed to” because making both the S and R inputs equal to 1 results in both Q and not-Q being 0. For this reason, having both S and R equal to 1 is called an invalid or illegal state for the S-R multivibrator. Otherwise, making S=1 and R=0 “sets” the multivibrator so that Q=1 and not-Q=0. Conversely, making R=1 and S=0 “resets” the multivibrator in the opposite state. When S and R are both equal to 0, the multivibrator’s outputs “latch” in their prior states. Note how the same multivibrator function can be implemented in ladder logic, with the same results: By definition, a condition of Q=1 and not-Q=0 is set. A condition of Q=0 and not-Q=1 is reset. These terms are universal in describing the output states of any multivibrator circuit. The astute observer will note that the initial power-up condition of either the gate or ladder variety of S-R latch is such that both gates (coils) start in the de-energized mode. As such, one would expect that the circuit will start up in an invalid condition, with both Q and not-Q outputs being in the same state. Actually, this is true! However, the invalid condition is unstable with both S and R inputs inactive, and the circuit will quickly stabilize in either the set or reset condition because one gate (or relay) is bound to react a little faster than the other. If both gates (or coils) were precisely identical, they would oscillate between high and low like an astable multivibrator upon power-up without ever reaching a point of stability! 
Fortunately, such a precise match of components is highly unlikely. It must be noted that although an astable (continually oscillating) condition would be extremely rare, there will most likely be a cycle or two of oscillation in the above circuit, and the final state of the circuit (set or reset) after power-up would be unpredictable. The root of the problem is a race condition between the two relays CR1 and CR2. A race condition occurs when two mutually-exclusive events are simultaneously initiated through different circuit elements by a single cause. In this case, the circuit elements are relays CR1 and CR2, and their de-energized states are mutually exclusive due to the normally-closed interlocking contacts. If one relay coil is de-energized, its normally-closed contact will keep the other coil energized, thus maintaining the circuit in one of two states (set or reset). Interlocking prevents both relays from latching. However, if both relay coils start in their de-energized states (such as after the whole circuit has been de-energized and is then powered up) both relays will “race” to become latched on as they receive power (the “single cause”) through the normally-closed contact of the other relay. One of those relays will inevitably reach that condition before the other, thus opening its normally-closed interlocking contact and de-energizing the other relay coil. Which relay “wins” this race depends on the physical characteristics of the relays and not the circuit design, so the designer cannot ensure which state the circuit will fall into after power-up. Race conditions should be avoided in circuit design primarily because of the unpredictability they create. One way to avoid such a condition is to insert a time-delay relay into the circuit to disable one of the competing relays for a short time, giving the other one a clear advantage.
In other words, by purposely slowing down the de-energization of one relay, we ensure that the other relay will always “win” and the race results will always be predictable. Here is an example of how a time-delay relay might be applied to the above circuit to avoid the race condition: When the circuit powers up, time-delay relay contact TD1 in the fifth rung down will delay closing for 1 second. Having that contact open for 1 second prevents relay CR2 from energizing through contact CR1 in its normally-closed state after power-up. Therefore, relay CR1 will be allowed to energize first (with a 1-second head start), thus opening the normally-closed CR1 contact in the fifth rung, preventing CR2 from being energized without the S input going active. The end result is that the circuit powers up cleanly and predictably in the reset state with S=0 and R=0. It should be mentioned that race conditions are not restricted to relay circuits. Solid-state logic gate circuits may also suffer from the ill effects of race conditions if improperly designed. Complex computer programs, for that matter, may also incur race problems if improperly designed. Race problems are a possibility for any sequential system, and may not be discovered until some time after initial testing of the system. They can be very difficult problems to detect and eliminate. A practical application of an S-R latch circuit might be for starting and stopping a motor, using normally-open, momentary pushbutton switch contacts for both start (S) and stop (R) switches, then energizing a motor contactor with either a CR1 or CR2 contact (or using a contactor in place of CR1 or CR2). Normally, a much simpler ladder logic circuit is employed, such as this: In the above motor start/stop circuit, the CR1 contact in parallel with the start switch contact is referred to as a “seal-in” contact, because it “seals” or latches control relay CR1 in the energized state after the start switch has been released.
To break the “seal,” or to “unlatch” or “reset” the circuit, the stop pushbutton is pressed, which de-energizes CR1 and restores the seal-in contact to its normally open status. Notice, however, that this circuit performs much the same function as the S-R latch. Also, note that this circuit has no inherent instability problem (however remote the possibility) as does the double-relay S-R latch design. In semiconductor form, S-R latches come in prepackaged units so that you don’t have to build them from individual gates. They are symbolized as such:

Review

• A bistable multivibrator is one with two stable output states.
• In a bistable multivibrator, the condition of Q=1 and not-Q=0 is defined as set. A condition of Q=0 and not-Q=1 is conversely defined as reset. If Q and not-Q happen to be forced to the same state (both 0 or both 1), that state is referred to as invalid.
• In an S-R latch, activation of the S input sets the circuit, while activation of the R input resets the circuit. If both S and R inputs are activated simultaneously, the circuit will be in an invalid condition.
• A race condition is a state in a sequential system where two mutually-exclusive events are simultaneously initiated by a single cause.
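As a recap of the S-R latch itself, here is a small Python model (a simplified sketch assuming ideal NOR gates, evaluated repeatedly until the outputs settle):

```python
def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, nq):
    """Cross-coupled NOR pair: evaluate a few times until the outputs settle."""
    for _ in range(4):
        q, nq = nor(r, nq), nor(s, q)
    return q, nq

q, nq = sr_latch(s=1, r=0, q=0, nq=1)   # S=1, R=0: "sets" the latch
assert (q, nq) == (1, 0)
q, nq = sr_latch(s=0, r=0, q=q, nq=nq)  # S=R=0: outputs latch in prior state
assert (q, nq) == (1, 0)
q, nq = sr_latch(s=0, r=1, q=q, nq=nq)  # R=1, S=0: "resets" the latch
assert (q, nq) == (0, 1)
q, nq = sr_latch(s=1, r=1, q=q, nq=nq)  # S=R=1: invalid, both outputs forced to 0
assert (q, nq) == (0, 0)
```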
It is sometimes useful in logic circuits to have a multivibrator which changes state only when certain conditions are met, regardless of its S and R input states. The conditional input is called the enable, and is symbolized by the letter E. Study the following example to see how this works: When E=0, the outputs of the two AND gates are forced to 0, regardless of the states of either S or R. Consequently, the circuit behaves as though S and R were both 0, latching the Q and not-Q outputs in their last states. Only when the enable input is activated (1) will the latch respond to the S and R inputs. Note the identical function in ladder logic: A practical application of this might be the same motor control circuit (with two normally-open push button switches for start and stop), except with the addition of a master lockout input (E) that disables both push buttons from having control over the motor when it is low (0). Once again, these multivibrator circuits are available as prepackaged semiconductor devices, and are symbolized as such: It is also common to see the enable input designated by the letters “EN” instead of just “E.”

Review

• The enable input on a multivibrator must be activated for either S or R inputs to have any effect on the output state.
• This enable input is sometimes labeled “E”, and other times as “EN”.

10.04: The D Latch

Since the enable input on a gated S-R latch provides a way to latch the Q and not-Q outputs without regard to the status of S or R, we can eliminate one of those inputs to create a multivibrator latch circuit with no “illegal” input states. Such a circuit is called a D latch, and its internal logic looks like this: Note that the R input has been replaced with the complement (inversion) of the old S input, and the S input has been renamed to D. As with the gated S-R latch, the D latch will not respond to a signal input if the enable input is 0—it simply stays latched in its last state.
When the enable input is 1, however, the Q output follows the D input. Since the R input of the S-R circuitry has been done away with, this latch has no “invalid” or “illegal” state. Q and not-Q are always opposite of one another. If the above diagram is confusing at all, the next diagram should make the concept simpler: Like both the S-R and gated S-R latches, the D latch circuit may be found as its own prepackaged circuit, complete with a standard symbol: The D latch is nothing more than a gated S-R latch with an inverter added to make R the complement (inverse) of S. Let’s explore the ladder logic equivalent of a D latch, modified from the basic ladder diagram of an S-R latch: An application for the D latch is a 1-bit memory circuit. You can “write” (store) a 0 or 1 bit in this latch circuit by making the enable input high (1) and setting D to whatever you want the stored bit to be. When the enable input is made low (0), the latch ignores the status of the D input and merrily holds the stored bit value, outputting the stored value at Q and its inverse at not-Q.

Review

• A D latch is like an S-R latch with only one input: the “D” input. Activating the D input sets the circuit, and de-activating the D input resets the circuit. Of course, this is only if the enable input (E) is activated as well. Otherwise, the output(s) will be latched, unresponsive to the state of the D input.
• D latches can be used as 1-bit memory circuits, storing either a “high” or a “low” state when disabled, and “reading” new data from the D input when enabled.
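The 1-bit memory behavior can be sketched in Python (a simplified model of the internal logic described above, with S = D and R = NOT D gated by the enable, and the cross-coupled pair evaluated until it settles):

```python
def d_latch(d, e, q, nq):
    """Gated S-R latch with S = D and R = NOT D; no invalid input state."""
    s, r = d & e, (1 - d) & e
    for _ in range(4):                 # evaluate until the NOR pair settles
        q, nq = (0 if (r or nq) else 1), (0 if (s or q) else 1)
    return q, nq

q, nq = d_latch(d=1, e=1, q=0, nq=1)   # "write" a 1
assert (q, nq) == (1, 0)
q, nq = d_latch(d=0, e=0, q=q, nq=nq)  # disabled: the stored bit is held
assert (q, nq) == (1, 0)
q, nq = d_latch(d=0, e=1, q=q, nq=nq)  # "write" a 0
assert (q, nq) == (0, 1)
```

Note that with E=1 the Q output simply follows D, and with E=0 the D input is ignored entirely.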
So far, we’ve studied both S-R and D latch circuits with enable inputs. The latch responds to the data inputs (S-R or D) only when the enable input is activated. In many digital applications, however, it is desirable to limit the responsiveness of a latch circuit to a very short period of time instead of the entire duration that the enabling input is activated. One method of enabling a multivibrator circuit is called edge triggering, where the circuit’s data inputs have control only during the time that the enable input is transitioning from one state to another. Let’s compare timing diagrams for a normal D latch versus one that is edge-triggered: In the first timing diagram, the outputs respond to input D whenever the enable (E) input is high, for however long it remains high. When the enable signal falls back to a low state, the circuit remains latched. In the second timing diagram, we note a distinctly different response in the circuit output(s): it only responds to the D input during that brief moment of time when the enable signal changes, or transitions, from low to high. This is known as positive edge-triggering. There is such a thing as negative edge-triggering as well, and it produces the following response to the same input signals: Whenever we enable a multivibrator circuit on the transitional edge of a square-wave enable signal, we call it a flip-flop instead of a latch. Consequently, an edge-triggered S-R circuit is more properly known as an S-R flip-flop, and an edge-triggered D circuit as a D flip-flop. The enable signal is renamed to be the clock signal. Also, we refer to the data inputs (S, R, and D, respectively) of these flip-flops as synchronous inputs, because they have effect only at the time of the clock pulse edge (transition), thereby synchronizing any output changes with that clock pulse, rather than at the whim of the data inputs. But, how do we actually accomplish this edge-triggering?
To create a “gated” S-R latch from a regular S-R latch is easy enough with a couple of AND gates, but how do we implement logic that only pays attention to the rising or falling edge of a changing digital signal? What we need is a digital circuit that outputs a brief pulse whenever the input is activated for an arbitrary period of time, and we can use the output of this circuit to briefly enable the latch. We’re getting a little ahead of ourselves here, but this is actually a kind of monostable multivibrator, which for now we’ll call a pulse detector. The duration of each output pulse is set by components in the pulse circuit itself. In ladder logic, this can be accomplished quite easily through the use of a time-delay relay with a very short delay time: Implementing this timing function with semiconductor components is actually quite easy, as it exploits the inherent time delay within every logic gate (known as propagation delay). What we do is take an input signal and split it up two ways, then place a gate or a series of gates in one of those signal paths just to delay it a bit, then have both the original signal and its delayed counterpart enter into a two-input gate that outputs a high signal for the brief moment of time that the delayed signal has not yet caught up to the low-to-high change in the non-delayed signal. An example circuit for producing a clock pulse on a low-to-high input signal transition is shown here: This circuit may be converted into a negative-edge pulse detector circuit with only a change of the final gate from AND to NOR: Now that we know how a pulse detector can be made, we can show it attached to the enable input of a latch to turn it into a flip-flop. In this case, the circuit is a S-R flip-flop: Only when the clock signal (C) is transitioning from low to high is the circuit responsive to the S and R inputs. For any other condition of the clock signal (“x”) the circuit will be latched. 
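The propagation-delay trick can be illustrated with a simple sampled-time model in Python (a sketch, not a gate-level simulation: the delay through the inverting gate chain is represented as a fixed number of sample periods):

```python
def pulse_detector(signal, delay=3):
    """Positive-edge pulse detector: AND the input with a delayed, inverted
    copy of itself.  The output is high only during the brief window after a
    low-to-high transition, before the delayed copy catches up."""
    out = []
    for i, x in enumerate(signal):
        delayed = signal[i - delay] if i >= delay else 0
        inverted_delayed = 1 - delayed
        out.append(x & inverted_delayed)
    return out

sig = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
# The output pulse lasts exactly `delay` samples after the rising edge:
assert pulse_detector(sig) == [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
```

However long the input stays high, the output pulse width is fixed by the delay path, which is the whole point of the circuit.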
A ladder logic version of the S-R flip-flop is shown here: Relay contact CR3 in the ladder diagram takes the place of the old E contact in the S-R latch circuit and is closed only during the short time that both C is closed and time-delay contact TR1 is closed. In either case (gate or ladder circuit), we see that the inputs S and R have no effect unless C is transitioning from a low (0) to a high (1) state. Otherwise, the flip-flop’s outputs latch in their previous states. It is important to note that the invalid state for the S-R flip-flop is maintained only for the short period of time that the pulse detector circuit allows the latch to be enabled. After that brief time period has elapsed, the outputs will latch into either the set or the reset state. Once again, the problem of a race condition manifests itself. With no enable signal, an invalid output state cannot be maintained. However, the valid “latched” states of the multivibrator—set and reset—are mutually exclusive to one another. Therefore, the two gates of the multivibrator circuit will “race” each other for supremacy, and whichever one attains a high output state first will “win.” The block symbols for flip-flops are slightly different from that of their respective latch counterparts: The triangle symbol next to the clock inputs tells us that these are edge-triggered devices, and consequently that these are flip-flops rather than latches. The symbols above are positive edge-triggered: that is, they “clock” on the rising edge (low-to-high transition) of the clock signal. Negative edge-triggered devices are symbolized with a bubble on the clock input line: Both of the above flip-flops will “clock” on the falling edge (high-to-low transition) of the clock signal.

Review

• A flip-flop is a latch circuit with a “pulse detector” circuit connected to the enable (E) input, so that it is enabled only for a brief moment on either the rising or falling edge of a clock pulse.
• Pulse detector circuits may be made from time-delay relays for ladder logic applications, or from semiconductor gates (exploiting the phenomenon of propagation delay).
Another variation on a theme of bistable multivibrators is the J-K flip-flop. Essentially, this is a modified version of an S-R flip-flop with no “invalid” or “illegal” output state. Look closely at the following diagram to see how this is accomplished:

The J and K Inputs

What used to be the S and R inputs are now called the J and K inputs, respectively. The old two-input AND gates have been replaced with 3-input AND gates, and the third input of each gate receives feedback from the Q and not-Q outputs. What this does for us is permit the J input to have effect only when the circuit is reset, and permit the K input to have effect only when the circuit is set. In other words, the two inputs are interlocked, to use a relay logic term, so that they cannot both be activated simultaneously. If the circuit is “set,” the J input is inhibited by the 0 status of not-Q through the lower AND gate; if the circuit is “reset,” the K input is inhibited by the 0 status of Q through the upper AND gate. When both J and K inputs are 1, however, something unique happens. Because of the selective inhibiting action of those 3-input AND gates, a “set” state inhibits input J so that the flip-flop acts as if J=0 while K=1 when in fact both are 1. On the next clock pulse, the outputs will switch (“toggle”) from set (Q=1 and not-Q=0) to reset (Q=0 and not-Q=1). Conversely, a “reset” state inhibits input K so that the flip-flop acts as if J=1 and K=0 when in fact both are 1. The next clock pulse toggles the circuit again from reset to set.

Logical Sequence of J-K Flip-Flop

See if you can follow this logical sequence with the ladder logic equivalent of the J-K flip-flop: The end result is that the S-R flip-flop’s “invalid” state is eliminated (along with the race condition it engendered) and we get a useful feature as a bonus: the ability to toggle between the two (bistable) output states with every transition of the clock input signal.
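The feedback gating can be summarized in a behavioral Python sketch (each call stands for one clock pulse; this models the logic described above, not the gate-level timing):

```python
def jk_flipflop(j, k, q, nq):
    """One clock pulse of a J-K flip-flop.  The feedback gating forms
    S = J AND not-Q and R = K AND Q, so S and R can never both be active."""
    s, r = j & nq, k & q
    if s:
        return 1, 0       # set
    if r:
        return 0, 1       # reset
    return q, nq          # hold

q, nq = 0, 1              # start in the reset state
q, nq = jk_flipflop(1, 0, q, nq)   # J=1: set
assert (q, nq) == (1, 0)
q, nq = jk_flipflop(1, 1, q, nq)   # J=K=1: toggle to reset
assert (q, nq) == (0, 1)
q, nq = jk_flipflop(1, 1, q, nq)   # J=K=1 again: toggle back to set
assert (q, nq) == (1, 0)
q, nq = jk_flipflop(0, 0, q, nq)   # J=K=0: hold
assert (q, nq) == (1, 0)
```

Because Q and not-Q are always opposite, S and R computed this way can never both be 1, which is how the invalid state disappears.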
There is no such thing as a J-K latch, only J-K flip-flops. Without the edge-triggering of the clock input, the circuit would continuously toggle between its two output states when both J and K were held high (1), making it an astable device instead of a bistable device in that circumstance. If we want to preserve bistable operation for all combinations of input states, we must use edge-triggering so that it toggles only when we tell it to, one step (clock pulse) at a time.

The Block Symbol for J-K Flip-Flops

The block symbol for a J-K flip-flop is a whole lot less frightening than its internal circuitry, and just like the S-R and D flip-flops, J-K flip-flops come in two clock varieties (negative and positive edge-triggered):

Review

• A J-K flip-flop is nothing more than an S-R flip-flop with an added layer of feedback. This feedback selectively enables one of the two set/reset inputs so that they cannot both carry an active signal to the multivibrator circuit, thus eliminating the invalid condition.
• When both J and K inputs are activated, and the clock input is pulsed, the outputs (Q and not-Q) will swap states. That is, the circuit will toggle from a set state to a reset state or vice versa.

10.07: Asynchronous Flip-Flop Inputs

The normal data inputs to a flip-flop (D, S and R, or J and K) are referred to as synchronous inputs because they have effect on the outputs (Q and not-Q) only in step, or in sync, with the clock signal transitions. These extra inputs that I now bring to your attention are called asynchronous because they can set or reset the flip-flop regardless of the status of the clock signal. Typically, they’re called preset and clear: When the preset input is activated, the flip-flop will be set (Q=1, not-Q=0) regardless of any of the synchronous inputs or the clock. When the clear input is activated, the flip-flop will be reset (Q=0, not-Q=1), regardless of any of the synchronous inputs or the clock.
So, what happens if both preset and clear inputs are activated? Surprise, surprise: we get an invalid state on the output, where Q and not-Q go to the same state, the same as our old friend, the S-R latch! Preset and clear inputs find use when multiple flip-flops are ganged together to perform a function on a multi-bit binary word, and a single line is needed to set or reset them all at once. Asynchronous inputs, just like synchronous inputs, can be engineered to be active-high or active-low. If they’re active-low, there will be an inverting bubble at that input lead on the block symbol, just like the negative edge-trigger clock inputs. Sometimes the designations “PRE” and “CLR” will be shown with inversion bars above them, to further denote the negative logic of these inputs:

Review

• Asynchronous inputs on a flip-flop have control over the outputs (Q and not-Q) regardless of clock input status.
• These inputs are called the preset (PRE) and clear (CLR). The preset input drives the flip-flop to a set state while the clear input drives it to a reset state.
• It is possible to drive the outputs of a J-K flip-flop to an invalid condition using the asynchronous inputs, because all feedback within the multivibrator circuit is overridden.
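The override priority can be shown in a behavioral Python sketch (our own simplified model: active-high preset and clear are assumed for readability, though real packaged parts are frequently active-low, and the invalid state is shown here with both outputs at 1):

```python
def jk_update(j, k, clock_edge, preset, clear, q, nq):
    """Asynchronous preset/clear take priority over the synchronous inputs."""
    if preset and clear:
        return 1, 1                    # invalid: Q and not-Q forced to agree
    if preset:
        return 1, 0                    # set, regardless of J, K, or clock
    if clear:
        return 0, 1                    # reset, regardless of J, K, or clock
    if not clock_edge:
        return q, nq                   # no clock edge: hold
    s, r = j & nq, k & q               # the usual J-K feedback gating
    if s:
        return 1, 0
    if r:
        return 0, 1
    return q, nq

q, nq = jk_update(0, 0, clock_edge=0, preset=1, clear=0, q=0, nq=1)
assert (q, nq) == (1, 0)               # preset sets with no clock pulse at all
q, nq = jk_update(0, 0, clock_edge=0, preset=0, clear=1, q=q, nq=nq)
assert (q, nq) == (0, 1)               # clear resets the same way
q, nq = jk_update(1, 1, clock_edge=1, preset=0, clear=0, q=q, nq=nq)
assert (q, nq) == (1, 0)               # with both released, J-K toggles normally
```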
We’ve already seen one example of a monostable multivibrator in use: the pulse detector used within the circuitry of flip-flops, to enable the latch portion for a brief time when the clock input signal transitions from either low to high or high to low. The pulse detector is classified as a monostable multivibrator because it has only one stable state. By stable, I mean a state of output where the device is able to latch or hold indefinitely, without external prodding. A latch or flip-flop, being a bistable device, can hold in either the “set” or “reset” state for an indefinite period of time. Once it is set or reset, it will continue to latch in that state unless prompted to change by an external input. A monostable device, on the other hand, is only able to hold in one particular state indefinitely. Its other state can only be held momentarily when triggered by an external input. A mechanical analogy of a monostable device would be a momentary contact pushbutton switch, which spring-returns to its normal (stable) position when pressure is removed from its button actuator. Likewise, a standard wall (toggle) switch, such as the type used to turn lights on and off in a house, is a bistable device. It can latch in one of two modes: on or off. All monostable multivibrators are timed devices. That is, their unstable output state will hold only for a certain minimum amount of time before returning to its stable state. With semiconductor monostable circuits, this timing function is typically accomplished through the use of resistors and capacitors, making use of the exponential charging rates of RC circuits. A comparator is often used to compare the voltage across the charging (or discharging) capacitor with a steady reference voltage, and the on/off output of the comparator used for a logic signal.
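For instance, the pulse duration set by an RC network and a comparator follows from the exponential charging equation; the component values below are made up purely for illustration:

```python
import math

# A capacitor charging toward the supply voltage Vs crosses the comparator
# threshold Vth after t = R*C*ln(Vs / (Vs - Vth)).
R = 10e3        # ohms (illustrative value)
C = 100e-9      # farads (illustrative value)
Vs = 5.0        # supply voltage
Vth = 2.5       # comparator reference: half the supply

t_pulse = R * C * math.log(Vs / (Vs - Vth))
# With Vth at half the supply, this reduces to t = R*C*ln(2):
assert abs(t_pulse - R * C * math.log(2)) < 1e-12
assert 0.0006 < t_pulse < 0.0008   # about 0.69 milliseconds here
```

Changing either R or C scales the pulse time proportionally, which is how the "timed" behavior of a monostable circuit is adjusted.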
With ladder logic, time delays are accomplished with time-delay relays, which can be constructed with semiconductor/RC circuits like that just mentioned, or mechanical delay devices which impede the immediate motion of the relay’s armature. Note the design and operation of the pulse detector circuit in ladder logic: No matter how long the input signal stays high (1), the output remains high for just 1 second of time, then returns to its normal (stable) low state. For some applications, it is necessary to have a monostable device that outputs a longer pulse than the input pulse which triggers it. Consider the following ladder logic circuit: When the input contact closes, TD1 contact immediately closes, and stays closed for 10 seconds after the input contact opens. No matter how short the input pulse is, the output stays high (1) for exactly 10 seconds after the input drops low again. This kind of monostable multivibrator is called a one-shot. More specifically, it is a retriggerable one-shot, because the timing begins after the input drops to a low state, meaning that multiple input pulses within 10 seconds of each other will maintain a continuous high output: One application for a retriggerable one-shot is that of a single mechanical contact debouncer. As you can see from the above timing diagram, the output will remain high despite “bouncing” of the input signal from a mechanical switch. Of course, in a real-life switch debouncer circuit, you’d probably want to use a time delay of much shorter duration than 10 seconds, as you only need to “debounce” pulses that are in the millisecond range. What if we only wanted a 10 second timed pulse output from a relay logic circuit, regardless of how many input pulses we received or how long-lived they may be? 
In that case, we’d have to couple a pulse-detector circuit to the retriggerable one-shot time delay circuit, like this: Time delay relay TD1 provides an “on” pulse to time delay relay coil TD2 for an arbitrarily short moment (in this circuit, for at least 0.5 second each time the input contact is actuated). As soon as TD2 is energized, the normally-closed, timed-closed TD2 contact in series with it prevents coil TD2 from being re-energized as long as it is timing out (10 seconds). This effectively makes it unresponsive to any more actuations of the input switch during that 10 second period. Only after TD2 times out does the normally-closed, timed-closed TD2 contact in series with it allow coil TD2 to be energized again. This type of one-shot is called a nonretriggerable one-shot. One-shot multivibrators of both the retriggerable and nonretriggerable variety find wide application in industry for siren actuation and machine sequencing, where an intermittent input signal produces an output signal of a set time.

Review

• A monostable multivibrator has only one stable output state. The other output state can only be maintained temporarily.
• Monostable multivibrators, sometimes called one-shots, come in two basic varieties: retriggerable and nonretriggerable.
• One-shot circuits with very short time settings may be used to debounce the “dirty” signals created by mechanical switch contacts.
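The two one-shot behaviors can be contrasted with a small sampled-time sketch in Python (ticks stand in for seconds, and the function names are ours, not standard part designations):

```python
def retriggerable_one_shot(signal, hold):
    """Output stays high until `hold` ticks have passed since the input was
    last high; new input pulses restart the timing."""
    out, last_high = [], None
    for t, x in enumerate(signal):
        if x:
            last_high = t
        out.append(1 if last_high is not None and t - last_high <= hold else 0)
    return out

def nonretriggerable_one_shot(signal, width):
    """Output a fixed-width pulse on a rising edge; further input edges are
    ignored until the pulse has timed out."""
    out, prev, end = [], 0, -1
    for t, x in enumerate(signal):
        if x and not prev and t >= end:   # rising edge while not timing out
            end = t + width
        out.append(1 if t < end else 0)
        prev = x
    return out

sig = [1, 0, 0, 1, 0, 0, 0, 0]            # two input pulses, 3 ticks apart
# Retriggerable: the second pulse restarts the timer, output stays high.
assert retriggerable_one_shot(sig, hold=4) == [1, 1, 1, 1, 1, 1, 1, 1]
# Nonretriggerable: the second pulse is ignored, output is a fixed pulse.
assert nonretriggerable_one_shot(sig, width=4) == [1, 1, 1, 1, 0, 0, 0, 0]
```

The retriggerable behavior is what makes the switch-debouncer application work: bounces arriving within the hold time simply keep the output high.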
If we examine a four-bit binary count sequence from 0000 to 1111, a definite pattern will be evident in the “oscillations” of the bits between 0 and 1: Note how the least significant bit (LSB) toggles between 0 and 1 for every step in the count sequence, while each succeeding bit toggles at one-half the frequency of the one before it. The most significant bit (MSB) only toggles once during the entire sixteen-step count sequence: at the transition between 7 (0111) and 8 (1000). If we wanted to design a digital circuit to “count” in four-bit binary, all we would have to do is design a series of frequency divider circuits, each circuit dividing the frequency of a square-wave pulse by a factor of 2: J-K flip-flops are ideally suited for this task, because they have the ability to “toggle” their output state at the command of a clock pulse when both J and K inputs are made “high” (1): If we consider the two signals (A and B) in this circuit to represent two bits of a binary number, signal A being the LSB and signal B being the MSB, we see that the count sequence is backward: from 11 to 10 to 01 to 00 and back again to 11. Although it might not be counting in the direction we might have assumed, at least it counts! The following sections explore different types of counter circuits, all made with J-K flip-flops, and all based on the exploitation of that flip-flop’s toggle mode of operation.

Review

• Binary count sequences follow a pattern of octave frequency division: the frequency of oscillation for each bit, from LSB to MSB, follows a divide-by-two pattern. In other words, the LSB will oscillate at the highest frequency, followed by the next bit at one-half the LSB’s frequency, and the next bit at one-half the frequency of the bit before it, etc.
• Circuits may be built that “count” in a binary sequence, using J-K flip-flops set up in the “toggle” mode.
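The divide-by-two pattern is easy to verify by brute force in Python:

```python
# Count from 0000 to 1111 and tally how many times each bit toggles.
toggles = [0, 0, 0, 0]
prev = [0, 0, 0, 0]
for n in range(1, 16):
    bits = [(n >> i) & 1 for i in range(4)]   # index 0 = LSB
    for i in range(4):
        if bits[i] != prev[i]:
            toggles[i] += 1
    prev = bits
# The LSB toggles on every step; each higher bit toggles half as often,
# and the MSB toggles only once (at the 0111 -> 1000 transition).
assert toggles == [15, 7, 3, 1]
```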
11.02: Asynchronous Counters In the previous section, we saw a circuit using one J-K flip-flop that counted backward in a two-bit binary sequence, from 11 to 10 to 01 to 00. Since it would be desirable to have a circuit that could count forward and not just backward, it would be worthwhile to examine a forward count sequence again and look for more patterns that might indicate how to build such a circuit. Since we know that binary count sequences follow a pattern of octave (factor of 2) frequency division, and that J-K flip-flop multivibrators set up for the “toggle” mode are capable of performing this type of frequency division, we can envision a circuit made up of several J-K flip-flops, cascaded to produce four bits of output. The main problem facing us is to determine how to connect these flip-flops together so that they toggle at the right times to produce the proper binary sequence. Examine the following binary count sequence, paying attention to patterns preceding the “toggling” of a bit between 0 and 1: Note that each bit in this four-bit sequence toggles when the bit before it (the bit having a lesser significance, or place-weight) toggles in a particular direction: from 1 to 0. Small arrows indicate those points in the sequence where a bit toggles, the head of the arrow pointing to the previous bit transitioning from a “high” (1) state to a “low” (0) state: Starting with four J-K flip-flops connected in such a way as to always be in the “toggle” mode, we need to determine how to connect the clock inputs in such a way that each succeeding bit toggles when the bit before it transitions from 1 to 0.
The Q outputs of each flip-flop will serve as the respective binary bits of the final, four-bit count: If we used flip-flops with negative-edge triggering (bubble symbols on the clock inputs), we could simply connect the clock input of each flip-flop to the Q output of the flip-flop before it, so that when the bit before it changes from a 1 to a 0, the “falling edge” of that signal would “clock” the next flip-flop to toggle the next bit: This circuit would yield the following output waveforms, when “clocked” by a repetitive source of pulses from an oscillator: The first flip-flop (the one with the Q0 output) has a positive-edge triggered clock input, so it toggles with each rising edge of the clock signal. Notice how the clock signal in this example has a duty cycle less than 50%. I’ve shown the signal in this manner for the purpose of demonstrating how the clock signal need not be symmetrical to obtain reliable, “clean” output bits in our four-bit binary sequence. In the very first flip-flop circuit shown in this chapter, I used the clock signal itself as one of the output bits. This is a bad practice in counter design, though, because it necessitates the use of a square wave signal with a 50% duty cycle (“high” time = “low” time) in order to obtain a count sequence where each and every step pauses for the same amount of time. Using one J-K flip-flop for each output bit, however, relieves us of the necessity of having a symmetrical clock signal, allowing the use of practically any variety of high/low waveform to increment the count sequence. As indicated by all the other arrows in the pulse diagram, each succeeding output bit is toggled by the action of the preceding bit transitioning from “high” (1) to “low” (0). This is the pattern necessary to generate an “up” count sequence.
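The falling-edge cascade just described can be modeled in a few lines of Python. This is an illustrative sketch (it ignores propagation delay, which the text takes up shortly); the function name is our own:

```python
# A four-bit ripple "up" counter built from toggle-mode J-K flip-flops,
# each stage clocked by the falling edge (1 -> 0) of the previous Q output.

def ripple_up_counter(clock_pulses, n_bits=4):
    q = [0] * n_bits          # q[0] is the LSB (Q0)
    counts = []
    for _ in range(clock_pulses):
        bit = 0
        while bit < n_bits:
            q[bit] ^= 1       # toggle-mode flip-flop
            if q[bit] == 1:   # Q rose, so no falling edge: ripple stops here
                break
            bit += 1          # Q fell 1 -> 0: this edge clocks the next stage
        counts.append(sum(q[b] << b for b in range(n_bits)))
    return counts

print(ripple_up_counter(5))  # [1, 2, 3, 4, 5]
```

Each main clock pulse toggles the first flip-flop; a toggle only propagates onward when a Q output falls from 1 to 0, which is exactly the pattern needed for an “up” count.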
A less obvious solution for generating an “up” sequence using positive-edge triggered flip-flops is to “clock” each flip-flop using the Q’ output of the preceding flip-flop rather than the Q output. Since the Q’ output will always be the exact opposite state of the Q output on a J-K flip-flop (no invalid states with this type of flip-flop), a high-to-low transition on the Q output will be accompanied by a low-to-high transition on the Q’ output. In other words, each time the Q output of a flip-flop transitions from 1 to 0, the Q’ output of the same flip-flop will transition from 0 to 1, providing the positive-going clock pulse we would need to toggle a positive-edge triggered flip-flop at the right moment: One way we could expand the capabilities of either of these two counter circuits is to regard the Q’ outputs as another set of four binary bits. If we examine the pulse diagram for such a circuit, we see that the Q’ outputs generate a down-counting sequence, while the Q outputs generate an up-counting sequence: Unfortunately, all of the counter circuits shown thus far share a common problem: the ripple effect. This effect is seen in certain types of binary adder and data conversion circuits, and is due to cumulative propagation delays between cascaded gates. When the Q output of a flip-flop transitions from 1 to 0, it commands the next flip-flop to toggle. If the next flip-flop toggle is a transition from 1 to 0, it will command the flip-flop after it to toggle as well, and so on. However, since there is always some small amount of propagation delay between the command to toggle (the clock pulse) and the actual toggle response (Q and Q’ outputs changing states), any subsequent flip-flops to be toggled will toggle some time after the first flip-flop has toggled.
Thus, when multiple bits toggle in a binary count sequence, they will not all toggle at exactly the same time: As you can see, the more bits that toggle with a given clock pulse, the more severe the accumulated delay time from LSB to MSB. When a clock pulse occurs at such a transition point (say, on the transition from 0111 to 1000), the output bits will “ripple” in sequence from LSB to MSB, as each succeeding bit toggles and commands the next bit to toggle as well, with a small amount of propagation delay between each bit toggle. If we take a close-up look at this effect during the transition from 0111 to 1000, we can see that there will be false output counts generated in the brief time period that the “ripple” effect takes place: Instead of cleanly transitioning from a “0111” output to a “1000” output, the counter circuit will very quickly ripple from 0111 to 0110 to 0100 to 0000 to 1000, or from 7 to 6 to 4 to 0 and then to 8. This behavior earns the counter circuit the name of ripple counter, or asynchronous counter. In many applications, this effect is tolerable, since the ripple happens very, very quickly (the width of the delays has been exaggerated here as an aid to understanding the effects). If all we wanted to do was drive a set of light-emitting diodes (LEDs) with the counter’s outputs, for example, this brief ripple would be of no consequence at all. However, if we wished to use this counter to drive the “select” inputs of a multiplexer, index a memory pointer in a microprocessor (computer) circuit, or perform some other task where false outputs could cause spurious errors, it would not be acceptable. There is a way to use this type of counter circuit in applications sensitive to false, ripple-generated outputs, and it involves a principle known as strobing. Most decoder and multiplexer circuits are equipped with at least one input called the “enable.” The output(s) of such a circuit will be active only when the enable input is made active. 
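The false counts during the 0111 to 1000 transition can be reproduced with a short model. This sketch (our own, for illustration) records the counter's state after each stage toggles in turn, one propagation delay apart:

```python
# Model the ripple through a four-bit asynchronous counter: starting from
# a given count, toggle each stage in sequence (LSB first) and record the
# momentary false counts seen between stable states.

def ripple_transition(start, n_bits=4):
    q = [(start >> b) & 1 for b in range(n_bits)]
    states = [start]
    bit = 0
    while bit < n_bits:
        q[bit] ^= 1                      # this stage toggles one delay later
        states.append(sum(q[b] << b for b in range(n_bits)))
        if q[bit] == 1:                  # Q rose: no falling edge, ripple ends
            break
        bit += 1
    return states

print(ripple_transition(0b0111))  # [7, 6, 4, 0, 8]
```

The output reproduces the sequence described above: 7 to 6 to 4 to 0 and finally to the correct count of 8, with three false counts in between.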
We can use this enable input to strobe the circuit receiving the ripple counter’s output so that it is disabled (and thus not responding to the counter output) during the brief period of time in which the counter outputs might be rippling, and enabled only when sufficient time has passed since the last clock pulse that all rippling will have ceased. In most cases, the strobing signal can be the same clock pulse that drives the counter circuit: With an active-low Enable input, the receiving circuit will respond to the binary count of the four-bit counter circuit only when the clock signal is “low.” As soon as the clock pulse goes “high,” the receiving circuit stops responding to the counter circuit’s output. Since the counter circuit is positive-edge triggered (as determined by the first flip-flop clock input), all the counting action takes place on the low-to-high transition of the clock signal, meaning that the receiving circuit will become disabled just before any toggling occurs on the counter circuit’s four output bits. The receiving circuit will not become enabled until the clock signal returns to a low state, which should be a long enough time after all rippling has ceased to be “safe” to allow the new count to have effect on the receiving circuit. The crucial parameter here is the clock signal’s “high” time: it must be at least as long as the maximum expected ripple period of the counter circuit. If not, the clock signal will prematurely enable the receiving circuit, while some rippling is still taking place. Another disadvantage of the asynchronous, or ripple, counter circuit is limited speed. While all gate circuits are limited in terms of maximum signal frequency, the design of asynchronous counter circuits compounds this problem by making propagation delays additive. 
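The additive-delay speed limit can be put in rough numbers. In this sketch the worst-case ripple time is approximated as one propagation delay per stage; the 25 ns figure is an assumed example value, not a specification from the text:

```python
# Rough upper bound on clock frequency for an n-stage ripple counter:
# the clock period must comfortably exceed the worst-case ripple time,
# approximated here as n_stages * t_pd.

def max_ripple_frequency(n_stages, t_pd_ns):
    worst_case_delay_ns = n_stages * t_pd_ns
    return 1e9 / worst_case_delay_ns   # hertz

# e.g. four stages at an assumed 25 ns per flip-flop:
print(max_ripple_frequency(4, 25))  # 10000000.0 -> a 10 MHz upper bound
```

In practice the clock would be kept well below this bound, so that all rippling ceases long before the next pulse.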
Thus, even if strobing is used in the receiving circuit, an asynchronous counter circuit cannot be clocked at any frequency higher than that which allows the greatest possible accumulated propagation delay to elapse well before the next pulse. The solution to this problem is a counter circuit that avoids ripple altogether. Such a counter circuit would eliminate the need to design a “strobing” feature into whatever digital circuits use the counter output as an input, and would also enjoy a much greater operating speed than its asynchronous equivalent. This design of counter circuit is the subject of the next section.
Review
• An “up” counter may be made by connecting the clock inputs of positive-edge triggered J-K flip-flops to the Q’ outputs of the preceding flip-flops. Another way is to use negative-edge triggered flip-flops, connecting the clock inputs to the Q outputs of the preceding flip-flops. In either case, the J and K inputs of all flip-flops are connected to Vcc or Vdd so as to always be “high.”
• Counter circuits made from cascaded J-K flip-flops where each clock input receives its pulses from the output of the previous flip-flop invariably exhibit a ripple effect, where false output counts are generated between some steps of the count sequence. These types of counter circuits are called asynchronous counters, or ripple counters.
• Strobing is a technique applied to circuits receiving the output of an asynchronous (ripple) counter, so that the false counts generated during the ripple time will have no ill effect. Essentially, the enable input of such a circuit is connected to the counter’s clock pulse in such a way that it is enabled only when the counter outputs are not changing, and will be disabled during those periods of changing counter outputs where ripple occurs.
What is a Synchronous Counter? A synchronous counter, in contrast to an asynchronous counter, is one whose output bits change state simultaneously, with no ripple. The only way we can build such a counter circuit from J-K flip-flops is to connect all the clock inputs together, so that each and every flip-flop receives the exact same clock pulse at the exact same time: Now, the question is, what do we do with the J and K inputs? We know that we still have to maintain the same divide-by-two frequency pattern in order to count in a binary sequence, and that this pattern is best achieved utilizing the “toggle” mode of the flip-flop, so the fact that the J and K inputs must both be (at times) “high” is clear. However, if we simply connect all the J and K inputs to the positive rail of the power supply as we did in the asynchronous circuit, this would clearly not work because all the flip-flops would toggle at the same time: with each and every clock pulse! Let’s examine the four-bit binary counting sequence again, and see if there are any other patterns that predict the toggling of a bit. Asynchronous counter circuit design is based on the fact that each bit toggle happens at the same time that the preceding bit toggles from a “high” to a “low” (from 1 to 0). Since we cannot clock the toggling of a bit based on the toggling of a previous bit in a synchronous counter circuit (to do so would create a ripple effect) we must find some other pattern in the counting sequence that can be used to trigger a bit toggle: Examining the four-bit binary count sequence, another predictive pattern can be seen. Notice that just before a bit toggles, all preceding bits are “high:” This pattern is also something we can exploit in designing a counter circuit. 
Synchronous “Up” Counter If we enable each J-K flip-flop to toggle based on whether or not all preceding flip-flop outputs (Q) are “high,” we can obtain the same counting sequence as the asynchronous circuit without the ripple effect, since each flip-flop in this circuit will be clocked at exactly the same time: The result is a four-bit synchronous “up” counter. Each of the higher-order flip-flops is made ready to toggle (both J and K inputs “high”) if the Q outputs of all previous flip-flops are “high.” Otherwise, the J and K inputs for that flip-flop will both be “low,” placing it into the “latch” mode where it will maintain its present output state at the next clock pulse. Since the first (LSB) flip-flop needs to toggle at every clock pulse, its J and K inputs are connected to Vcc or Vdd, where they will be “high” all the time. The next flip-flop need only “recognize” that the first flip-flop’s Q output is high to be made ready to toggle, so no AND gate is needed. However, the remaining flip-flops should be made ready to toggle only when all lower-order output bits are “high,” thus the need for AND gates. Synchronous “Down” Counter To make a synchronous “down” counter, we need to build the circuit to recognize the appropriate bit patterns predicting each toggle state while counting down.
Not surprisingly, when we examine the four-bit binary count sequence, we see that all preceding bits are “low” prior to a toggle (following the sequence from bottom to top): Since each J-K flip-flop comes equipped with a Q’ output as well as a Q output, we can use the Q’ outputs to enable the toggle mode on each succeeding flip-flop, being that each Q’ will be “high” every time that the respective Q is “low:” Counter Circuit with Selectable “up” and “down” Count Modes Taking this idea one step further, we can build a counter circuit with selectable “up” and “down” count modes by having dual lines of AND gates detecting the appropriate bit conditions for an “up” and a “down” counting sequence, respectively, then use OR gates to combine the AND gate outputs to the J and K inputs of each succeeding flip-flop: This circuit isn’t as complex as it might first appear. The Up/Down control input line simply enables either the upper string or lower string of AND gates to pass the Q/Q’ outputs to the succeeding stages of flip-flops. If the Up/Down control line is “high,” the top AND gates become enabled, and the circuit functions exactly the same as the first (“up”) synchronous counter circuit shown in this section. If the Up/Down control line is made “low,” the bottom AND gates become enabled, and the circuit functions identically to the second (“down” counter) circuit shown in this section. To illustrate, here is a diagram showing the circuit in the “up” counting mode (all disabled circuitry shown in grey rather than black): Here, shown in the “down” counting mode, with the same grey coloring representing disabled circuitry: Up/down counter circuits are very useful devices.
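The enabling scheme for both count modes can be sketched as a simulation (our own illustration, with made-up function names): in “up” mode a flip-flop toggles when all lower-order Q outputs are high, and in “down” mode when all lower-order Q outputs are low (i.e. all the Q’ outputs are high). All stages update from the same pre-clock state, which is what makes the counter synchronous:

```python
# A four-bit synchronous counter with a selectable up/down count mode.
# All flip-flops see the same clock; the J/K enables are computed from
# the state *before* the clock pulse, so there is no ripple.

def sync_counter(clock_pulses, up=True, start=0, n_bits=4):
    q = [(start >> b) & 1 for b in range(n_bits)]
    counts = []
    for _ in range(clock_pulses):
        if up:   # enable when all lower-order Q are high (all([]) is True: LSB always toggles)
            enables = [all(q[:bit]) for bit in range(n_bits)]
        else:    # enable when all lower-order Q' are high, i.e. all lower Q are low
            enables = [not any(q[:bit]) for bit in range(n_bits)]
        for bit in range(n_bits):
            if enables[bit]:
                q[bit] ^= 1    # toggle-mode J-K flip-flop
        counts.append(sum(q[b] << b for b in range(n_bits)))
    return counts

print(sync_counter(3, up=True))            # [1, 2, 3]
print(sync_counter(3, up=False, start=2))  # [1, 0, 15]
```

Note that the down count rolls under from 0 to 15, just as the up count rolls over from 15 to 0.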
A common application is in machine motion control, where devices called rotary shaft encoders convert mechanical rotation into a series of electrical pulses, these pulses “clocking” a counter circuit to track total motion: As the machine moves, it turns the encoder shaft, making and breaking the light beam between LED and phototransistor, thereby generating clock pulses to increment the counter circuit. Thus, the counter integrates, or accumulates, total motion of the shaft, serving as an electronic indication of how far the machine has moved. If all we care about is tracking total motion, and do not care to account for changes in the direction of motion, this arrangement will suffice. However, if we wish the counter to increment with one direction of motion and decrement with the reverse direction of motion, we must use an up/down counter, and an encoder/decoding circuit having the ability to discriminate between different directions. If we re-design the encoder to have two sets of LED/phototransistor pairs, those pairs aligned such that their square-wave output signals are 90° out of phase with each other, we have what is known as a quadrature output encoder (the word “quadrature” simply refers to a 90° angular separation). A phase detection circuit may be made from a D-type flip-flop, to distinguish a clockwise pulse sequence from a counter-clockwise pulse sequence: When the encoder rotates clockwise, the “D” input signal square-wave will lead the “C” input square-wave, meaning that the “D” input will already be “high” when the “C” transitions from “low” to “high,” thus setting the D-type flip-flop (making the Q output “high”) with every clock pulse. A “high” Q output places the counter into the “Up” count mode, and any clock pulses received by the counter from the encoder will increment it.
Conversely, when the encoder reverses rotation, the “D” input will lag behind the “C” input waveform, meaning that it will be “low” when the “C” waveform transitions from “low” to “high,” forcing the D-type flip-flop into the reset state (making the Q output “low”) with every clock pulse. This “low” signal commands the counter circuit to decrement with every clock pulse from the encoder. This circuit, or something very much like it, is at the heart of every position-measuring circuit based on a pulse encoder sensor. Such applications are very common in robotics, CNC machine tool control, and other applications involving the measurement of reversible, mechanical motion.
11.05: Finite State Machines Up to now, every circuit that was presented was a combinational circuit. That means that its output depends only on its current inputs. Previous inputs for that type of circuit have no effect on the output. However, there are many applications where there is a need for our circuits to have “memory”; to remember previous inputs and calculate their outputs according to them. A circuit whose output depends not only on the present input but also on the history of the input is called a sequential circuit. In this section we will learn how to design and build such sequential circuits. In order to see how this procedure works, we will use an example to study our topic. So let’s suppose we have a digital quiz game that works on a clock and reads an input from a manual button. However, we want the switch to transmit only one HIGH pulse to the circuit. If we hook the button directly to the game circuit, it will transmit HIGH for as many clock cycles as the button is held down; at any common clock frequency, our finger can never be fast enough to press and release within a single cycle. The design procedure has specific steps that must be followed in order to get the work done: Step 1 The first step of the design procedure is to define with simple but clear words what we want our circuit to do: “Our mission is to design a secondary circuit that will transmit a HIGH pulse with duration of only one cycle when the manual button is pressed, and won’t transmit another pulse until the button is released and pressed again.” Step 2 The next step is to design a State Diagram. This is a diagram made from circles and arrows that describes visually the operation of our circuit. In mathematical terms, this diagram that describes the operation of our sequential circuit is a Finite State Machine. Make a note that this is a Moore Finite State Machine. Its output is a function of only its current state, not its input. That is in contrast with the Mealy Finite State Machine, where the input affects the output.
In this tutorial, only the Moore Finite State Machine will be examined. The State Diagram of our circuit is the following: (Figure below) A State Diagram Every circle represents a “state”, a well-defined condition that our machine can be found at. In the upper half of the circle we describe that condition. The description helps us remember what our circuit is supposed to do at that condition.
• The first circle is the “stand-by” condition. This is where our circuit starts from and where it waits for another button press.
• The second circle is the condition where the button has just been pressed and our circuit needs to transmit a HIGH pulse.
• The third circle is the condition where our circuit waits for the button to be released before it returns to the “stand-by” condition.
In the lower part of the circle is the output of our circuit. If we want our circuit to transmit a HIGH on a specific state, we put a 1 on that state. Otherwise we put a 0. Every arrow represents a “transition” from one state to another. A transition happens once every clock cycle. Depending on the current Input, we may go to a different state each time. Notice the number in the middle of every arrow. This is the current Input. For example, when we are in the “Initial-Stand by” state and we “read” a 1, the diagram tells us that we have to go to the “Activate Pulse” state. If we read a 0 we must stay on the “Initial-Stand by” state. So, what does our “Machine” do exactly? It starts from the “Initial - Stand by” state and waits until a 1 is read at the Input. Then it goes to the “Activate Pulse” state and transmits a HIGH pulse on its output. If the button keeps being pressed, the circuit goes to the third state, the “Wait Loop”. There it waits until the button is released (Input goes 0) while transmitting a LOW on the output. Then it starts all over again! This is possibly the most difficult part of the design procedure, because it cannot be described by simple steps.
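The machine's behavior just described can be sketched as a table-driven Moore machine. This is our own illustration of the state diagram (the state names follow the text; the code structure is an assumption): the output is looked up from the state alone, while the input only selects the next state:

```python
# Table-driven Moore machine for the one-clock-pulse button circuit.

STANDBY, PULSE, WAIT = "stand-by", "activate pulse", "wait loop"
NEXT = {
    (STANDBY, 0): STANDBY, (STANDBY, 1): PULSE,
    (PULSE, 0): STANDBY,   (PULSE, 1): WAIT,
    (WAIT, 0): STANDBY,    (WAIT, 1): WAIT,
}
OUTPUT = {STANDBY: 0, PULSE: 1, WAIT: 0}   # HIGH only in "activate pulse"

def run(inputs, state=STANDBY):
    outputs = []
    for i in inputs:               # one transition per clock cycle
        state = NEXT[(state, i)]
        outputs.append(OUTPUT[state])
    return outputs

# Button held down for four clock cycles: exactly one HIGH pulse comes out.
print(run([0, 1, 1, 1, 1, 0]))  # [0, 1, 0, 0, 0, 0]
```

However long the button is held, the output goes high for exactly one cycle and cannot go high again until a 0 (button released) is read.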
It takes experience and a bit of sharp thinking in order to set up a State Diagram, but the rest is just a set of predetermined steps. Step 3 Next, we replace the words that describe the different states of the diagram with binary numbers. We start the enumeration from 0, which is assigned to the initial state. We then continue the enumeration with any state we like, until all states have their number. The result looks something like this: (Figure below) A State Diagram with Coded States Step 4 Afterwards, we fill in the State Table. This table has a very specific form. I will give the table of our example and use it to explain how to fill it in. (Figure below) A State Table The first columns are as many as the bits of the highest number we assigned the State Diagram. If we had 5 states, we would have used up to the number 100, which means we would use 3 columns. For our example, we used up to the number 10, so only 2 columns will be needed. These columns describe the Current State of our circuit. To the right of the Current State columns we write the Input Columns. These will be as many as our Input variables. Our example has only one Input. Next, we write the Next State Columns. These are as many as the Current State columns. Finally, we write the Outputs Columns. These are as many as our outputs. Our example has only one output. Since we have built a Moore Finite State Machine, the output depends only on the current state, not on the input. This is the reason the outputs column has two 1s: so that the resulting output Boolean function is independent of input I. Keep on reading for further details. The Current State and Input columns are the Inputs of our table. We fill them in with all the binary numbers from 0 up to the highest number the Current State and Input bits can form together (000 to 111 in our example). It is simpler than it sounds, fortunately. Usually there will be more rows than the actual States we have created in the State Diagram, but that’s ok.
Each row of the Next State columns is filled as follows: We fill it in with the state that we reach when, in the State Diagram, from the Current State of the same row we follow the Input of the same row. If we have to fill in a row whose Current State number doesn’t correspond to any actual State in the State Diagram, we fill it with Don’t Care terms (X). After all, we don’t care where we can go from a State that doesn’t exist. We wouldn’t be there in the first place! Again it is simpler than it sounds. The outputs column is filled by the output of the corresponding Current State in the State Diagram. The State Table is complete! It describes the behaviour of our circuit as fully as the State Diagram does. Step 5a The next step is to take that theoretical “Machine” and implement it in a circuit. More often than not, this implementation involves Flip Flops. This guide is dedicated to this kind of implementation and will describe the procedure for both D - Flip Flops as well as JK - Flip Flops. T - Flip Flops will not be included as they are too similar to the two previous cases. The selection of the Flip Flop to use is arbitrary and usually is determined by cost factors. The best choice is to perform both analyses and decide which type of Flip Flop results in the minimum number of logic gates and the lower cost. First we will examine how we implement our “Machine” with D - Flip Flops. We will need as many D - Flip Flops as the State columns, 2 in our example. For every Flip Flop we will add one more column in our State table (Figure below) with the name of the Flip Flop’s input, “D” for this case. The column that corresponds to each Flip Flop describes what input we must give the Flip Flop in order to go from the Current State to the Next State. For the D - Flip Flop this is easy: The necessary input is equal to the Next State. In the rows that contain X’s we fill X’s in this column as well.
A State Table with D - Flip Flop Excitations Step 5b We can do the same steps with JK - Flip Flops. There are some differences however. A JK - Flip Flop has two inputs, therefore we need to add two columns for each Flip Flop. The content of each cell is dictated by the JK’s excitation table: (Figure below) JK - Flip Flop Excitation Table : This table says that if we want to go from State Q to State Qnext, we need to use the specific input for each terminal. For example, to go from 0 to 1, we need to feed J with 1 and we don’t care which input we feed to terminal K. A State Table with JK - Flip Flop Excitations Step 6 We are in the final stage of our procedure. What remains is to determine the Boolean functions that produce the inputs of our Flip Flops and the Output. We will extract one Boolean function for each Flip Flop input we have. This can be done with a Karnaugh Map. The input variables of this map are the Current State variables as well as the Inputs. That said, the input functions for our D - Flip Flops are the following: (Figure below) Karnaugh Maps for the D - Flip Flop Inputs If we chose to use JK - Flip Flops our functions would be the following: (Figure below) Karnaugh Map for the JK - Flip Flop Input A Karnaugh Map will be used to determine the function of the Output as well: (Figure below) Karnaugh Map for the Output variable Y Step 7 We design our circuit. We place the Flip Flops and use logic gates to form the Boolean functions that we calculated. The gates take input from the output of the Flip Flops and the Input of the circuit. Don’t forget to connect the clock to the Flip Flops! The D - Flip Flop version: (Figure below) The completed D - Flip Flop Sequential Circuit The JK - Flip Flop version: (Figure below) The completed JK - Flip Flop Sequential Circuit This is it! We have successfully designed and constructed a Sequential Circuit. At first it might seem a daunting task, but after practice and repetition the procedure will become trivial.
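Because the text's Karnaugh-map figures are not reproduced here, the D-input equations below are derived by us from the state table, assuming the state assignment stand-by = 00, activate pulse = 01, wait loop = 10 (Q1 being the high-order bit). For D flip-flops the excitation is simply D = next-state bit, so candidate minimized functions can be checked exhaustively against the table:

```python
# State table for the button-pulse machine: (Q1, Q0, I) -> (Q1next, Q0next).
# The unused state 11 is a don't-care and is simply omitted.
TABLE = {
    (0, 0, 0): (0, 0), (0, 0, 1): (0, 1),
    (0, 1, 0): (0, 0), (0, 1, 1): (1, 0),
    (1, 0, 0): (0, 0), (1, 0, 1): (1, 0),
}

# Candidate minimized D-input functions (derived here, not quoted from the text):
def d1(q1, q0, i): return i & (q1 | q0)            # D1 = I(Q1 + Q0)
def d0(q1, q0, i): return i & (1 - q1) & (1 - q0)  # D0 = I * Q1' * Q0'

# Verify term by term against every defined row of the state table.
assert all(d1(*k) == v[0] and d0(*k) == v[1] for k, v in TABLE.items())
print("D1 = I(Q1 + Q0) and D0 = I*Q1'*Q0' match the state table")
```

The Moore output under this assignment reduces to Y = Q0, since only state 01 outputs a 1 and the unused state 11 is a don't-care.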
Sequential Circuits can come in handy as control parts of bigger circuits and can perform any sequential logic task that we can think of. The sky is the limit! (or the circuit board, at least)
Review
• A Sequential Logic function has a “memory” feature and takes into account past inputs in order to decide on the output.
• The Finite State Machine is an abstract mathematical model of a sequential logic function. It has finite inputs, outputs, and number of states.
• FSMs are implemented in real-life circuits through the use of Flip Flops.
• The implementation procedure needs a specific order of steps (algorithm), in order to be carried out.
12.01: Introduction to Shift Registers Shift registers, like counters, are a form of sequential logic. Sequential logic, unlike combinational logic, is not only affected by the present inputs, but also by the prior history. In other words, sequential logic remembers past events. Shift registers produce a discrete delay of a digital signal or waveform. A waveform synchronized to a clock, a repeating square wave, is delayed by “n” discrete clock times, where “n” is the number of shift register stages. Thus, a four-stage shift register delays “data in” by four clocks to “data out”. The stages in a shift register are delay stages, typically type “D” Flip-Flops or type “JK” Flip-flops. Formerly, very long (several hundred stages) shift registers served as digital memory. This obsolete application is reminiscent of the acoustic mercury delay lines used as early computer memory. Serial data transmission, over a distance of meters to kilometers, uses shift registers to convert parallel data to serial form. Serial data communications replaces many slow parallel data wires with a single serial high speed circuit. Serial data over shorter distances of tens of centimeters uses shift registers to get data into and out of microprocessors. Numerous peripherals, including analog to digital converters, digital to analog converters, display drivers, and memory, use shift registers to reduce the amount of wiring in circuit boards. Some specialized counter circuits actually use shift registers to generate repeating waveforms. Longer shift registers, with the help of feedback, generate patterns so long that they look like random noise, pseudo-noise. Basic shift registers are classified by structure according to the following types:
• Serial-in/serial-out
• Parallel-in/serial-out
• Serial-in/parallel-out
• Universal parallel-in/parallel-out
• Ring counter
Above we show a block diagram of a serial-in/serial-out shift register, which is 4 stages long.
Data at the input will be delayed by four clock periods from the input to the output of the shift register. Data at “data in”, above, will be present at the Stage A output after the first clock pulse. After the second pulse, stage A data is transferred to the stage B output, and “data in” is transferred to the stage A output. After the third clock, stage C is replaced by stage B; stage B is replaced by stage A; and stage A is replaced by “data in”. After the fourth clock, the data originally present at “data in” is at stage D, “output”. The “first in” data is “first out” as it is shifted from “data in” to “data out”. Data is loaded into all stages at once of a parallel-in/serial-out shift register. The data is then shifted out via “data out” by clock pulses. Since a 4-stage shift register is shown above, four clock pulses are required to shift out all of the data. In the diagram above, stage D data will be present at the “data out” up until the first clock pulse; stage C data will be present at “data out” between the first clock and the second clock pulse; stage B data will be present between the second clock and the third clock; and stage A data will be present between the third and the fourth clock. After the fourth clock pulse and thereafter, successive bits of “data in” should appear at “data out” of the shift register after a delay of four clock pulses. If four switches were connected to DA through DD, the status could be read into a microprocessor using only one data pin and a clock pin. Since adding more switches would require no additional pins, this approach looks attractive for many inputs. Above, four data bits will be shifted in from “data in” by four clock pulses and be available at QA through QD for driving external circuitry such as LEDs, lamps, relay drivers, and horns. After the first clock, the data at “data in” appears at QA. After the second clock, the old QA data appears at QB; QA receives the next data from “data in”. After the third clock, QB data is at QC.
After the fourth clock, QC data is at QD. This stage contains the data first present at “data in”. The shift register should now contain four data bits.

A parallel-in/parallel-out shift register combines the function of the parallel-in, serial-out shift register with the function of the serial-in, parallel-out shift register to yield the universal shift register. The “do anything” shifter comes at a price: the increased number of I/O (Input/Output) pins may reduce the number of stages which can be packaged. Data presented at DA through DD is parallel loaded into the registers. This data at QA through QD may be shifted by the number of pulses presented at the clock input. The shifted data is available at QA through QD. The “mode” input, which may be more than one input, controls parallel loading of data from DA through DD, shifting of data, and the direction of shifting. There are shift registers which will shift data either left or right.

If the serial output of a shift register is connected to the serial input, data can be perpetually shifted around the ring as long as clock pulses are present. If the output is inverted before being fed back as shown above, we do not have to worry about loading the initial data into the “ring counter”.
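The recirculating behavior just described can be sketched in a few lines of Python. This is a minimal model of my own, not from the text: a single 1 loaded into a 4-bit ring counter circulates indefinitely, repeating every four clock pulses.

```python
# Minimal model of a 4-bit ring counter: the last stage output feeds the
# serial input, so whatever pattern is loaded circulates forever.

def clock_ring(stages):
    """One clock pulse: shift right, last stage recirculates to stage A."""
    return [stages[-1]] + stages[:-1]

ring = [1, 0, 0, 0]           # initial data must be loaded somehow first
history = []
for _ in range(8):
    ring = clock_ring(ring)
    history.append(ring[:])

print(history[:4])            # the pattern repeats every 4 clock pulses
```

With the inverted feedback mentioned in the last sentence above, an all-zeros register immediately begins clocking 1s in on its own, which is why no initial data load is needed in that case.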
Serial-in, serial-out shift registers delay data by one clock time for each stage. They will store a bit of data for each stage. A serial-in, serial-out shift register may be one to 64 bits in length, longer if registers or packages are cascaded. Below is a single stage shift register receiving data which is not synchronized to the register clock. The “data in” at the D pin of the type D FF (Flip-Flop) does not change levels when the clock changes from low to high. We may want to synchronize the data to a system-wide clock in a circuit board to improve the reliability of a digital logic circuit. The obvious point (as compared to the figure below) illustrated above is that whatever “data in” is present at the D pin of a type D FF is transferred from D to output Q at clock time. Since our example shift register uses positive edge sensitive storage elements, the output Q follows the D input when the clock transitions from low to high, as shown by the up arrows on the diagram above. There is no doubt what logic level is present at clock time because the data is stable well before and after the clock edge. This is seldom the case in multi-stage shift registers, but this was an easy example to start with. We are only concerned with the positive, low to high, clock edge. The falling edge can be ignored. It is very easy to see Q follow D at clock time above. Compare this to the diagram below where the “data in” appears to change with the positive clock edge. Since “data in” appears to change at clock time t1 above, what does the type D FF see at clock time? The short, oversimplified answer is that it sees the data that was present at D prior to the clock. That is what is transferred to Q at clock time t1. The correct waveform is QC. At t1, Q goes to a zero if it is not already zero. The D register does not see a one until time t2, at which time Q goes high.
Since data, above, present at D is clocked to Q at clock time, and Q cannot change until the next clock time, the D FF delays data by one clock period, provided that the data is already synchronized to the clock. The QA waveform is the same as “data in” with a one clock period delay. A more detailed look at what the input of the type D Flip-Flop sees at clock time follows. Refer to the figure below. Since “data in” appears to change at clock time (above), we need further information to determine what the D FF sees. If the “data in” is from another shift register stage, another same type D FF, we can draw some conclusions based on data sheet information. Manufacturers of digital logic make available information about their parts in data sheets, formerly only available in a collection called a data book. Data books are still available; though, the manufacturer’s web site is the modern source. The following data was extracted from the CD4006b data sheet for operation at 5VDC, which serves as an example to illustrate timing.

• tS=100ns
• tH=60ns
• tP=200-400ns typ/max

tS is the setup time, the time data must be present before clock time. In this case data must be present at D 100ns prior to the clock. Furthermore, the data must be held for hold time tH=60ns after clock time. These two conditions must be met to reliably clock data from D to Q of the Flip-Flop. There is no problem meeting the setup time of 100ns, as the data at D has been there for the whole previous clock period if it comes from another shift register stage. For example, at a clock frequency of 1 MHz, the clock period is 1000 ns, plenty of time. Data will actually be present for 1000 ns prior to the clock, which is much greater than the minimum required tS of 100ns. The hold time tH=60ns is met because D connected to Q of another stage cannot change any faster than the propagation delay of the previous stage, tP=200ns.
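These margins are easy to check numerically. Here is a worked sketch using the data-sheet figures quoted above (the 1 MHz clock is the example frequency from the text, not a device limit):

```python
# Worked check of the CD4006b timing margins quoted above (a sketch using
# the data-sheet numbers; real margins should be taken from the data sheet).

t_setup = 100e-9      # tS: data must be stable this long before the clock
t_hold = 60e-9        # tH: ...and stay stable this long after the clock
t_prop = 200e-9       # tP (typ): propagation delay of the previous stage

f_clock = 1e6                   # a 1 MHz shift clock
t_period = 1 / f_clock          # 1 us, i.e. 1000 ns per clock

# Setup: data from the previous stage has been stable for ~one whole period.
setup_margin = t_period - t_prop - t_setup
assert setup_margin > 0         # about 700 ns to spare

# Hold: the previous stage's Q cannot change faster than its own tP.
assert t_prop > t_hold          # 200 ns > 60 ns, hold met
```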
Hold time is met as long as the propagation delay of the previous D FF is greater than the hold time. Data at D driven by another stage Q will not change any faster than 200ns for the CD4006b. To summarize, output Q follows input D at nearly clock time if Flip-Flops are cascaded into a multi-stage shift register. Three type D Flip-Flops are cascaded Q to D with the clocks paralleled to form a three stage shift register above. Type JK FFs cascaded Q to J, Q’ to K with clocks in parallel yield an alternate form of the shift register above. A serial-in/serial-out shift register has a clock input, a data input, and a data output from the last stage. In general, the other stage outputs are not available. Otherwise, it would be a serial-in, parallel-out shift register. The waveforms below are applicable to either one of the preceding two versions of the serial-in, serial-out shift register. The three pairs of arrows show that a three stage shift register temporarily stores 3-bits of data and delays it by three clock periods from input to output. At clock time t1 a “data in” of 0 is clocked from D to Q of all three stages. In particular, D of stage A sees a logic 0, which is clocked to QA where it remains until time t2. At clock time t2 a “data in” of 1 is clocked from D to QA. At stages B and C, a 0, fed from the preceding stages, is clocked to QB and QC. At clock time t3 a “data in” of 0 is clocked from D to QA. QA goes low and stays low for the remaining clocks due to “data in” being 0. QB goes high at t3 due to a 1 from the previous stage. QC is still low after t3 due to a low from the previous stage. QC finally goes high at clock t4 due to the high fed to D from the previous stage QB. All earlier stages have 0s shifted into them. And, after the next clock pulse at t5, all logic 1s will have been shifted out, replaced by 0s.

Serial-in/serial-out devices

We will take a closer look at the following parts available as integrated circuits, courtesy of Texas Instruments.
For complete device data sheets follow the links.

• CD4006b 18-bit serial-in/ serial-out shift register [*]
• CD4031b 64-bit serial-in/ serial-out shift register [*]
• CD4517b dual 64-bit serial-in/ serial-out shift register [*]

The following serial-in/ serial-out shift registers are 4000 series CMOS (Complementary Metal Oxide Semiconductor) family parts. As such, they will accept a VDD, positive power supply, of 3-Volts to 15-Volts. The VSS pin is grounded. The maximum frequency of the shift clock, which varies with VDD, is a few megahertz. See the full data sheet for details. The 18-bit CD4006b consists of two stages of 4-bits and two more stages of 5-bits with an output tap at 4-bits. Thus, the 5-bit stages could be used as 4-bit shift registers. To get a full 18-bit shift register the output of one shift register must be cascaded to the input of another and so on until all stages create a single shift register as shown below. A CD4031 64-bit serial-in/ serial-out shift register is shown below. A number of pins are not connected (nc). Both Q and Q’ are available from the 64th stage, actually Q64 and Q’64. There is also a “Q64 delayed” output from a half stage, delayed by half a clock cycle. A major feature is a data selector at the data input to the shift register. The “mode control” selects between two inputs: data 1 and data 2. If “mode control” is high, data will be selected from “data 2” for input to the shift register. In the case of “mode control” being logic low, “data 1” is selected. Examples of this are shown in the two figures below. The “data 2” above is wired to the Q64 output of the shift register. With “mode control” high, the Q64 output is routed back to the shifter data input D. Data will recirculate from output to input. The data will repeat every 64 clock pulses as shown above. The question that arises is how did this data pattern get into the shift register in the first place?
With “mode control” low, the CD4031 “data 1” is selected for input to the shifter. The output, Q64, is not recirculated because the lower data selector gate is disabled. By disabled we mean that the logic low “mode select”, inverted twice to a low at the lower NAND gate, prevents it from passing any signal on the lower pin (data 2) to the gate output. Thus, it is disabled. A CD4517b dual 64-bit shift register is shown above. Note the taps at the 16th, 32nd, and 48th stages. That means that shift registers of those lengths can be configured from one of the 64-bit shifters. Of course, the 64-bit shifters may be cascaded to yield an 80-bit, 96-bit, 112-bit, or 128-bit shift register. The clocks CLA and CLB need to be paralleled when cascading the two shifters. WEA and WEB are grounded for normal shifting operations. The data inputs to the shift registers A and B are DA and DB respectively. Suppose that we require a 16-bit shift register. Can this be configured with the CD4517b? How about a 64-bit shift register from the same part? Above we show a CD4517b wired as a 16-bit shift register for section B. The clock for section B is CLB. The data is clocked in at DB. And the data, delayed by 16 clocks, is picked off at Q16B. WEB, the write enable, is grounded. Above we also show the same CD4517b wired as a 64-bit shift register for the independent section A. The clock for section A is CLA. The data enters at DA. The data delayed by 64 clock pulses is picked up from Q64A. WEA, the write enable for section A, is grounded.
Parallel-in/ serial-out shift registers do everything that the previous serial-in/ serial-out shift registers do plus input data to all stages simultaneously. The parallel-in/ serial-out shift register stores data, shifts it on a clock by clock basis, and delays it by the number of stages times the clock period. In addition, parallel-in/ serial-out really means that we can load data in parallel into all stages before any shifting ever begins. This is a way to convert data from a parallel format to a serial format. By parallel format we mean that the data bits are present simultaneously on individual wires, one for each data bit, as shown below. By serial format we mean that the data bits are presented sequentially in time on a single wire or circuit, as in the case of the “data out” on the block diagram below. Below we take a close look at the internal details of a 3-stage parallel-in/ serial-out shift register. A stage consists of a type D Flip-Flop for storage, and an AND-OR selector to determine whether data will load in parallel, or shift stored data to the right. In general, these elements will be replicated for the number of stages required. We show three stages due to space limitations. Four, eight or sixteen bits is normal for real parts. Above we show the parallel load path when SHIFT/LD’ is logic low. The upper AND gates serving DA DB DC are enabled, passing data to the D inputs of type D Flip-Flops QA QB QC respectively. At the next positive going clock edge, the data will be clocked from D to Q of the three FFs. Three bits of data will load into QA QB QC at the same time. The type of parallel load just described, where the data loads on a clock pulse, is known as synchronous load because the loading of data is synchronized to the clock. This needs to be differentiated from asynchronous load where loading is controlled by the preset and clear pins of the Flip-Flops, which does not require the clock.
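The AND-OR selector stage described above can be modeled in a few lines of software. This is my own minimal sketch of a 3-stage parallel-in/serial-out register, where SHIFT/LD’ low selects the parallel inputs and SHIFT/LD’ high selects the shift path:

```python
# Minimal model of a 3-stage PISO register: the AND-OR selector at each
# D input picks either the parallel data (load) or the previous stage (shift).

def clock_piso(q, shift_ld_n, parallel, serial_in=0):
    """One positive clock edge.  shift_ld_n=0 loads, shift_ld_n=1 shifts."""
    if shift_ld_n == 0:
        return list(parallel)            # synchronous parallel load
    return [serial_in] + q[:-1]          # SI -> QA, QA -> QB, QB -> QC

q = [0, 0, 0]
q = clock_piso(q, 0, [1, 0, 1])          # load DA DB DC = 1, 0, 1
so = []
for _ in range(3):                       # three shift clocks
    so.append(q[-1])                     # SO is the last stage, QC
    q = clock_piso(q, 1, [1, 0, 1])

print(so)                                # the parallel word emerges serially
```

The parallel word leaves SO last-stage (QC) first, which is the parallel-to-serial conversion this section is about.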
Only one of these load methods is used within an individual device, the synchronous load being more common in newer devices. The shift path is shown above when SHIFT/LD’ is logic high. The lower AND gates of the pairs feeding the OR gate are enabled, giving us a shift register connection of SI to DA, QA to DB, QB to DC, QC to SO. Clock pulses will cause data to be right shifted out to SO on successive pulses. The waveforms below show both parallel loading of three bits of data and serial shifting of this data. Parallel data at DA DB DC is converted to serial data at SO. What we previously described with words for parallel loading and shifting is now set down as waveforms above. As an example we present 101 to the parallel inputs DA DB DC. Next, the SHIFT/LD’ goes low, enabling loading of data as opposed to shifting of data. It needs to be low a short time before and after the clock pulse due to setup and hold requirements. It is considerably wider than it has to be, though with synchronous logic it is convenient to make it wide. We could have made the active low SHIFT/LD’ almost two clocks wide, low almost a clock before t1 and back high just before t3. The important factor is that it needs to be low around clock time t1 to enable parallel loading of the data by the clock. Note that at t1 the data 101 at DA DB DC is clocked from D to Q of the Flip-Flops, as shown at QA QB QC at time t1. This is the parallel loading of the data synchronous with the clock. Now that the data is loaded, we may shift it provided that SHIFT/LD’ is high to enable shifting, which it is prior to t2. At t2 the data 1 at QC is shifted out of SO, which is the same as the QC waveform. It is either shifted into another integrated circuit, or lost if there is nothing connected to SO. The data at QB, a 0, is shifted to QC. The 1 at QA is shifted into QB. With “data in” a 0, QA becomes 0. After t2, QA QB QC = 010. After t3, QA QB QC = 001.
This 1, which was originally present at QA after t1, is now present at SO and QC. The last data bit is shifted out to an external integrated circuit if it exists. After t4 all data from the parallel load is gone. At clock t5 we show the shifting in of a data 1 present on the SI, serial input. Why provide SI and SO pins on a shift register? These connections allow us to cascade shift register stages to provide larger shifters than available in a single IC (Integrated Circuit) package. They also allow serial connections to and from other ICs like microprocessors. Let’s take a closer look at parallel-in/ serial-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets follow the links.

Parallel-in/serial-out devices

• SN74ALS166 parallel-in/ serial-out 8-bit shift register, synchronous load
• SN74ALS165 parallel-in/ serial-out 8-bit shift register, asynchronous load
• CD4014B parallel-in/ serial-out 8-bit shift register, synchronous load
• SN74LS674 parallel-in/ serial-out 16-bit shift register, synchronous load

The SN74ALS166 shown above is the closest match of an actual part to the previous parallel-in/ serial-out shifter figures. Let us note the minor changes to our figure above. First of all, there are 8-stages. We only show three. All 8-stages are shown on the data sheet available at the link above. The manufacturer labels the data inputs A, B, C, and so on to H. The SHIFT/LOAD control is called SH/LD’. It is abbreviated from our previous terminology, but works the same: parallel load if low, shift if high. The shift input (serial data in) is SER on the ALS166 instead of SI. The clock CLK is controlled by an inhibit signal, CLKINH. If CLKINH is high, the clock is inhibited, or disabled. Otherwise, this “real part” is the same as what we have looked at in detail.
Above is the ANSI (American National Standards Institute) symbol for the SN74ALS166 as provided on the data sheet. Once we know how the part operates, it is convenient to hide the details within a symbol. There are many general forms of symbols. The advantage of the ANSI symbol is that the labels provide hints about how the part operates. The large notched block at the top of the ‘74ALS166 is the control section of the ANSI symbol. There is a reset indicated by R. There are three control signals: M1 (Shift), M2 (Load), and C3/1 (arrow) (inhibited clock). The clock has two functions. First, C3 clocks in parallel data wherever a prefix of 3 appears. Second, whenever M1 is asserted, as indicated by the 1 of C3/1 (arrow), the data is shifted as indicated by the right pointing arrow. The slash (/) is a separator between these two functions. The 8-shift stages, as indicated by title SRG8, are identified by the external inputs A, B, C, to H. The internal 2, 3D indicates that data, D, is controlled by M2 [Load] and C3 clock. In this case, we can conclude that the parallel data is loaded synchronously with the clock C3. The upper stage at A is a wider block than the others to accommodate the input SER. The legend 1, 3D implies that SER is controlled by M1 [Shift] and C3 clock. Thus, we expect to clock in data at SER when shifting as opposed to parallel loading. The ANSI/IEEE basic gate rectangular symbols are provided above for comparison to the more familiar shape symbols so that we may decipher the meaning of the symbology associated with the CLKINH and CLK pins on the previous ANSI SN74ALS166 symbol. The CLK and CLKINH feed an OR gate on the SN74ALS166 ANSI symbol. OR is indicated by => on the rectangular inset symbol. The long triangle at the output indicates a clock. If there were a bubble with the arrow, this would have indicated shift on the negative clock edge (high to low).
Since there is no bubble with the clock arrow, the register shifts on the positive (low to high transition) clock edge. The long arrow after the legend C3/1, pointing right, indicates shift right, which is down the symbol. Part of the internal logic of the SN74ALS165 parallel-in/ serial-out, asynchronous load shift register is reproduced from the data sheet above. See the link at the beginning of this section for the full diagram. We have not looked at asynchronous loading of data up to this point. First of all, the loading is accomplished by application of appropriate signals to the Set (preset) and Reset (clear) inputs of the Flip-Flops. The upper NAND gates feed the Set pins of the FFs and also cascade into the lower NAND gates feeding the Reset pins of the FFs. The lower NAND gate inverts the signal in going from the Set pin to the Reset pin. First, SH/LD’ must be pulled low to enable the upper and lower NAND gates. If SH/LD’ were at a logic high instead, the inverter feeding a logic low to all NAND gates would force a high out, releasing the “active low” Set and Reset pins of all FFs. There would be no possibility of loading the FFs. With SH/LD’ held low, we can feed, for example, a data 1 to parallel input A, which inverts to a zero at the upper NAND gate output, setting FF QA to a 1. The 0 at the Set pin is fed to the lower NAND gate where it is inverted to a 1, releasing the Reset pin of QA. Thus, a data A=1 sets QA=1. Since none of this required the clock, the loading is asynchronous with respect to the clock. We use an asynchronous loading shift register if we cannot wait for a clock to parallel load data, or if it is inconvenient to generate a single clock pulse. The only difference in feeding a data 0 to parallel input A is that it inverts to a 1 out of the upper gate, releasing Set. This 1 at Set is inverted to a 0 at the lower gate, pulling Reset to a low, which resets QA=0.
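The gating just traced can be written down as a small truth-table check. This is my own sketch of the two NAND gates serving one Flip-Flop, not a representation of the full data-sheet schematic:

```python
# Truth-table sketch of the asynchronous load gating described above.
# nand() models one 2-input NAND; load_pins() models the pair of gates
# driving one FF's active-low Set (preset) and Reset (clear) pins.

def nand(a, b):
    return 0 if (a and b) else 1

def load_pins(sh_ld_n, data):
    enable = 0 if sh_ld_n else 1          # inverter feeding both gates
    set_n = nand(enable, data)            # upper gate -> Set pin
    reset_n = nand(enable, set_n)         # lower gate -> Reset pin
    return set_n, reset_n

assert load_pins(0, 1) == (0, 1)   # SH/LD'=0, data 1: Set pulled low, Q=1
assert load_pins(0, 0) == (1, 0)   # SH/LD'=0, data 0: Reset pulled low, Q=0
assert load_pins(1, 1) == (1, 1)   # SH/LD'=1: both released, no loading
```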
The ANSI symbol for the SN74ALS165 above has two internal controls: C1 [LOAD] and C2, the clock from the OR function of (CLKINH, CLK). SRG8 says 8-stage shifter. The arrow after C2 indicates shifting right or down. The SER input is a function of the clock as indicated by internal label 2D. The parallel data inputs A, B, C to H are a function of C1 [LOAD], indicated by internal label 1D. C1 is asserted when SH/LD’=0 due to the half-arrow inverter at the input. Compare this to the control of the parallel data inputs by the clock of the previous synchronous ANSI SN74ALS166. Note the differences in the ANSI data labels. On the CD4014B above, M1 is asserted when LD/SH’=0. M2 is asserted when LD/SH’=1. Clock C3/1 is used for parallel loading data at 2, 3D when M2 is active as indicated by the 2,3 prefix labels. Pins P3 to P7 are understood to have the same internal 2,3 prefix labels as P2 and P8. At SER, the 1,3D prefix implies that M1 and clock C3 are necessary to input serial data. Right shifting takes place when M1 is active, as indicated by the 1 in the C3/1 arrow. The CD4021B is a similar part except for asynchronous parallel loading of data, as implied by the lack of any 2 prefix in the data label 1D for pins P1, P2, to P8. Of course, prefix 2 in label 2D at input SER says that data is clocked into this pin. The OR gate inset shows that the clock is controlled by LD/SH’. The above SN74LS674 internal label SRG 16 indicates a 16-bit shift register. The MODE input to the control section at the top of the symbol is labeled 1,2 M3. Internal M3 is a function of input MODE and G1 and G2, as indicated by the 1,2 preceding M3. The base label G indicates an AND function of any such G inputs. Input R/W’ is internally labeled G1/2 EN. This is an enable EN (controlled by G1 AND G2) for tristate devices used elsewhere in the symbol. We note that CS’ (pin 1) is internal G2. Chip select CS’ is also ANDed with the input CLK to give internal clock C4.
The bubble within the clock arrow indicates that activity is on the negative (high to low transition) clock edge. The slash (/) is a separator implying two functions for the clock. Before the slash, C4 indicates control of anything with a prefix of 4. After the slash, the 3’ (arrow) indicates shifting. The 3’ of C4/3’ implies shifting when M3 is de-asserted (MODE=0). The long arrow indicates shift right (down). Moving down below the control section to the data section, we have external inputs P0-P15, pins (7-11, 13-23). The prefix 3,4 of internal label 3,4D indicates that M3 and the clock C4 control loading of parallel data. The D stands for Data. This label is assumed to apply to all the parallel inputs, though it is not explicitly written out. Locate the label 3’,4D on the right of the P0 (pin 7) stage. The complemented 3 indicates that M3=MODE=0 inputs (shifts) SER/Q15 (pin 5) at clock time (the 4 of 3’,4D), corresponding to clock C4. In other words, with MODE=0, we shift data into Q0 from the serial input (pin 6). All other stages shift right (down) at clock time. Moving to the bottom of the symbol, the triangle pointing right indicates a buffer between Q and the output pin. The triangle pointing down indicates a tri-state device. We previously stated that the tristate is controlled by enable EN, which is actually G1 AND G2 from the control section. If R/W’=0, the tri-state is disabled, and we can shift data into Q0 via SER (pin 6), a detail we omitted above. We actually need MODE=0, R/W’=0, CS’=0. The internal logic of the SN74LS674 and a table summarizing the operation of the control signals are available via the link in the bullet list at the top of this section. If R/W’=1, the tristate is enabled, Q15 shifts out SER/Q15 (pin 6) and recirculates to the Q0 stage via the right hand wire to 3’,4D. We have assumed that CS’ was low, giving us clock C4/3’ and G2 to ENable the tri-state.
Practical Applications

An application of a parallel-in/ serial-out shift register is to read data into a microprocessor. The alarm above is controlled by a remote keypad. The alarm box supplies +5V and ground to the remote keypad to power it. The alarm reads the remote keypad every few tens of milliseconds by sending shift clocks to the keypad, which returns serial data showing the status of the keys via a parallel-in/ serial-out shift register. Thus, we read nine key switches with four wires. How many wires would be required if we had to run a circuit for each of the nine keys? A practical application of a parallel-in/ serial-out shift register is to read many switch closures into a microprocessor on just a few pins. Some low end microprocessors only have 6 I/O (Input/Output) pins available on an 8-pin package. Or, we may have used most of the pins on an 84-pin package. We may want to reduce the number of wires running around a circuit board, machine, vehicle, or building. This will increase the reliability of our system. It has been reported that manufacturers who have reduced the number of wires in an automobile produce a more reliable product. In any event, only three microprocessor pins are required to read in 8-bits of data from the switches in the figure above. We have chosen an asynchronous loading device, the CD4021B, because it is easier to control the loading of data without having to generate a single parallel load clock. The parallel data inputs of the shift register are pulled up to +5V with a resistor on each input. If all switches are open, all 1s will be loaded into the shift register when the microprocessor moves the LD/SH’ line from low to high, then back low in anticipation of shifting. Any switch closures will apply logic 0s to the corresponding parallel inputs. The data pattern at P1-P8 will be parallel loaded by the LD/SH’=1 generated by the microprocessor software.
The microprocessor generates shift pulses and reads a data bit for each of the 8-bits. This process may be performed totally with software, or larger microprocessors may have one or more serial interfaces to do the task more quickly with hardware. With LD/SH’=0, the microprocessor generates a 0 to 1 transition on the shift clock line, then reads a data bit on the serial data in line. This is repeated for all 8-bits. The SER line of the shift register may be driven by another identical CD4021B circuit if more switch contacts need to be read, in which case the microprocessor generates 16 shift pulses. More likely, it will be driven by something else compatible with this serial data format, for example, an analog to digital converter, a temperature sensor, a keyboard scanner, or a serial read-only memory. As for the switch closures, they may be limit switches on the carriage of a machine, an over-temperature sensor, a magnetic reed switch, a door or window switch, an air or water pressure switch, or a solid state optical interrupter.
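The software loop just described can be sketched as follows. The GPIO helper names (gpio_write, gpio_read) and the pin labels are hypothetical placeholders, since a real microcontroller would use its own pin-access API; the FakeCD4021 class is only a software stand-in so the routine is runnable:

```python
# Hedged sketch of bit-banging 8 switch bits out of a CD4021B-style
# asynchronous-load PISO shift register.

def read_switches(gpio_write, gpio_read, nbits=8):
    """Load the switches, then clock nbits of serial data out."""
    gpio_write("LD/SH", 1)                 # asynchronous parallel load
    gpio_write("LD/SH", 0)                 # back low to permit shifting
    bits = []
    for _ in range(nbits):
        bits.append(gpio_read("SER_OUT"))  # sample the last-stage output
        gpio_write("CLK", 1)               # rising edge shifts the next bit
        gpio_write("CLK", 0)
    return bits

# Software model of the register (hypothetical, for illustration only):
class FakeCD4021:
    def __init__(self, switches):          # open switch = 1 (pull-up)
        self.sw, self.reg = switches, [0] * len(switches)
    def write(self, pin, level):
        if pin == "LD/SH" and level:
            self.reg = list(self.sw)               # load P1..P8 at once
        elif pin == "CLK" and level:
            self.reg = [0] + self.reg[:-1]         # shift toward last stage
    def read(self, pin):
        return self.reg[-1]                        # serial output stage

dev = FakeCD4021([1, 0, 1, 1, 0, 0, 1, 0])
bits = read_switches(dev.write, dev.read)
print(bits)                                # last-stage bit arrives first
```

Note that the bits arrive last-stage first, so the software may need to reverse them depending on how the switches are numbered.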
A serial-in, parallel-out shift register is similar to the serial-in, serial-out shift register in that it shifts data into internal storage elements and shifts data out at the serial-out, data-out, pin. It is different in that it makes all the internal stages available as outputs. Therefore, a serial-in, parallel-out shift register converts data from serial format to parallel format.

An Example of Using a Serial-in, Parallel-out Shift Register

If four data bits are shifted in by four clock pulses via a single wire at data-in, below, the data becomes available simultaneously on the four outputs QA to QD after the fourth clock pulse. The practical application of the serial-in, parallel-out shift register is to convert data from serial format on a single wire to parallel format on multiple wires. Let’s illuminate four LEDs (light emitting diodes) with the four outputs (QA QB QC QD). The above details of the serial-in, parallel-out shift register are fairly simple. It looks like a serial-in, serial-out shift register with taps added to each stage output. Serial data shifts in at SI (Serial Input). After a number of clocks equal to the number of stages, the first data bit in appears at SO (QD) in the above figure. In general, there is no SO pin. The last stage (QD above) serves as SO and is cascaded to the next package if it exists.

Serial-in, Parallel-out vs. Serial-in, Serial-out Shift Register

If a serial-in, parallel-out shift register is so similar to a serial-in, serial-out shift register, why do manufacturers bother to offer both types? Why not just offer the serial-in, parallel-out shift register? The answer is that they only offer the serial-in, parallel-out shift register as long as it has no more than 8-bits. Note that serial-in, serial-out shift registers come in longer lengths of 18 to 64-bits. It is not practical to offer a 64-bit serial-in, parallel-out shift register requiring that many output pins.
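The serial-to-parallel conversion described above can be sketched in software. This is my own minimal model of a 4-stage register (the 1011 pattern is just an example):

```python
# Four bits shifted in on a single wire become a 4-bit parallel word
# after the fourth clock pulse.

serial_in = [1, 0, 1, 1]          # presented at SI, first bit first
qa = qb = qc = qd = 0             # register cleared beforehand
for bit in serial_in:
    qa, qb, qc, qd = bit, qa, qb, qc   # one right shift per clock pulse

# The first bit in has reached the last stage; the parallel word is ready:
print((qd, qc, qb, qa))
```

After the fourth clock the first bit shifted in sits at QD, the last stage, so the word read as (QD QC QB QA) matches the serial input order.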
See waveforms below for the above shift register. The shift register has been cleared prior to any data by CLR’, an active low signal, which clears all type D Flip-Flops within the shift register. Note the serial data 1011 pattern presented at the SI input. This data is synchronized with the clock CLK. This would be the case if it is being shifted in from something like another shift register, for example, a parallel-in, serial-out shift register (not shown here). On the first clock at t1, the data 1 at SI is shifted from D to Q of the first shift register stage. After t2 this first data bit is at QB. After t3 it is at QC. After t4 it is at QD. Four clock pulses have shifted the first data bit all the way to the last stage QD. The second data bit, a 0, is at QC after the 4th clock. The third data bit, a 1, is at QB. The fourth data bit, another 1, is at QA. Thus, the serial data input pattern 1011 is contained in (QD QC QB QA). It is now available on the four outputs. It will be available on the four outputs from just after clock t4 to just before t5. This parallel data must be used or stored between these two times, or it will be lost due to shifting out the QD stage on following clocks t5 to t8 as shown above.

Serial-in, Parallel-out Devices

Let’s take a closer look at serial-in, parallel-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets, follow the links.

• SN74ALS164A serial-in/ parallel-out 8-bit shift register [*]
• SN74AHC594 serial-in/ parallel-out 8-bit shift register with output register [*]
• SN74AHC595 serial-in/ parallel-out 8-bit shift register with output register [*]
• CD4094 serial-in/ parallel-out 8-bit shift register with output register [*]

The 74ALS164A is almost identical to our prior diagram with the exception of the two serial inputs A and B. The unused input should be pulled high to enable the other input. We do not show all the stages above.
However, all the outputs are shown on the ANSI symbol below, along with the pin numbers. The CLK input to the control section of the above ANSI symbol has two internal functions. The first is C1, control of anything with a prefix of 1; this would be clocking in of data at 1D. The second function, the arrow after the slash (/), is right (down) shifting of data within the shift register. The eight outputs are available to the right of the eight registers below the control section. The first stage is wider than the others to accommodate the A&B input. The above internal logic diagram is adapted from the TI (Texas Instruments) data sheet for the 74AHC594. The type “D” FFs in the top row comprise a serial-in, parallel-out shift register. This section works like the previously described devices. The outputs (QA’ QB’ to QH’) of the shift register half of the device feed the type “D” FFs in the lower half in parallel. QH’ (pin 9) is shifted out to any optional cascaded device package. A single positive clock edge at RCLK will transfer the data from D to Q of the lower FFs. All 8-bits transfer in parallel to the output register (a collection of storage elements). The purpose of the output register is to maintain a constant data output while new data is being shifted into the upper shift register section. This is necessary if the outputs drive relays, valves, motors, solenoids, horns, or buzzers. This feature may not be necessary when driving LEDs, as long as flicker during shifting is not a problem. Note that the 74AHC594 has separate clocks for the shift register (SRCLK) and the output register (RCLK). Also, the shifter may be cleared by SRCLR’ and the output register by RCLR’. It is desirable to put the outputs in a known state at power-on, in particular if driving relays, motors, etc. The waveforms below illustrate shifting and latching of data. The above waveforms show shifting of 4-bits of data into the first four stages of the 74AHC594, then the parallel transfer to the output register.
In actual fact, the 74AHC594 is an 8-bit shift register, and it would take 8 clocks to shift in 8-bits of data, which would be the normal mode of operation. However, the 4-bits we show save space and adequately illustrate the operation. We clear the shift register half a clock prior to t0 with SRCLR’=0. SRCLR’ must be released back high prior to shifting. Just prior to t0 the output register is cleared by RCLR’=0. It, too, is released (RCLR’=1). Serial data 1011 is presented at the SI pin between clocks t0 and t4. It is shifted in by clocks t1, t2, t3, t4, appearing at internal shift stages QA’ QB’ QC’ QD’. This data is present at these stages between t4 and t5. After t5 the desired data (1011) will be unavailable on these internal shifter stages. Between t4 and t5 we apply a positive going RCLK, transferring data 1011 to register outputs QA QB QC QD. This data will be frozen here as more data (0s) shifts in during the succeeding SRCLKs (t5 to t8). There will not be a change in data here until another RCLK is applied.

The 74AHC595 is identical to the ‘594 except that the RCLR’ is replaced by an OE’ enabling a tri-state buffer at the output of each of the eight output register bits. Though the output register cannot be cleared, the outputs may be disconnected by OE’=1. This would allow external pull-up or pull-down resistors to force any relay, solenoid, or valve drivers to a known state during a system power-up. Once the system is powered-up and, say, a microprocessor has shifted and latched data into the ‘595, the output enable could be asserted (OE’=0) to drive the relays, solenoids, and valves with valid data, but not before that time. Above are the proposed ANSI symbols for these devices. C3 clocks data into the serial input (external SER) as indicated by the 3 prefix of 2,3D. The arrow after C3/ indicates shifting right (down) of the shift register, the 8-stages to the left of the ‘595 symbol below the control section.
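The two-section behavior just described (an internal shifter on SRCLK plus a separate output register on RCLK) can be sketched as a small simulation. The class name and the 4-bit width are illustrative choices to match the text's shortened example, not part of the actual device.

```python
# Sketch of the 74AHC594's two sections (names invented for illustration):
# an internal shift register clocked by SRCLK, and an output register that
# copies the shifter only on a rising RCLK edge, freezing the pins meanwhile.

class Sketch594:
    def __init__(self, bits=4):            # 4 bits, matching the text's example
        self.shifter = [0] * bits          # internal stages QA' .. QD'
        self.outputs = [0] * bits          # output register pins QA .. QD

    def srclk(self, serial_in):            # one SRCLK rising edge
        self.shifter = [serial_in] + self.shifter[:-1]

    def rclk(self):                        # one RCLK rising edge
        self.outputs = list(self.shifter)

dev = Sketch594()
for bit in [1, 0, 1, 1]:                   # shift in 1011 (clocks t1..t4)
    dev.srclk(bit)
assert dev.outputs == [0, 0, 0, 0]         # pins stay frozen while shifting
dev.rclk()                                 # transfer 1011 to the output register
for _ in range(4):                         # more shifting (t5..t8) of 0s
    dev.srclk(0)
print(dev.outputs)                         # still [1, 1, 0, 1]: data is latched
```

This mirrors the waveforms: the shifter contents are lost after t5, but the RCLK edge between t4 and t5 has already frozen 1011 on the outputs.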
The 2 prefix of 2,3D and 2D indicates that these stages can be reset by R2 (external SRCLR’). The 1 prefix of 1,4D on the ‘594 indicates that R1 (external RCLR’) may reset the output register, which is to the right of the shift register section. The ‘595, which has an EN at external OE’, cannot reset the output register. But the EN enables the tristate (inverted triangle) output buffers. The right pointing triangle of both the ‘594 and ‘595 indicates internal buffering. Both the ‘594 and ‘595 output registers are clocked by C4 as indicated by the 4 of 1,4D and 4D respectively.

The CD4094B is a 3 to 15 VDC capable latching shift register alternative to the previous 74AHC594 devices. CLOCK, C1, shifts data in at SERIAL IN as implied by the 1 prefix of 1D. It is also the clock of the right shifting shift register (left half of the symbol body) as indicated by the /(right-arrow) of C1/(arrow) at the CLOCK input. STROBE, C2, is the clock for the 8-bit output register to the right of the symbol body. The 2 of 2D indicates that C2 is the clock for the output register. The inverted triangle in the output latch indicates that the output is tristated, being enabled by EN3. The 3 preceding the inverted triangle and the 3 of EN3 are often omitted, as any enable (EN) is understood to control the tristate outputs. QS and QS’ are non-latched outputs of the shift register stage. QS could be cascaded to SERIAL IN of a succeeding device.

Practical Applications

A real-world application of the serial-in, parallel-out shift register is to output data from a microprocessor to a remote panel indicator, or to another remote output device which accepts serial format data. The figure “Alarm with remote key pad” is repeated here from the parallel-in, serial-out section with the addition of the remote display. Thus, we can display, for example, the status of the alarm loops connected to the main alarm box. If the alarm detects an open window, it can send serial data to the remote display to let us know.
Both the keypad and the display would likely be contained within the same remote enclosure, separate from the main alarm box. However, we will only look at the display panel in this section. If the display were on the same board as the alarm, we could just run eight wires to the eight LEDs along with two wires for power and ground. These eight wires are much less desirable on a long run to a remote panel. Using shift registers, we only need to run five wires: clock, serial data, a strobe, power, and ground. If the panel were just a few inches away from the main board, it might still be desirable to cut down on the number of wires in a connecting cable to improve reliability. Also, we sometimes use up most of the available pins on a microprocessor and need to use serial techniques to expand the number of outputs. Some integrated circuit output devices, such as Digital to Analog converters, contain serial-in, parallel-out shift registers to receive data from microprocessors. The techniques illustrated here are applicable to those parts.

We have chosen the 74AHC594 serial-in, parallel-out shift register with output register, though it requires an extra pin, RCLK, to parallel load the shifted-in data to the output pins. This extra pin prevents the outputs from changing while data is shifting in. This is not much of a problem for LEDs. But, it would be a problem if driving relays, valves, motors, etc. Code executed within the microprocessor would start with 8-bits of data to be output. One bit would be output on the “Serial data out” pin, driving SER of the remote 74AHC594. Next, the microprocessor generates a low to high transition on “Shift clock”, driving SRCLK of the ‘594 shift register. This positive clock shifts the data bit at SER from “D” to “Q” of the first shift register stage. This has no effect on the QA LED at this time because of the internal 8-bit output register between the shift register and the output pins (QA to QH).
Finally, “Shift clock” is pulled back low by the microprocessor. This completes the shifting of one bit into the ‘594. The above procedure is repeated seven more times to complete the shifting of 8-bits of data from the microprocessor into the 74AHC594 serial-in, parallel-out shift register. To transfer the 8-bits of data within the internal ‘594 shift register to the output requires that the microprocessor generate a low to high transition on RCLK, the output register clock. This applies new data to the LEDs. The RCLK needs to be pulled back low in anticipation of the next 8-bit transfer of data. The data present at the output of the ‘594 will remain until the process in the above two paragraphs is repeated for a new 8-bits of data. In particular, new data can be shifted into the ‘594 internal shift register without affecting the LEDs. The LEDs will only be updated with new data with the application of the RCLK rising edge.

What if we need to drive more than eight LEDs? Simply cascade another 74AHC594, connecting its SER pin to the QH’ cascade output of the existing shifter. Parallel the SRCLK and RCLK pins. The microprocessor would need to transfer 16-bits of data with 16 clocks before generating an RCLK feeding both devices. The discrete LED indicators, which we show, could be 7-segment LEDs, though there are LSI (Large Scale Integration) devices capable of driving several 7-segment digits. Such a device accepts data from a microprocessor in a serial format, driving more LED segments than it has pins by multiplexing the LEDs. For example, see the link below for the MAX6955 datasheet.
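The microprocessor routine just described (set SER, pulse SRCLK eight times, then pulse RCLK) can be sketched as follows. The function and class names are invented, and the GPIO writes are replaced by calls into a simulated ‘594; on real hardware each call would be a pin write.

```python
# Hedged sketch of the bit-bang routine from the text, against a simulated '594.

class Sim594:
    def __init__(self):
        self.shifter, self.outputs = [0] * 8, [0] * 8
    def srclk(self, bit):                       # SER presented, SRCLK low->high
        self.shifter = [bit] + self.shifter[:-1]
    def rclk(self):                             # RCLK low->high latches outputs
        self.outputs = list(self.shifter)

def shift_out_byte(dev, byte):
    """Shift 8 bits MSB-first into the '594, then latch them to the LEDs."""
    for i in range(7, -1, -1):
        bit = (byte >> i) & 1                   # present the bit on SER
        dev.srclk(bit)                          # pulse SRCLK (high, then low)
    dev.rclk()                                  # pulse RCLK to update the LEDs

dev = Sim594()
shift_out_byte(dev, 0b10110001)
print(dev.outputs)      # QA..QH hold bits 0..7 of the byte (MSB lands at QH)
```

Shifting MSB-first means the first bit clocked in travels all the way to QH, exactly as the first serial bit reached the last stage in the earlier waveform walkthrough.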
The purpose of the parallel-in/ parallel-out shift register is to take in parallel data, shift it, then output it as shown below. A universal shift register is a do-everything device in addition to the parallel-in/ parallel-out function. Above we apply four bits of data to a parallel-in/ parallel-out shift register at DA DB DC DD. The mode control, which may be multiple inputs, controls parallel loading vs shifting. The mode control may also control the direction of shifting in some real devices. The data will be shifted one bit position for each clock pulse. The shifted data is available at the outputs QA QB QC QD. The “data in” and “data out” are provided for cascading of multiple stages. Though, above, we can only cascade data for right shifting. We could accommodate cascading of left-shift data by adding a pair of left pointing signals, “data in” and “data out”, above.

The internal details of a right shifting parallel-in/ parallel-out shift register are shown below. The tri-state buffers are not strictly necessary to the parallel-in/ parallel-out shift register, but are part of the real-world device shown below. The 74LS395 so closely matches our concept of a hypothetical right shifting parallel-in/ parallel-out shift register that we use an overly simplified version of the data sheet details above. See the link to the full data sheet for more details, later in this chapter. LD/SH’ controls the AND-OR multiplexer at the data input to the FFs. If LD/SH’=1, the upper four AND gates are enabled, allowing application of parallel inputs DA DB DC DD to the four FF data inputs. Note the inverter bubble at the clock input of the four FFs. This indicates that the 74LS395 clocks data on the negative going clock, which is the high to low transition. The four bits of data will be clocked in parallel from DA DB DC DD to QA QB QC QD at the next negative going clock.
In this “real part”, OC’ must be low if the data needs to be available at the actual output pins as opposed to only on the internal FFs. The previously loaded data may be shifted right by one bit position if LD/SH’=0 for the succeeding negative going clock edges. Four clocks would shift the data entirely out of our 4-bit shift register. The data would be lost unless our device was cascaded from QD to SER of another device. Above, a data pattern is presented to inputs DA DB DC DD. The pattern is loaded to QA QB QC QD. Then it is shifted one bit to the right. The incoming data is indicated by X, meaning that we do not know what it is. If the input (SER) were grounded, for example, we would know what data (0) was shifted in. Also shown is right shifting by two positions, requiring two clocks.

The above figure serves as a reference for the hardware involved in right shifting of data. It is too simple to even bother with this figure, except for comparison to more complex figures to follow. Right shifting of data is provided above for reference to the previous right shifter. If we need to shift left, the FFs need to be rewired. Compare to the previous right shifter. Also, SI and SO have been reversed. SI shifts to QC. QC shifts to QB. QB shifts to QA. QA leaves on the SO connection, where it could cascade to another shifter SI. This left shift sequence is backwards from the right shift sequence. Above we shift the same data pattern left by one bit.

There is one problem with the “shift left” figure above. There is no market for it. Nobody manufactures a shift-left part. A “real device” which shifts one direction can be wired externally to shift the other direction. Or, should we say, there is no left or right in the context of a device which shifts in only one direction. However, there is a market for a device which will shift left or right on command by a control line. Of course, left and right are valid in that context.
What we have above is a hypothetical shift register capable of shifting either direction under the control of L’/R. It is set up with L’/R=1 to shift the normal direction, right. L’/R=1 enables the multiplexer AND gates labeled R. This allows data to follow the path illustrated by the arrows, when a clock is applied. The connection path is the same as the “too simple” “shift right” figure above. Data shifts in at SR, to QA, to QB, to QC, where it leaves at SR cascade. This pin could drive SR of another device to the right. What if we change L’/R to L’/R=0? With L’/R=0, the multiplexer AND gates labeled L are enabled, yielding a path, shown by the arrows, the same as the above “shift left” figure. Data shifts in at SL, to QC, to QB, to QA, where it leaves at SL cascade. This pin could drive SL of another device to the left.

The prime virtue of the above two figures illustrating the “shift left/ right register” is simplicity. The operation of the left/right control L’/R=0 is easy to follow. A commercial part needs the parallel data loading implied by the section title. This appears in the figure below. Now that we can shift both left and right via L’/R, let us add SH/LD’, shift/ load, and the AND gates labeled “load” to provide for parallel loading of data from inputs DA DB DC. When SH/LD’=0, AND gates R and L are disabled, and AND gates “load” are enabled to pass data DA DB DC to the FF data inputs. The next clock CLK will clock the data to QA QB QC. As long as the same data is present it will be re-loaded on succeeding clocks. However, data present for only one clock will be lost from the outputs when it is no longer present on the data inputs. One solution is to load the data on one clock, then proceed to shift on the next four clocks. This problem is remedied in the 74ALS299 by the addition of another AND gate to the multiplexer.
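The L’/R multiplexer behavior above can be sketched as a one-function simulation. The function name and the 3-stage width follow the hypothetical figure, not any real part.

```python
# Sketch of the "shift left/ right register" multiplexer (invented helper):
# L_R = 1 enables the R gates (shift right), L_R = 0 enables the L gates.

def clock_bidir(stages, l_r, sr_in=0, sl_in=0):
    """One clock of a 3-stage [QA, QB, QC] bidirectional shifter."""
    if l_r == 1:                              # right: SR -> QA -> QB -> QC
        return [sr_in] + stages[:-1]
    else:                                     # left:  SL -> QC -> QB -> QA
        return stages[1:] + [sl_in]

reg = [1, 0, 0]                               # QA QB QC
reg = clock_bidir(reg, l_r=1, sr_in=0)        # shift right: 1 moves to QB
reg = clock_bidir(reg, l_r=0, sl_in=1)        # shift left: 1 back to QA, SL=1 enters QC
print(reg)                                    # [1, 0, 1]
```

The last stage's value on each clock is what would appear at the SR (or SL) cascade pin for a following device.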
If SH/LD’ is changed to SH/LD’=1, the AND gates labeled “load” are disabled, allowing the left/ right control L’/R to set the direction of shift on the L or R AND gates. Shifting is as in the previous figures. The only thing needed to produce a viable integrated device is to add the fourth AND gate to the multiplexer, as alluded to for the 74ALS299. This is shown in the next section for that part.

Parallel-in/ parallel-out and universal devices

Let’s take a closer look at parallel-in/ parallel-out shift registers available as integrated circuits, courtesy of Texas Instruments. For complete device data sheets, follow the links.

• SN74LS395A parallel-in/ parallel-out 4-bit shift register [*]
• SN74ALS299 parallel-in/ parallel-out 8-bit universal shift register [*]

We have already looked at the internal details of the SN74LS395A; see the previous figure, 74LS395 parallel-in/ parallel-out shift register with tri-state output. Directly above is the ANSI symbol for the 74LS395. Why only 4-bits, as indicated by SRG4 above? Having both parallel inputs and parallel outputs, in addition to control and power pins, does not allow for any more I/O (Input/Output) bits in a 16-pin DIP (Dual Inline Package). R indicates that the shift register stages are reset by input CLR’ (active low, the inverting half arrow at the input) of the control section at the top of the symbol. OC’, when low (invert arrow again), will enable (EN4) the four tristate output buffers (QA QB QC QD) in the data section. Load/shift’ (LD/SH’) at pin (7) corresponds to internals M1 (load) and M2 (shift). Look for prefixes of 1 and 2 in the rest of the symbol to ascertain what is controlled by these. The negative edge sensitive clock (indicated by the invert arrow at pin-10) C3/2 has two functions. First, the 3 of C3/2 affects any input having a prefix of 3, say 2,3D or 1,3D in the data section. This would be parallel load at A, B, C, D attributed to M1 and C3 for 1,3D.
Second, the 2 of C3/2-right-arrow indicates data clocking wherever 2 appears in a prefix (2,3D at pin-2). Thus we have clocking of data at SER into QA with mode 2. The right arrow after C3/2 accounts for shifting at internal shift register stages QA QB QC QD. The right pointing triangles indicate buffering; the inverted triangle indicates tri-state, controlled by the EN4. Note, all the 4s in the symbol associated with the EN are frequently omitted. Stages QB QC are understood to have the same attributes as QD. QD cascades to the next package’s SER to the right.

The table above, condensed from the ‘299 data sheet, summarizes the operation of the 74ALS299 universal shift/ storage register. Follow the ‘299 link above for full details. The multiplexer gates R, L, and load operate as in the previous “shift left/ right register” figures. The difference is that the mode inputs S1 and S0 select shift left, shift right, and load with mode set to S1 S0 = 01, 10, and 11 respectively, as shown in the table, enabling multiplexer gates L, R, and load respectively. See table. A minor difference is the parallel load path from the tri-state outputs. Actually, the tri-state buffers are (must be) disabled by S1 S0 = 11 to float the I/O bus for use as inputs. A bus is a collection of similar signals. The inputs are applied to A, B through H (same pins as QA, QB, through QH), routed to the load gate in the multiplexers, and on to the D inputs of the FFs. Data is parallel loaded on a clock pulse.

The one new multiplexer gate is the AND gate labeled hold, enabled by S1 S0 = 00. The hold gate enables a path from the Q output of the FF back to the hold gate, to the D input of the same FF. The result is that with mode S1 S0 = 00, the output is continuously re-loaded with each new clock pulse. Thus, data is held. This is summarized in the table. To read data from outputs QA, QB, through QH, the tri-state buffers must be enabled by OE2’, OE1’ = 00 and mode = S1 S0 = 00, 01, or 10.
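The four-way multiplexer described by the mode table (hold 00, shift left 01, shift right 10, load 11) can be sketched directly. The helper name is invented, and 4 bits are shown instead of the ‘299's 8 to keep the example short.

```python
# Sketch of the 74ALS299 mode multiplexer per the table in the text:
# S1 S0 = 00 hold, 01 shift left, 10 shift right, 11 parallel load.

def clock_299(q, s1, s0, sr=0, sl=0, data=None):
    """One clock of a 4-bit model of the '299 (invented helper)."""
    mode = (s1, s0)
    if mode == (0, 0):
        return list(q)                # hold: each FF re-loads its own Q
    if mode == (0, 1):
        return q[1:] + [sl]           # shift left toward QA, SL enters last stage
    if mode == (1, 0):
        return [sr] + q[:-1]          # shift right toward QH, SR enters first stage
    return list(data)                 # load: I/O pins drive the D inputs

q = clock_299([0] * 4, 1, 1, data=[1, 0, 1, 1])   # parallel load 1011
q = clock_299(q, 1, 0, sr=0)                      # shift right once
q = clock_299(q, 0, 0)                            # hold: state unchanged
print(q)                                          # [0, 1, 0, 1]
```

Note the hold mode re-loads the same data on every clock, matching the feedback path through the hold AND gate described above.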
That is, mode is anything except load. See second table. Right shift data from a package to the left shifts in on the SR input. Any data shifted out to the right from stage QH cascades to the right via QH’. This output is unaffected by the tri-state buffers. The shift right sequence for S1 S0 = 10 is: SR > QA > QB > QC > QD > QE > QF > QG > QH (QH’) Left shift data from a package to the right shifts in on the SL input. Any data shifted out to the left from stage QA cascades to the left via QA’, also unaffected by the tri-state buffers. The shift left sequence for S1 S0 = 01 is: (QA’) QA < QB < QC < QD < QE < QF < QG < QH (SL) Shifting may take place with the tri-state buffers disabled by one of OE2’ or OE1’ = 1, though the register contents will not be accessible at the outputs. See table.

The “clean” ANSI symbol for the SN74ALS299 parallel-in/ parallel-out 8-bit universal shift register with tri-state output is shown for reference above. The annotated version of the ANSI symbol is shown to clarify the terminology contained therein. Note that the ANSI mode (S0 S1) is reversed from the order (S1 S0) used in the previous table. That reverses the decimal mode numbers (1 & 2). In any event, we are in complete agreement with the official data sheet, copying this inconsistency.

Practical applications

The Alarm with remote keypad block diagram is repeated below. Previously, we built the keypad reader and the remote display as separate units. Now we will combine both the keypad and display into a single unit using a universal shift register. Though separate in the diagram, the keypad and display are both contained within the same remote enclosure. We will parallel load the keyboard data into the shift register on a single clock pulse, then shift it out to the main alarm box. At the same time, we will shift LED data from the main alarm to the remote shift register to illuminate the LEDs.
We will be simultaneously shifting keyboard data out and LED data into the shift register. Eight LEDs and current limiting resistors are connected to the eight I/O pins of the 74ALS299 universal shift register. The LEDs can only be driven during mode 3 with S1=0 S0=0. The OE1’ and OE2’ tristate enables are grounded to permanently enable the tristate outputs during modes 0, 1, 2. That will cause the LEDs to light (flicker) during shifting. If this were a problem, the OE1’ and OE2’ could be ungrounded and paralleled with S1 and S0 respectively to only enable the tristate buffers and light the LEDs during hold, mode 3. Let’s keep it simple for this example. During parallel loading, S0=1, inverted to a 0, enables the octal tristate buffers to ground the switch wipers. The upper, open switch contacts are pulled up to logic high by the resistor-LED combination at the eight inputs. Any switch closure will short the input low. We parallel load the switch data into the ‘299 at clock t0 when both S0 and S1 are high. See waveforms below.

Once S0 goes low, eight clocks (t0 to t8) shift switch closure data out of the ‘299 via the QH pin. At the same time, new LED data is shifted in at SR of the ‘299 by the same eight clocks. The LED data replaces the switch closure data as shifting proceeds. After the 8th shift clock, t8, S1 goes low to yield hold mode (S1 S0 = 00). The data in the shift register remains the same even if there are more clocks, for example, t9, t10, etc. Where do the waveforms come from? They could be generated by a microprocessor if the clock rate were not over 100 kHz, in which case, it would be inconvenient to generate any clocks after t8. If the clock were in the megahertz range, the clock would run continuously. The clock, S1 and S0 would be generated by digital logic, not shown here.
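The simultaneous exchange just described can be sketched as a simulation: switch data is parallel-loaded, then eight right-shift clocks push it out at QH while LED data enters at SR. The data values below are arbitrary examples, and the model assumes the main alarm box samples QH just before each shift clock edge.

```python
# Sketch of the keypad/LED data exchange through one 8-bit '299 (values invented).

switches = [1, 0, 0, 1, 0, 1, 1, 0]   # parallel-loaded at t0 (load mode)
led_data = [1, 1, 0, 0, 1, 0, 1, 0]   # presented serially at SR on the next 8 clocks

reg = list(switches)                   # register contents after the load clock
captured = []                          # bits the main alarm box reads at QH
for bit in led_data:                   # eight shift-right clocks
    captured.append(reg[-1])           # QH sampled before the clock edge
    reg = [bit] + reg[:-1]             # LED bit enters at SR, data moves toward QH

print(captured)                        # switch data, last stage out first
print(reg)                             # now holds the LED data (hold mode follows)
```

After t8 the mode drops to hold, so the LED pattern now in the register stays on the outputs regardless of further clocks, exactly as in the waveform description.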
If the output of a shift register is fed back to the input, a ring counter results. The data pattern contained within the shift register will recirculate as long as clock pulses are applied. For example, the data pattern will repeat every four clock pulses in the figure below. However, we must load a data pattern. All 0s or all 1s doesn’t count. Is a continuous logic level from such a condition useful? We make provisions for loading data into the parallel-in/ serial-out shift register configured as a ring counter below. Any random pattern may be loaded. The most generally useful pattern is a single 1. Loading binary 1000 into the ring counter, above, prior to shifting yields a viewable pattern. The data pattern for a single stage repeats every four clock pulses in our 4-stage example. The waveforms for all four stages look the same, except for the one clock time delay from one stage to the next. See figure below.

The circuit above is a divide by 4 counter. Comparing the clock input to any one of the outputs shows a frequency ratio of 4:1. How many stages would we need for a divide by 10 ring counter? Ten stages would recirculate the 1 every 10 clock pulses. An alternate method of initializing the ring counter to 1000 is shown above. The shift waveforms are identical to those above, repeating every fourth clock pulse. The requirement for initialization is a disadvantage of the ring counter over a conventional counter. At a minimum, it must be initialized at power-up since there is no way to predict what state flip-flops will power up in. In theory, initialization should never be required again. In actual practice, the flip-flops could eventually be corrupted by noise, destroying the data pattern. A “self correcting” counter, like a conventional synchronous binary counter, would be more reliable. The above binary synchronous counter needs only two stages, but requires decoder gates.
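The divide-by-4 action can be sketched in a few lines. This is a hypothetical model of the 4-stage ring counter above: the last stage QD feeds straight back into QA, so any one output is high once every four clocks.

```python
# Sketch: a 4-stage ring counter loaded with 1000; QD feeds back to QA.

reg = [1, 0, 0, 0]                    # QA QB QC QD, loaded before shifting
history = []
for _ in range(8):
    history.append(reg[0])            # watch QA over eight clocks
    reg = [reg[-1]] + reg[:-1]        # feedback: QD -> QA, shift right

print(history)   # [1, 0, 0, 0, 1, 0, 0, 0]: QA repeats every four clocks
```

Each of the other outputs shows the same waveform delayed by one clock, which is the "self decoding" property: no decoder gates are needed to get a 1-of-4 output.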
The ring counter had more stages, but was self decoding, saving the decode gates above. Another disadvantage of the ring counter is that it is not “self starting”. If we need the decoded outputs, the ring counter looks attractive, in particular, if most of the logic is in a single shift register package. If not, the conventional binary counter is less complex without the decoder. The waveforms decoded from the synchronous binary counter are identical to the previous ring counter waveforms. The counter sequence is (QA QB) = (00 01 10 11).

Johnson counters

The switch-tail ring counter, also known as the Johnson counter, overcomes some of the limitations of the ring counter. Like a ring counter, a Johnson counter is a shift register fed back on itself. It requires half the stages of a comparable ring counter for a given division ratio. If the complement output of a ring counter is fed back to the input instead of the true output, a Johnson counter results. The difference between a ring counter and a Johnson counter is which output of the last stage is fed back (Q or Q’). Carefully compare the feedback connection below to the previous ring counter. This “reversed” feedback connection has a profound effect upon the behavior of the otherwise similar circuits. Recirculating a single 1 around a ring counter divides the input clock by a factor equal to the number of stages, whereas a Johnson counter divides by a factor equal to twice the number of stages. For example, a 4-stage ring counter divides by 4. A 4-stage Johnson counter divides by 8. Start a Johnson counter by clearing all stages to 0s before the first clock. This is often done at power-up time. Referring to the figure below, the first clock shifts three 0s from (QA QB QC) to the right into (QB QC QD). The 1 at QD’ (the complement of QD) is shifted back into QA. Thus, we start shifting 1s to the right, replacing the 0s.
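The effect of feeding back the complement can be sketched by changing one line of the earlier ring counter model: QA now receives QD inverted, and the period doubles to eight clocks for four stages.

```python
# Sketch: a 4-stage Johnson counter; the COMPLEMENT of QD feeds back to QA,
# so four stages divide by 8 instead of 4.

reg = [0, 0, 0, 0]                        # cleared before the first clock
states = []
for _ in range(8):
    reg = [1 - reg[-1]] + reg[:-1]        # QD' -> QA, shift right
    states.append(tuple(reg))

for s in states:
    print(s)
# 1000, 1100, 1110, 1111, 0111, 0011, 0001, 0000: four 1s fill in, then four 0s
```

The eight distinct states are exactly the "four 0s then four 1s" recirculating pattern described in the next paragraph of the text.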
Where a ring counter recirculated a single 1, the 4-stage Johnson counter recirculates four 0s then four 1s for an 8-bit pattern, then repeats. The above waveforms illustrate that multi-phase square waves are generated by a Johnson counter. The 4-stage unit above generates four overlapping phases of 50% duty cycle. How many stages would be required to generate a set of three phase waveforms? For example, a three stage Johnson counter, driven by a 360 Hertz clock, would generate three 120° phased square waves at 60 Hertz.

The outputs of the flip-flops in a Johnson counter are easy to decode to a single state. Below, for example, the eight states of a 4-stage Johnson counter are decoded by no more than a two input gate for each of the states. In our example, eight of the two input gates decode the states for our example Johnson counter. No matter how long the Johnson counter, only 2-input decoder gates are needed. Note, we could have used uninverted inputs to the AND gates by changing the gate inputs from true to inverted at the FFs, Q to Q’ (and vice versa). However, we are trying to make the diagram above match the data sheet for the CD4022B, as closely as practical. Above, our four phased square waves QA to QD are decoded to eight signals (G0 to G7), each active during one clock period out of a complete 8-clock cycle. For example, G0 is active high when both QA and QD are low. Thus, pairs of the various register outputs define each of the eight states of our Johnson counter example.

Above is the more complete internal diagram of the CD4022B Johnson counter. See the manufacturer’s data sheet for minor details omitted. The major new addition to the diagram as compared to previous figures is the disallowed state detector composed of the two NOR gates. Take a look at the inset state table. There are 8 permissible states as listed in the table. Since our shifter has four flip-flops, there are a total of 16 states, of which 8 are disallowed.
Those would be the ones not listed in the table. In theory, we will not get into any of the disallowed states as long as the shift register is RESET before first use. However, in the “real world”, after many days of continuous operation due to unforeseen noise, power line disturbances, near lightning strikes, etc., the Johnson counter could get into one of the disallowed states. For high reliability applications, we need to plan for this slim possibility. More serious is the case where the circuit is not cleared at power-up. In this case there is no way to know which of the 16 states the circuit will power up in. Once in a disallowed state, the Johnson counter will not return to any of the permissible states without intervention. That is the purpose of the NOR gates. Examine the table for the sequence (QA QB QC) = (010). Nowhere does this sequence appear in the table of allowed states. Therefore (010) is disallowed. It should never occur. If it does, the Johnson counter is in a disallowed state, which it needs to exit to any allowed state. Suppose that (QA QB QC) = (010). The second NOR gate will replace QB = 1 with a 0 at the D input to FF QC. In other words, the offending 010 is replaced by 000. And 000, which does appear in the table, will be shifted right. There are many triple-0 sequences in the table. This is how the NOR gates get the Johnson counter out of a disallowed state to an allowed state. Not all disallowed states contain a 010 sequence. However, after a few clocks, this sequence will appear, so that any disallowed state will eventually be escaped. If the circuit is powered-up without a RESET, the outputs will be unpredictable for a few clocks until an allowed state is reached. If this is a problem for a particular application, be sure to RESET on power-up.

Johnson counter devices

A pair of integrated circuit Johnson counter devices with the output states decoded is available.
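The self-correction can be sketched by modeling the rule the text gives: when (QA QB QC) = (010), the D input of QC receives a 0 instead of QB. This is a behavioral sketch of that rule, not the exact gate netlist of the CD4022B.

```python
# Sketch of the disallowed-state correction described in the text: the
# forbidden (QA QB QC) = (0 1 0) pattern forces a 0 into QC's D input,
# turning 010 into 000, which lies on the allowed sequence.

def clock_corrected_johnson(q):
    qa, qb, qc, qd = q
    d_qc = 0 if (qa, qb, qc) == (0, 1, 0) else qb   # the correction rule
    return [1 - qd, qa, d_qc, qc]                   # QD' -> QA, shift right

# Build the set of 8 allowed states by clocking from the cleared state.
s, allowed = [0, 0, 0, 0], set()
for _ in range(8):
    s = clock_corrected_johnson(s)
    allowed.add(tuple(s))

# Start in a disallowed state containing 010 and watch it rejoin the sequence.
q = [0, 1, 0, 1]
for _ in range(8):
    q = clock_corrected_johnson(q)
print(tuple(q) in allowed)   # the counter has escaped to an allowed state
```

Note the correction never triggers during normal operation, since no allowed state contains the (0 1 0) pattern in its first three stages.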
We have already looked at the CD4022 internal logic in the discussion of Johnson counters. The 4000 series devices can operate from 3V to 15V power supplies. The 74HC part, designed for TTL compatibility, can operate from a 2V to 6V supply, count faster, and has greater output drive capability. For complete device data sheets, follow the links.

• CD4017 Johnson counter with 10 decoded outputs [*]
• CD4022 Johnson counter with 8 decoded outputs [*]
• 74HC4017 Johnson counter, 10 decoded outputs [*]

The ANSI symbols for the modulo-10 (divide by 10) and modulo-8 Johnson counters are shown above. The symbol takes on the characteristics of a counter rather than a shift register derivative, which it is. Waveforms and operation for the CD4022 modulo-8 counter were shown previously. The CD4017B/ 74HC4017 decade counter is a 5-stage Johnson counter with ten decoded outputs. The operation and waveforms are similar to those of the CD4022. In fact, the CD4017 and CD4022 are both detailed on the same data sheet. See above links. The 74HC4017 is a more modern version of the decade counter. These devices are used where decoded outputs are needed instead of the binary or BCD (Binary Coded Decimal) outputs found on normal counters. By decoded, we mean one line out of the ten lines is active at a time for the ‘4017 in place of the four bit BCD code out of conventional counters. See previous waveforms for 1-of-8 decoding for the ‘4022 octal Johnson counter.

Practical applications

The above Johnson counter shifts a lighted LED each fifth of a second around the ring of ten. Note that the 74HC4017 is used instead of the CD4017 because the former part has more current drive capability. From the data sheet (at the link above), operating at VCC = 5 V, VOH = 4.6 V at 4 mA. In other words, the outputs can supply 4 mA at 4.6 V to drive the LEDs. Keep in mind that LEDs are normally driven with 10 to 20 mA of current, though they are visible down to 1 mA.
This simple circuit illustrates an application of the ‘HC4017. Need a bright display for an exhibit? Then use inverting buffers to drive the cathodes of the LEDs, pulled up to the power supply by lower value anode resistors. The 555 timer, serving as an astable multivibrator, generates a clock frequency determined by R1 R2 C1. This drives the 74HC4017 a step per clock as indicated by a single LED illuminated on the ring. Note, if the 555 does not reliably drive the clock pin of the ‘4017, run it through a single buffer stage between the 555 and the ‘4017. A variable R2 could change the step rate. The value of decoupling capacitor C2 is not critical. A similar capacitor should be applied across the power and ground pins of the ‘4017.

The Johnson counter above generates 3-phase square waves, phased 60° apart with respect to (QA QB QC). However, we need 120° phased waveforms for power applications (see Volume II, AC). Choosing P1=QA P2=QC P3=QB yields the 120° phasing desired. See figure below. If these (P1 P2 P3) are low-pass filtered to sine waves and amplified, this could be the beginnings of a 3-phase power supply. For example, do you need to drive a small 3-phase 400 Hz aircraft motor? Then feed 6 × 400 Hz to the above circuit CLOCK. Note that all these waveforms are 50% duty cycle.

The circuit below produces 3-phase nonoverlapping, less than 50% duty cycle, waveforms for driving 3-phase stepper motors. Above we decode the overlapping outputs QA QB QC to non-overlapping outputs P0 P1 P2 as shown below. These waveforms drive a 3-phase stepper motor after suitable amplification from the milliamp level to the fractional amp level using the ULN2003 drivers shown above, or the discrete component Darlington pair driver shown in the circuit which follows. Not counting the motor driver, this circuit requires three IC (Integrated Circuit) packages: two dual type “D” FF packages and a quad NAND gate.
A single CD4017, above, generates the required 3-phase stepper waveforms in the circuit above by clearing the Johnson counter at count 3. Count 3 persists for less than a microsecond before it clears itself. The other counts (Q0=G0 Q1=G1 Q2=G2) remain for a full clock period each. The Darlington bipolar transistor drivers shown above are a substitute for the internal circuitry of the ULN2003. The design of drivers is beyond the scope of this digital electronics chapter. Either driver may be used with either waveform generator circuit. The above waveforms make the most sense in the context of the internal logic of the CD4017, shown earlier in this section, though the AND gating equations for the internal decoder are also shown. The signals QA QB QC are the Johnson counter’s direct shift register outputs, not available on pin-outs. The QD waveform shows the resetting of the ‘4017 every three clocks. Q0 Q1 Q2, etc., are decoded outputs which actually are available at output pins. Above, we generate waveforms for driving a unipolar stepper motor, which only requires one polarity of driving signal. That is, we do not have to reverse the polarity of the drive to the windings. This simplifies the power driver between the ‘4017 and the motor. Darlington pairs from a prior diagram may be substituted for the ULN2003. Once again, the CD4017B generates the required waveforms with a reset after the terminal count. The decoded outputs Q0 Q1 Q2 Q3 successively drive the stepper motor windings, with Q4 resetting the counter at the end of each group of four pulses.
Connecting digital circuitry to sensor devices is simple if the sensor devices are inherently digital themselves. Switches, relays, and encoders are easily interfaced with gate circuits due to the on/off nature of their signals. However, when analog devices are involved, interfacing becomes much more complex. What is needed is a way to electronically translate analog signals into digital (binary) quantities, and vice versa. An analog-to-digital converter, or ADC, performs the former task while a digital-to-analog converter, or DAC, performs the latter. An ADC inputs an analog electrical signal such as voltage or current and outputs a binary number. In block diagram form, it can be represented as such: A DAC, on the other hand, inputs a binary number and outputs an analog voltage or current signal. In block diagram form, it looks like this: Together, they are often used in digital systems to provide complete interface with analog sensors and output devices for control systems such as those used in automotive engine controls: It is much easier to convert a digital signal into an analog signal than it is to do the reverse. Therefore, we will begin with DAC circuitry and then move to ADC circuitry. 13.02: The R/2nR DAC (Binary-Weighted-Input Digital-to-Analog Converter) What Is an R/2nR DAC Circuit? The R/2nR DAC circuit, otherwise known as the binary-weighted-input DAC, is a variation on the inverting summing op-amp circuit. (Note that “summing” circuits are sometimes also referred to as “summer” circuits.) If you recall, the classic inverting summing circuit is an operational amplifier using negative feedback for controlled gain, with several voltage inputs and one voltage output.
The output voltage is the inverted (opposite polarity) sum of all input voltages: For a simple inverting summing circuit, all resistors must be of equal value. If any of the input resistors were different, the input voltages would have different degrees of effect on the output, and the output voltage would not be a true sum. Example: An R/2nR DAC with Multiple Input Resistor Values Let’s consider, however, intentionally setting the input resistors at different values. Suppose we were to set the input resistor values at multiple powers of two: R, 2R, and 4R, instead of all the same value R: Starting from V1 and going through V3, this would give each input voltage exactly half the effect on the output as the voltage before it. In other words, input voltage V1 has a 1:1 effect on the output voltage (gain of 1), while input voltage V2 has half that much effect on the output (a gain of 1/2), and V3 half of that (a gain of 1/4). These ratios were not arbitrarily chosen: they are the same ratios corresponding to place weights in the binary numeration system. If we drive the inputs of this circuit with digital gates so that each input is either 0 volts or full supply voltage, the output voltage will be an analog representation of the binary value of these three bits. If we chart the output voltages for all eight combinations of binary bits (000 through 111) input to this circuit, we will get the following progression of voltages: Note that with each step in the binary count sequence, there results a 1.25 volt change in the output. This circuit is very easy to simulate using SPICE. In the following simulation, I set up the DAC circuit with a binary input of 110 (note the first node numbers for resistors R1, R2, and R3: a node number of “1” connects it to the positive side of a 5 volt battery, and a node number of “0” connects it to ground). 
The output voltage appears on node 6 in the simulation: We can adjust resistor values in this circuit to obtain output voltages directly corresponding to the binary input. For example, by making the feedback resistor 800 Ω instead of 1 kΩ, the DAC will output -1 volt for the binary input 001, -4 volts for the binary input 100, -7 volts for the binary input 111, and so on. If we wish to expand the resolution of this DAC (add more bits to the input), all we need to do is add more input resistors, holding to the same power-of-two sequence of values: It should be noted that all logic gates must output exactly the same voltages when in the “high” state. If one gate is outputting +5.02 volts for a “high” while another is outputting only +4.86 volts, the analog output of the DAC will be adversely affected. Likewise, all “low” voltage levels should be identical between gates, ideally 0.00 volts exactly. It is recommended that CMOS output gates are used, and that input/feedback resistor values are chosen so as to minimize the amount of current each gate has to source or sink.
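Those output figures are easy to check numerically. The sketch below models the 3-bit binary-weighted summer with idealized 0 V / 5 V logic levels, R = 1 kΩ input resistors, and the 800 Ω feedback resistor mentioned above:

```python
def weighted_dac(bits, v_high=5.0, r=1000.0, rf=800.0):
    """Inverting summer with binary-weighted input resistors R, 2R, 4R
    (R = 1 kOhm) and an 800-ohm feedback resistor, as in the text.
    bits = (b2, b1, b0), MSB first; logic levels idealized at 0 V / 5 V."""
    b2, b1, b0 = bits
    # Each active input pushes current into the op-amp's virtual ground
    i_total = b2 * v_high / r + b1 * v_high / (2 * r) + b0 * v_high / (4 * r)
    return -rf * i_total                     # Vout = -Rf * (summed current)
```

With these values the output lands on -1 volt per binary count: input 001 gives -1 V, 100 gives -4 V, and 111 gives -7 V, matching the figures above.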
13.03: The R/2R DAC The R/2R DAC circuit is an alternative to the binary-weighted-input (R/2nR) DAC which uses fewer unique resistor values. R/2R DAC vs. R/2nR DAC A disadvantage of the former DAC design was its requirement of several different precise input resistor values: one unique value per binary input bit. Manufacture may be simplified if there are fewer different resistor values to purchase, stock, and sort prior to assembly. Of course, we could take the binary-weighted-input DAC circuit and modify it to use a single input resistance value, by connecting multiple resistors together in series: Unfortunately, this approach merely substitutes one type of complexity for another: volume of components over diversity of component values. There is, however, a more efficient design methodology. What Is an R/2R Ladder DAC? By constructing a different kind of resistor network on the input of our summing circuit, we can achieve the same kind of binary weighting with only two kinds of resistor values, and with only a modest increase in resistor count. This “ladder” network looks like this: Mathematically analyzing this ladder network is a bit more complex than for the previous circuit, where each input resistor provided an easily-calculated gain for that bit. For those who are interested in pursuing the intricacies of this circuit further, you may opt to use Thevenin’s theorem for each binary input (remember to consider the effects of the virtual ground), and/or use a simulation program like SPICE to determine circuit response. Either way, you should obtain the following table of figures: 13.04: Flash ADC Also called the parallel A/D converter, this circuit is the simplest to understand. It is formed of a series of comparators, each one comparing the input signal to a unique reference voltage.
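Returning briefly to the R/2R ladder: the Thevenin analysis suggested above can be cross-checked with straight nodal analysis. This sketch assumes a 3-bit ladder (R = 1 kΩ, bit inputs through 2R, far end terminated in 2R) feeding an inverting op-amp summing junction, with a 2R feedback resistor chosen so the step size matches the 1.25-volt steps of the binary-weighted circuit; exact fractions avoid rounding error.

```python
from fractions import Fraction as F

def r2r_dac_3bit(bits, vref=5, r=1000, rf=2000):
    """Nodal analysis of a hypothetical 3-bit R/2R ladder driving an
    inverting op-amp summing junction S (virtual ground at 0 V).
    bits = (b2, b1, b0), MSB first; each input sits at 0 V or vref.
    Topology: S --R-- N1 --R-- N0 --2R-- gnd
              b2 --2R-- S,  b1 --2R-- N1,  b0 --2R-- N0"""
    R, R2 = F(r), F(2 * r)
    v2, v1, v0 = (F(vref) * b for b in bits)
    # KCL at N1: N1/R + (N1 - v1)/2R + (N1 - N0)/R = 0
    # KCL at N0: (N0 - N1)/R + (N0 - v0)/2R + N0/2R = 0
    a, b, e = 2 / R + 1 / R2, -1 / R, v1 / R2   # a*N1 + b*N0 = e
    c, d, f = -1 / R, 2 / R, v0 / R2            # c*N1 + d*N0 = f
    n1 = (e * d - b * f) / (a * d - b * c)
    i_sum = v2 / R2 + n1 / R       # total current into the virtual ground
    return -F(rf) * i_sum          # Vout = -Rf * I
```

Evaluating all eight input codes reproduces the binary-weighted gains of 1, 1/2, and 1/4: the output steps by -1.25 volts per count, down to -8.75 volts for input 111.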
The comparator outputs connect to the inputs of a priority encoder circuit, which then produces a binary output. The following illustration shows a 3-bit flash ADC circuit: Vref is a stable reference voltage provided by a precision voltage regulator as part of the converter circuit, not shown in the schematic. As the analog input voltage exceeds the reference voltage at each comparator, the comparator outputs will sequentially saturate to a high state. The priority encoder generates a binary number based on the highest-order active input, ignoring all other active inputs. When operated, the flash ADC produces an output that looks something like this: For this particular application, a regular priority encoder with all its inherent complexity isn’t necessary. Due to the nature of the sequential comparator output states (each comparator saturating “high” in sequence from lowest to highest), the same “highest-order-input selection” effect may be realized through a set of Exclusive-OR gates, allowing the use of a simpler, non-priority encoder: And, of course, the encoder circuit itself can be made from a matrix of diodes, demonstrating just how simply this converter design may be constructed: Not only is the flash converter the simplest in terms of operational theory, but it is the most efficient of the ADC technologies in terms of speed, being limited only by comparator and gate propagation delays. Unfortunately, it is the most component-intensive for any given number of output bits. This three-bit flash ADC requires seven comparators. A four-bit version would require 15 comparators. With each additional output bit, the number of required comparators roughly doubles (2^n - 1 comparators for n output bits). Considering that eight bits is generally considered the minimum necessary for any practical ADC (255 comparators needed!), the flash methodology quickly shows its weakness. An additional advantage of the flash converter, often overlooked, is the ability for it to produce a non-linear output.
With equal-value resistors in the reference voltage divider network, each successive binary count represents the same amount of analog signal increase, providing a proportional response. For special applications, however, the resistor values in the divider network may be made non-equal. This gives the ADC a custom, nonlinear response to the analog input signal. No other ADC design is able to grant this signal-conditioning behavior with just a few component value changes.
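For the equal-resistor (linear) case, the whole flash conversion reduces to a thermometer code followed by a priority encode, which can be sketched behaviorally as follows; the reference voltage and bit width here are illustrative:

```python
def flash_adc(vin, vref=8.0, bits=3):
    """Behavioral model of a flash ADC: 2**bits - 1 comparators against
    taps of an equal-value resistor divider, with the thermometer-coded
    outputs then priority-encoded."""
    n_comp = 2**bits - 1                     # 7 comparators for 3 bits
    taps = [vref * k / 2**bits for k in range(1, n_comp + 1)]
    outputs = [vin > t for t in taps]        # comparators saturate high in sequence
    # Because the outputs form a thermometer code, the count of active
    # comparators equals the highest-order active input: the binary result.
    return sum(outputs)
```

A nonlinear response would simply replace the evenly-spaced `taps` list with unequal thresholds, mirroring the unequal divider resistors described above.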
Also known as the stairstep-ramp, or simply counter A/D converter, this is also fairly easy to understand but unfortunately suffers from several limitations. The basic idea is to connect the output of a free-running binary counter to the input of a DAC, then compare the analog output of the DAC with the analog input signal to be digitized and use the comparator’s output to tell the counter when to stop counting and reset. The following schematic shows the basic idea: As the counter counts up with each clock pulse, the DAC outputs a slightly higher (more positive) voltage. This voltage is compared against the input voltage by the comparator. If the input voltage is greater than the DAC output, the comparator’s output will be high and the counter will continue counting normally. Eventually, though, the DAC output will exceed the input voltage, causing the comparator’s output to go low. This will cause two things to happen: first, the high-to-low transition of the comparator’s output will cause the shift register to “load” whatever binary count is being output by the counter, thus updating the ADC circuit’s output; secondly, the counter will receive a low signal on the active-low LOAD input, causing it to reset to 00000000 on the next clock pulse. The effect of this circuit is to produce a DAC output that ramps up to whatever level the analog input signal is at, output the binary number corresponding to that level, and start over again. Plotted over time, it looks like this: Note how the time between updates (new digital output values) changes depending on how high the input voltage is. For low signal levels, the updates are rather close-spaced. For higher signal levels, they are spaced further apart in time: For many ADC applications, this variation in update frequency (sample time) would not be acceptable. 
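The digital-ramp conversion just described can be sketched as a behavioral model (idealized; the reference voltage and word size are illustrative, not from the schematic):

```python
def digital_ramp_adc(vin, vref=10.0, bits=8):
    """Counter-plus-DAC conversion: ramp the DAC output up from zero
    until it exceeds the input, then latch the count and reset."""
    for count in range(2**bits):
        dac_out = vref * count / 2**bits     # staircase ramp from the DAC
        if dac_out > vin:                    # comparator output goes low
            return count                     # count latched; counter resets
    return 2**bits - 1                       # input at or above full scale
```

Note that the number of loop iterations, and hence the conversion time, grows with the input level, which is exactly the variable update rate described above.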
This, and the fact that the circuit needs to count all the way up from 0 at the beginning of each count cycle, makes for relatively slow sampling of the analog signal and places the digital-ramp ADC at a disadvantage to other counter strategies. 13.06: Successive Approximation ADC One method of addressing the digital ramp ADC’s shortcomings is the so-called successive-approximation ADC. The only change in this design is a very special counter circuit known as a successive-approximation register. Instead of counting up in binary sequence, this register counts by trying all values of bits starting with the most-significant bit and finishing at the least-significant bit. Throughout the count process, the register monitors the comparator’s output to see if the binary count is less than or greater than the analog signal input, adjusting the bit values accordingly. The way the register counts is identical to the “trial-and-fit” method of decimal-to-binary conversion, whereby different values of bits are tried from MSB to LSB to get a binary number that equals the original decimal number. The advantage to this counting strategy is much faster results: the DAC output converges on the analog signal input in much larger steps than with the 0-to-full count sequence of a regular counter. Without showing the inner workings of the successive-approximation register (SAR), the circuit looks like this: It should be noted that the SAR is generally capable of outputting the binary number in serial (one bit at a time) format, thus eliminating the need for a shift register. Plotted over time, the operation of a successive-approximation ADC looks like this: Note how the updates for this ADC occur at regular intervals, unlike the digital ramp ADC circuit. 13.07: Tracking ADC A third variation on the counter-DAC-based converter theme is, in my estimation, the most elegant. Instead of a regular “up” counter driving the DAC, this circuit uses an up/down counter.
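Stepping back to the successive-approximation register for a moment, its trial-and-fit search is easy to express in code. This is an idealized model: the DAC and comparator are assumed perfect, and the reference and word size are illustrative.

```python
def sar_adc(vin, vref=10.0, bits=8):
    """Successive-approximation search: try each bit from MSB to LSB,
    keeping it only if the trial DAC output does not exceed the input."""
    code = 0
    for k in reversed(range(bits)):
        trial = code | (1 << k)              # tentatively set this bit
        if vref * trial / 2**bits <= vin:    # comparator: DAC vs. input
            code = trial                     # keep the bit
    return code                              # converges in `bits` clock pulses
```

An 8-bit conversion therefore always finishes in exactly eight clock pulses, no matter where the input sits, which is why the updates occur at regular intervals.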
The counter is continuously clocked, and the up/down control line is driven by the output of the comparator. So, when the analog input signal exceeds the DAC output, the counter goes into the “count up” mode. When the DAC output exceeds the analog input, the counter switches into the “count down” mode. Either way, the DAC output always counts in the proper direction to track the input signal. Notice how no shift register is needed to buffer the binary count at the end of a cycle. Since the counter’s output continuously tracks the input (rather than counting to meet the input and then resetting back to zero), the binary output is legitimately updated with every clock pulse. An advantage of this converter circuit is speed, since the counter never has to reset. Note the behavior of this circuit: Note the much faster update time than any of the other “counting” ADC circuits. Also note how at the very beginning of the plot where the counter had to “catch up” with the analog signal, the rate of change for the output was identical to that of the first counting ADC. Also, with no shift register in this circuit, the binary output would actually ramp up rather than jump from zero to an accurate count as it did with the counter and successive approximation ADC circuits. Perhaps the greatest drawback to this ADC design is the fact that the binary output is never stable: it always switches between counts with every clock pulse, even with a perfectly stable analog input signal. This phenomenon is informally known as bit bobble, and it can be problematic in some digital systems. This tendency can be overcome, though, through the creative use of a shift register. For example, the counter’s output may be latched through a parallel-in/parallel-out shift register only when the output changes by two or more steps. Building a circuit to detect two or more successive counts in the same direction takes a little ingenuity, but is worth the effort.
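The tracking behavior, including the “bit bobble” on a steady input, can be sketched as follows (reference voltage and word size illustrative):

```python
def tracking_adc(samples, vref=10.0, bits=8):
    """Up/down counter chasing the input one LSB per clock pulse; the
    comparator drives the count direction and the counter never resets."""
    code, history = 0, []
    for vin in samples:
        dac_out = vref * code / 2**bits
        if vin > dac_out and code < 2**bits - 1:
            code += 1                        # count up toward the input
        elif vin < dac_out and code > 0:
            code -= 1                        # count down toward the input
        history.append(code)
    return history
```

Feeding this a long run of a constant 5.02-volt input shows the initial one-step-per-clock “catch up” ramp, followed by the output hovering between two adjacent codes forever: bit bobble.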
So far, we’ve only been able to escape the sheer volume of components in the flash converter by using a DAC as part of our ADC circuitry. However, this is not our only option. It is possible to avoid using a DAC if we substitute an analog ramping circuit and a digital counter with precise timing. This is the basic idea behind the so-called single-slope, or integrating ADC. Instead of using a DAC with a ramped output, we use an op-amp circuit called an integrator to generate a sawtooth waveform which is then compared against the analog input by a comparator. The time it takes for the sawtooth waveform to exceed the input signal voltage level is measured by means of a digital counter clocked with a precise-frequency square wave (usually from a crystal oscillator). The basic schematic diagram is shown here: The IGFET capacitor-discharging transistor scheme shown here is a bit oversimplified. In reality, a latching circuit timed with the clock signal would most likely have to be connected to the IGFET gate to ensure full discharge of the capacitor when the comparator’s output goes high. The basic idea, however, is evident in this diagram. When the comparator output is low (input voltage greater than integrator output), the integrator is allowed to charge the capacitor in a linear fashion. Meanwhile, the counter is counting up at a rate fixed by the precision clock frequency. The time it takes for the capacitor to charge up to the same voltage level as the input depends on the input signal level and the combination of -Vref, R, and C. When the capacitor reaches that voltage level, the comparator output goes high, loading the counter’s output into the shift register for a final output. The IGFET is triggered “on” by the comparator’s high output, discharging the capacitor back to zero volts. When the integrator output voltage falls to zero, the comparator output switches back to a low state, clearing the counter and enabling the integrator to ramp up voltage again.
This ADC circuit behaves very much like the digital ramp ADC, except that the comparator reference voltage is a smooth sawtooth waveform rather than a “stairstep:” The single-slope ADC suffers all the disadvantages of the digital ramp ADC, with the added drawback of calibration drift. The accurate correspondence of this ADC’s output with its input is dependent on the voltage slope of the integrator being matched to the counting rate of the counter (the clock frequency). With the digital ramp ADC, the clock frequency had no effect on conversion accuracy, only on update time. In this circuit, since the rate of integration and the rate of count are independent of each other, variation between the two is inevitable as the circuit ages, and will result in a loss of accuracy. The only good thing to say about this circuit is that it avoids the use of a DAC, which reduces circuit complexity. An answer to this calibration drift dilemma is found in a design variation called the dual-slope converter. In the dual-slope converter, an integrator circuit is driven positive and negative in alternating cycles to ramp down and then up, rather than being reset to 0 volts at the end of every cycle. In one direction of ramping, the integrator is driven by the positive analog input signal (producing a negative, variable rate of output voltage change, or output slope) for a fixed amount of time, as measured by a counter with a precision frequency clock. Then, in the other direction, the integrator is driven by a fixed reference voltage (producing a fixed rate of output voltage change), with time measured by the same counter. The counter stops counting when the integrator’s output reaches the same voltage as it was when it started the fixed-time portion of the cycle. The amount of time it takes for the integrator’s capacitor to discharge back to its original output voltage, as measured by the magnitude accrued by the counter, becomes the digital output of the ADC circuit.
The dual-slope method can be thought of analogously in terms of a rotary spring such as that used in a mechanical clock mechanism. Imagine we were building a mechanism to measure the rotary speed of a shaft. Thus, shaft speed is our “input signal” to be measured by this device. The measurement cycle begins with the spring in a relaxed state. The spring is then turned, or “wound up,” by the rotating shaft (input signal) for a fixed amount of time. This places the spring in a certain amount of tension proportional to the shaft speed: a greater shaft speed corresponds to a faster rate of winding and a greater amount of spring tension accumulated over that period of time. After that, the spring is uncoupled from the shaft and allowed to unwind at a fixed rate, the time for it to unwind back to a relaxed state measured by a timer device. The amount of time it takes for the spring to unwind at that fixed rate will be directly proportional to the speed at which it was wound (input signal magnitude) during the fixed-time portion of the cycle. This technique of analog-to-digital conversion escapes the calibration drift problem of the single-slope ADC because both the integrator’s integration coefficient (or “gain”) and the counter’s rate of speed are in effect during the entire “winding” and “unwinding” cycle portions. If the counter’s clock speed were to suddenly increase, this would shorten the fixed time period where the integrator “winds up” (resulting in a lesser voltage accumulated by the integrator), but it would also mean that it would count faster during the period of time when the integrator was allowed to “unwind” at a fixed rate. The proportion that the counter is counting faster will be the same proportion as the integrator’s accumulated voltage is diminished from before the clock speed change. Thus, the clock speed error would cancel itself out and the digital output would be exactly what it should be.
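The clock-speed cancellation can be verified with a little arithmetic. In this idealized sketch (component values hypothetical), the final count works out to n_fixed × vin / vref regardless of the clock frequency:

```python
def dual_slope_count(vin, vref, clock_hz, rc=1.0, n_fixed=1000):
    """Idealized dual-slope cycle: integrate vin for exactly n_fixed
    clock counts ("wind up"), then time the fixed-slope de-integration
    against vref ("unwind") using the same clock."""
    t_fixed = n_fixed / clock_hz             # fixed wind-up time
    v_peak = vin * t_fixed / rc              # integrator voltage accrued
    t_down = v_peak * rc / vref              # unwind time at fixed slope
    return round(t_down * clock_hz)          # count = n_fixed * vin / vref
```

Doubling `clock_hz` halves the voltage accrued during wind-up but doubles the counting rate during unwind, so the reading is unchanged: a 2.5-volt input against a 10-volt reference reads 250 counts at any clock speed.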
Another important advantage of this method is that the input signal becomes averaged as it drives the integrator during the fixed-time portion of the cycle. Any changes in the analog signal during that period of time have a cumulative effect on the digital output at the end of that cycle. Other ADC strategies merely “capture” the analog signal level at a single point in time every cycle. If the analog signal is “noisy” (contains significant levels of spurious voltage spikes/dips), one of the other ADC converter technologies may occasionally convert a spike or dip because it captures the signal repeatedly at a single point in time. A dual-slope ADC, on the other hand, averages together all the spikes and dips within the integration period, thus providing an output with greater noise immunity. Dual-slope ADCs are used in applications demanding high accuracy.
One of the more advanced ADC technologies is the so-called delta-sigma, or ΔΣ (using the proper Greek letter notation). In mathematics and physics, the capital Greek letter delta (Δ) represents difference or change, while the capital letter sigma (Σ) represents summation: the adding of multiple terms together. Sometimes this converter is referred to by the same Greek letters in reverse order: sigma-delta, or ΣΔ. In a ΔΣ converter, the analog input voltage signal is connected to the input of an integrator, producing a voltage rate-of-change, or slope, at the output corresponding to input magnitude. This ramping voltage is then compared against ground potential (0 volts) by a comparator. The comparator acts as a sort of 1-bit ADC, producing 1 bit of output (“high” or “low”) depending on whether the integrator output is positive or negative. The comparator’s output is then latched through a D-type flip-flop clocked at a high frequency, and fed back to another input channel on the integrator, to drive the integrator in the direction of a 0 volt output. The basic circuit looks like this: The leftmost op-amp is the (summing) integrator. The next op-amp the integrator feeds into is the comparator, or 1-bit ADC. Next comes the D-type flip-flop, which latches the comparator’s output at every clock pulse, sending either a “high” or “low” signal to the next comparator at the top of the circuit. This final comparator is necessary to convert the single-polarity 0V / 5V logic level output voltage of the flip-flop into a +V / -V voltage signal to be fed back to the integrator. If the integrator output is positive, the first comparator will output a “high” signal to the D input of the flip-flop. At the next clock pulse, this “high” signal will be output from the Q line into the noninverting input of the last comparator. 
This last comparator, seeing an input voltage greater than the threshold voltage of 1/2 +V, saturates in a positive direction, sending a full +V signal to the other input of the integrator. This +V feedback signal tends to drive the integrator output in a negative direction. If that output voltage ever becomes negative, the feedback loop will send a corrective signal (-V) back around to the top input of the integrator to drive it in a positive direction. This is the delta-sigma concept in action: the first comparator senses a difference (Δ) between the integrator output and zero volts. The integrator sums (Σ) the comparator’s output with the analog input signal. Functionally, this results in a serial stream of bits output by the flip-flop. If the analog input is zero volts, the integrator will have no tendency to ramp either positive or negative, except in response to the feedback voltage. In this scenario, the flip-flop output will continually oscillate between “high” and “low,” as the feedback system “hunts” back and forth, trying to maintain the integrator output at zero volts: If, however, we apply a negative analog input voltage, the integrator will have a tendency to ramp its output in a positive direction. Feedback can only add to the integrator’s ramping by a fixed voltage over a fixed time, and so the bit stream output by the flip-flop will not be quite the same: By applying a larger (negative) analog input signal to the integrator, we force its output to ramp more steeply in the positive direction. Thus, the feedback system has to output more 1’s than before to bring the integrator output back to zero volts: As the analog input signal increases in magnitude, so does the occurrence of 1’s in the digital output of the flip-flop: A parallel binary number output is obtained from this circuit by averaging the serial stream of bits together. 
For example, a counter circuit could be designed to collect the total number of 1’s output by the flip-flop in a given number of clock pulses. This count would then be indicative of the analog input voltage. Variations on this theme exist, employing multiple integrator stages and/or comparator circuits outputting more than 1 bit, but one concept common to all ΔΣ converters is that of oversampling. Oversampling is when multiple samples of an analog signal are taken by an ADC (in this case, a 1-bit ADC), and those digitized samples are averaged. The end result is an effective increase in the number of bits resolved from the signal. In other words, an oversampled 1-bit ADC can do the same job as an 8-bit ADC with one-time sampling, albeit at a slower rate.
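A first-order modulator and the bit-counting scheme just described can be sketched behaviorally. Note that the sign convention here is simplified so that a more positive input yields more 1’s; the circuit in the text uses the opposite polarity, but the principle is identical.

```python
def delta_sigma(vin, vref=1.0, n_samples=1024):
    """First-order delta-sigma modulator: the integrator sums (sigma) the
    difference (delta) between the input and the fed-back +/-vref; a
    comparator slices the result into a 1-bit stream."""
    integ, ones, feedback = 0.0, 0, -vref
    for _ in range(n_samples):
        integ += vin - feedback              # summing integrator
        bit = 1 if integ > 0 else 0          # comparator as 1-bit ADC
        feedback = vref if bit else -vref    # flip-flop latches and feeds back
        ones += bit
    return ones / n_samples                  # density of 1's in the stream
```

A zero input produces the alternating 1-0-1-0 “hunting” pattern (density 0.5), while an input halfway to the reference shifts the density to 0.75: averaging the oversampled 1-bit stream recovers the analog value.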
Perhaps the most important consideration of an ADC is its resolution. Resolution is the number of binary bits output by the converter. Because ADC circuits take in an analog signal, which is continuously variable, and resolve it into one of many discrete steps, it is important to know how many of these steps there are in total. For example, an ADC with a 10-bit output can represent up to 1024 (2^10) unique conditions of signal measurement. Over the range of measurement from 0% to 100%, there will be exactly 1024 unique binary numbers output by the converter (from 0000000000 to 1111111111, inclusive). An 11-bit ADC will have twice as many states to its output (2048, or 2^11), representing twice as many unique conditions of signal measurement between 0% and 100%. Resolution is very important in data acquisition systems (circuits designed to interpret and record physical measurements in electronic form). Suppose we were measuring the height of water in a 40-foot tall storage tank using an instrument with a 10-bit ADC. 0 feet of water in the tank corresponds to 0% of measurement, while 40 feet of water in the tank corresponds to 100% of measurement. Because the ADC is fixed at 10 bits of binary data output, it will interpret any given tank level as one out of 1024 possible states. To determine how much physical water level will be represented in each step of the ADC, we need to divide the 40 feet of measurement span by the number of steps between the 1024 possible output states, which is 1023 (one less than 1024). Doing this, we obtain a figure of 0.039101 feet per step. This equates to 0.46921 inches per step, a little less than half an inch of water level represented for every binary count of the ADC. This step value of 0.039101 feet (0.46921 inches) represents the smallest amount of tank level change detectable by the instrument. Admittedly, this is a small amount, less than 0.1% of the overall measurement span of 40 feet.
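This resolution arithmetic is worth automating; the sketch below reproduces the step-size figures for the tank example, and generalizes the step-to-bits calculation (2^n - 1 steps must cover the span):

```python
import math

span_ft = 40                         # tank measurement span, 0% to 100%
counts = 2**10                       # a 10-bit ADC: 1024 output codes
step_ft = span_ft / (counts - 1)     # 1023 steps between 1024 codes
print(round(step_ft, 6))             # 0.039101 ft per step
print(round(step_ft * 12, 5))        # 0.46921 inches per step

def bits_needed(span, step):
    """Smallest word size whose 2**n - 1 steps resolve the given step size."""
    return math.ceil(math.log2(span / step + 1))
```

For instance, `bits_needed(40, 0.1 / 12)` asks how many bits resolve one-tenth of an inch over the full 40-foot span, a requirement examined next.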
However, for some applications it may not be fine enough. Suppose we needed this instrument to be able to indicate tank level changes down to one-tenth of an inch. In order to achieve this degree of resolution and still maintain a measurement span of 40 feet, we would need an instrument with more than ten ADC bits. To determine how many ADC bits are necessary, we need to first determine how many 1/10 inch steps there are in 40 feet. The answer to this is 40/(0.1/12), or 4800 1/10 inch steps in 40 feet. Thus, we need enough bits to provide at least 4800 discrete steps in a binary counting sequence. 10 bits gave us 1023 steps, and we knew this by calculating 2 to the power of 10 (2^10 = 1024) and then subtracting one. Following the same mathematical procedure, 2^11 - 1 = 2047, 2^12 - 1 = 4095, and 2^13 - 1 = 8191. 12 bits falls shy of the amount needed for 4800 steps, while 13 bits is more than enough. Therefore, we need an instrument with at least 13 bits of resolution. Another important consideration of ADC circuitry is its sample frequency, or conversion rate. This is simply the speed at which the converter outputs a new binary number. Like resolution, this consideration is linked to the specific application of the ADC. If the converter is being used to measure slow-changing signals such as level in a water storage tank, it could probably have a very slow sample frequency and still perform adequately. Conversely, if it is being used to digitize an audio frequency signal cycling at several thousand times per second, the converter needs to be considerably faster. Consider the following illustration of ADC conversion rate versus signal type, typical of a successive-approximation ADC with regular sample intervals: Here, for this slow-changing signal, the sample rate is more than adequate to capture its general trend. But consider this example with the same sample time: When the sample period is too long (too slow), substantial details of the analog signal will be missed.
Notice how, especially in the latter portions of the analog signal, the digital output utterly fails to reproduce the true shape. Even in the first section of the analog waveform, the digital reproduction deviates substantially from the true shape of the wave. It is imperative that an ADC’s sample time is fast enough to capture essential changes in the analog waveform. In data acquisition terminology, the highest-frequency waveform that an ADC can theoretically capture is the so-called Nyquist frequency, equal to one-half of the ADC’s sample frequency. Therefore, if an ADC circuit has a sample frequency of 5000 Hz, the highest-frequency waveform it can successfully resolve will be the Nyquist frequency of 2500 Hz. If an ADC is subjected to an analog input signal whose frequency exceeds the Nyquist frequency for that ADC, the converter will output a digitized signal of falsely low frequency. This phenomenon is known as aliasing. Observe the following illustration to see how aliasing occurs: Note how the period of the output waveform is much longer (slower) than that of the input waveform, and how the two waveform shapes aren’t even similar: It should be understood that the Nyquist frequency is an absolute maximum frequency limit for an ADC, and does not represent the highest practical frequency measurable. To be safe, one shouldn’t expect an ADC to successfully resolve any frequency greater than one-fifth to one-tenth of its sample frequency. A practical means of preventing aliasing is to place a low-pass filter before the input of the ADC, to block any signal frequencies greater than the practical limit. This way, the ADC circuitry will be prevented from seeing any excessive frequencies and thus will not try to digitize them. It is generally considered better that such frequencies go unconverted than to have them be “aliased” and appear in the output as false signals. Yet another measure of ADC performance is something called step recovery. 
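The frequency-folding behavior behind aliasing can be sketched numerically. The model below is a deliberately simplified one (an ideal sampler, with an illustrative function name of my own choosing): any input frequency is "folded" back into the band between zero and the Nyquist frequency.

```python
# Illustrative sketch: the apparent frequency an ideal sampler reports.
def apparent_frequency(f_signal, f_sample):
    """Fold f_signal into the 0..Nyquist band, modeling aliasing."""
    nyquist = f_sample / 2
    f = f_signal % f_sample                  # the alias pattern repeats every f_sample
    return f if f <= nyquist else f_sample - f

# A 5000 Hz sampler has a Nyquist frequency of 2500 Hz:
print(apparent_frequency(2000, 5000))        # 2000 -- resolved faithfully
print(apparent_frequency(3500, 5000))        # 1500 -- falsely low (aliased)
```

A 3500 Hz signal, exceeding the 2500 Hz Nyquist frequency, emerges as a false 1500 Hz signal, just as the text describes.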
This is a measure of how quickly an ADC changes its output to match a large, sudden change in the analog input. In some converter technologies especially, step recovery is a serious limitation. One example is the tracking converter, which has a typically fast update period but a disproportionately slow step recovery. An ideal ADC has a great many bits for very fine resolution, samples at lightning-fast speeds, and recovers from steps instantly. It also, unfortunately, doesn't exist in the real world. Of course, any of these traits may be improved through additional circuit complexity, either through increased component count or special circuit designs made to run at higher clock speeds. Different ADC technologies, though, have different strengths. Here is a summary of them, ranked from best to worst:

Resolution/complexity ratio: Single-slope integrating, dual-slope integrating, counter, tracking, successive approximation, flash.

Speed: Flash, tracking, successive approximation, single-slope integrating & counter, dual-slope integrating.

Step recovery: Flash, successive-approximation, single-slope integrating & counter, dual-slope integrating, tracking.

Please bear in mind that the rankings of these different ADC technologies depend on other factors. For instance, how an ADC rates on step recovery depends on the nature of the step change. A tracking ADC is equally slow to respond to all step changes, whereas a single-slope or counter ADC will register a high-to-low step change quicker than a low-to-high step change. Successive-approximation ADCs are almost equally fast at resolving any analog signal, but a tracking ADC will consistently beat a successive-approximation ADC if the signal is changing slower than one resolution step per clock pulse.
I ranked integrating converters as having a greater resolution/complexity ratio than counter converters, but this assumes that precision analog integrator circuits are less complex to design and manufacture than precision DACs required within counter-based converters. Others may not agree with this assumption.
In the design of large and complex digital systems, it is often necessary to have one device communicate digital information to and from other devices. One advantage of digital information is that it tends to be far more resistant to errors of transmission and interpretation than information symbolized in an analog medium. This accounts for the clarity of digitally-encoded telephone connections and compact audio disks, and for much of the enthusiasm in the engineering community for digital communications technology. However, digital communication has its own unique pitfalls, and there are multitudes of different and incompatible ways in which it can be sent. Hopefully, this chapter will enlighten you as to the basics of digital communication, its advantages, disadvantages, and practical considerations. Suppose we are given the task of remotely monitoring the level of a water storage tank. Our job is to design a system to measure the level of water in the tank and send this information to a distant location so that other people may monitor it. Measuring the tank's level is quite easy, and can be accomplished with a number of different types of instruments, such as float switches, pressure transmitters, ultrasonic level detectors, capacitance probes, strain gauges, or radar level detectors. For the sake of this illustration, we will use an analog level-measuring device with an output signal of 4-20 mA. 4 mA represents a tank level of 0%, 20 mA represents a tank level of 100%, and anything in between 4 and 20 mA represents a tank level proportionately between 0% and 100%. If we wanted to, we could simply send this 4-20 milliamp analog current signal to the remote monitoring location by means of a pair of copper wires, where it would drive a panel meter of some sort, the scale of which was calibrated to reflect the depth of water in the tank, in whatever units of measurement preferred. This analog communication system would be simple and robust.
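The 4-20 mA convention just described is a simple linear mapping between loop current and tank level. A minimal sketch (the function name is mine, purely for illustration):

```python
# Illustrative sketch: linear scaling of 4-20 mA loop current to tank level.
def level_percent(current_mA):
    """Map 4 mA -> 0 % and 20 mA -> 100 %, proportionally in between."""
    return (current_mA - 4.0) / (20.0 - 4.0) * 100.0

print(level_percent(4.0))     # 0.0   -- empty tank
print(level_percent(12.0))    # 50.0  -- half full
print(level_percent(20.0))    # 100.0 -- full tank
```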
For many applications, it would suffice for our needs perfectly. But, it is not the only way to get the job done. For the purposes of exploring digital techniques, we’ll explore other methods of monitoring this hypothetical tank, even though the analog method just described might be the most practical. The analog system, as simple as it may be, does have its limitations. One of them is the problem of analog signal interference. Since the tank’s water level is symbolized by the magnitude of DC current in the circuit, any “noise” in this signal will be interpreted as a change in the water level. With no noise, a plot of the current signal over time for a steady tank level of 50% would look like this: If the wires of this circuit are arranged too close to wires carrying 60 Hz AC power, for example, inductive and capacitive coupling may create a false “noise” signal to be introduced into this otherwise DC circuit. Although the low impedance of a 4-20 mA loop (250 Ω, typically) means that small noise voltages are significantly loaded (and thereby attenuated by the inefficiency of the capacitive/inductive coupling formed by the power wires), such noise can be significant enough to cause measurement problems: The above example is a bit exaggerated, but the concept should be clear: any electrical noise introduced into an analog measurement system will be interpreted as changes in the measured quantity. One way to combat this problem is to symbolize the tank’s water level by means of a digital signal instead of an analog signal. We can do this really crudely by replacing the analog transmitter device with a set of water level switches mounted at different heights on the tank: Each of these switches is wired to close a circuit, sending current to individual lamps mounted on a panel at the monitoring location. As each switch closed, its respective lamp would light, and whoever looked at the panel would see a 5-lamp representation of the tank’s level. 
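The susceptibility of the analog loop to coupled noise, described above, can be put in numbers. In the sketch below, a steady 50% level (12 mA) has an assumed 0.4 mA of 60 Hz noise riding on it; the noise amplitude is my own illustrative figure, not one from the text:

```python
# Illustrative sketch: why loop noise reads as a false level change.
import math

def level_percent(current_mA):
    """The 4-20 mA scaling: 4 mA -> 0 %, 20 mA -> 100 %."""
    return (current_mA - 4.0) / 16.0 * 100.0

signal_mA = 12.0                     # true, noise-free 50 % level
noise_peak_mA = 0.4                  # assumed coupled-noise amplitude
for t in (0.0, 1/240, 1/120):        # samples across one 60 Hz half-cycle
    noisy = signal_mA + noise_peak_mA * math.sin(2 * math.pi * 60 * t)
    print(round(level_percent(noisy), 2))   # 50.0, 52.5, 50.0
```

A perfectly steady tank appears to slosh back and forth by a few percent, exactly the kind of false reading the digital approach avoids.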
Being that each lamp circuit is digital in nature—either 100% on or 100% off—electrical interference from other wires along the run have much less effect on the accuracy of measurement at the monitoring end than in the case of the analog signal. A huge amount of interference would be required to cause an “off” signal to be interpreted as an “on” signal or vice versa. Relative resistance to electrical interference is an advantage enjoyed by all forms of digital communication over analog. Now that we know digital signals are far more resistant to error induced by “noise,” let’s improve on this tank level measurement system. For instance, we could increase the resolution of this tank gauging system by adding more switches, for more precise determination of water level. Suppose we install 16 switches along the tank’s height instead of five. This would significantly improve our measurement resolution but at the expense of greatly increasing the quantity of wires needing to be strung between the tank and the monitoring location. One way to reduce this wiring expense would be to use a priority encoder to take the 16 switches and generate a binary number which represented the same information: Now, only 4 wires (plus any ground and power wires necessary) are needed to communicate the information, as opposed to 16 wires (plus any ground and power wires). At the monitoring location, we would need some kind of display device that could accept the 4-bit binary data and generate an easy-to-read display for a person to view. A decoder, wired to accept the 4-bit data as its input and light 1-of-16 output lamps, could be used for this task, or we could use a 4-bit decoder/driver circuit to drive some kind of numerical digit display. Still, a resolution of 1/16 tank height may not be good enough for our application. To better resolve the water level, we need more bits in our binary output. We could add still more switches, but this gets impractical rather quickly. 
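The 16-to-4 priority encoding described above can be sketched in software. The idea is that the highest closed switch (the deepest point the water has reached) determines the 4-bit output, regardless of the switches below it:

```python
# Illustrative sketch: a 16-to-4 priority encoder for the level switches.
def priority_encode(switches):
    """switches: 16 booleans, index 0 = lowest switch. Returns 0..15."""
    for i in range(15, -1, -1):      # scan from the highest-priority input down
        if switches[i]:
            return i                 # 0..15 fits in 4 binary bits
    return 0

# Water up to switch 10: switches 0 through 10 closed, 11 through 15 open.
switches = [True] * 11 + [False] * 5
print(format(priority_encode(switches), '04b'))   # 1010
```

Sixteen switch signals are thus compressed into a 4-bit binary number, which is why only four data wires are needed.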
A better option would be to re-attach our original analog transmitter to the tank and electronically convert its 4-20 milliamp analog output into a binary number with far more bits than would be practical using a set of discrete level switches. Since the electrical noise we’re trying to avoid is encountered along the long run of wire from the tank to the monitoring location, this A/D conversion can take place at the tank (where we have a “clean” 4-20 mA signal). There are a variety of methods to convert an analog signal to digital, but we’ll skip an in-depth discussion of those techniques and concentrate on the digital signal communication itself. The type of digital information being sent from our tank instrumentation to the monitoring instrumentation is referred to as parallel digital data. That is, each binary bit is being sent along its own dedicated wire, so that all bits arrive at their destination simultaneously. This obviously necessitates the use of at least one wire per bit to communicate with the monitoring location. We could further reduce our wiring needs by sending the binary data along a single channel (one wire + ground), so that each bit is communicated one at a time. This type of information is referred to as serial digital data. We could use a multiplexer or a shift register to take the parallel data from the A/D converter (at the tank transmitter), and convert it to serial data. At the receiving end (the monitoring location) we could use a demultiplexer or another shift register to convert the serial data to parallel again for use in the display circuitry. The exact details of how the mux/demux or shift register pairs are maintained in synchronization is, like A/D conversion, a topic for another lesson. Fortunately, there are digital IC chips called UARTs (Universal Asynchronous Receiver-Transmitters) that handle all these details on their own and make the designer’s life much simpler. 
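The parallel-to-serial and serial-to-parallel conversion just described can be modeled as a pair of shift registers. This sketch sends the bits least-significant-bit first, which is one common convention (the text does not specify an ordering):

```python
# Illustrative sketch: serializing a parallel ADC word and reassembling it.
def serialize(word, bits=8):
    """Shift the word out LSB-first as a stream of 0/1 bits."""
    return [(word >> i) & 1 for i in range(bits)]

def deserialize(stream):
    """Shift the received bits back into a parallel word."""
    word = 0
    for i, bit in enumerate(stream):
        word |= bit << i
    return word

sample = 0b10101011
assert deserialize(serialize(sample)) == sample   # the round trip is lossless
print(serialize(sample))                          # [1, 1, 0, 1, 0, 1, 0, 1]
```

Keeping the two ends agreed on bit order and timing is precisely the synchronization problem that UARTs exist to solve.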
For now, we must continue to focus our attention on the matter at hand: how to communicate the digital information from the tank to the monitoring location.
This collection of wires that I keep referring to between the tank and the monitoring location can be called a bus or a network. The distinction between these two terms is more semantic than technical, and the two may be used interchangeably for all practical purposes. In my experience, the term “bus” is usually used in reference to a set of wires connecting digital components within the enclosure of a computer device, and “network” is for something that is physically more widespread. In recent years, however, the word “bus” has gained popularity in describing networks that specialize in interconnecting discrete instrumentation sensors over long distances (“Fieldbus” and “Profibus” are two examples). In either case, we are making reference to the means by which two or more digital devices are connected together so that data can be communicated between them. Names like “Fieldbus” or “Profibus” encompass not only the physical wiring of the bus or network, but also the specified voltage levels for communication, their timing sequences (especially for serial data transmission), connector pinout specifications, and all other distinguishing technical features of the network. In other words, when we speak of a certain type of bus or network by name, we’re actually speaking of a communications standard, roughly analogous to the rules and vocabulary of a written language. For example, before two or more people can become pen-pals, they must be able to write to one another in a common format. To merely have a mail system that is able to deliver their letters to each other is not enough. If they agree to write to each other in French, they agree to hold to the conventions of character set, vocabulary, spelling, and grammar that is specified by the standard of the French language. 
Likewise, if we connect two Profibus devices together, they will be able to communicate with each other only because the Profibus standard has specified such important details as voltage levels, timing sequences, etc. Simply having a set of wires strung between multiple devices is not enough to construct a working system (especially if the devices were built by different manufacturers!). To illustrate in detail, let’s design our own bus standard. Taking the crude water tank measurement system with five switches to detect varying levels of water, and using (at least) five wires to conduct the signals to their destination, we can lay the foundation for the mighty BogusBus: The physical wiring for the BogusBus consists of seven wires between the transmitter device (switches) and the receiver device (lamps). The transmitter consists of all components and wiring connections to the left of the leftmost connectors (the “—>>—” symbols). Each connector symbol represents a complementary male and female element. The bus wiring consists of the seven wires between the connector pairs. Finally, the receiver and all of its constituent wiring lies to the right of the rightmost connectors. Five of the network wires (labeled 1 through 5) carry the data while two of those wires (labeled +V and -V) provide connections for DC power supplies. There is a standard for the 7-pin connector plugs, as well. The pin layout is asymmetrical to prevent “backward” connection. In order for manufacturers to receive the awe-inspiring “BogusBus-compliant” certification on their products, they would have to comply with the specifications set by the designers of BogusBus (most likely another company, which designed the bus for a specific task and ended up marketing it for a wide variety of purposes). 
For instance, all devices must be able to use the 24 Volt DC supply power of BogusBus: the switch contacts in the transmitter must be rated for switching that DC voltage, the lamps must definitely be rated for being powered by that voltage, and the connectors must be able to handle it all. Wiring, of course, must be in compliance with that same standard: lamps 1 through 5, for example, must be wired to the appropriate pins so that when LS4 of Manufacturer XYZ’s transmitter closes, lamp 4 of Manufacturer ABC’s receiver lights up, and so on. Since both transmitter and receiver contain DC power supplies rated at an output of 24 Volts, all transmitter/receiver combinations (from all certified manufacturers) must have power supplies that can be safely wired in parallel. Consider what could happen if Manufacturer XYZ made a transmitter with the negative (-) side of their 24VDC power supply attached to earth ground and Manufacturer ABC made a receiver with the positive (+) side of their 24VDC power supply attached to earth ground. If both earth grounds are relatively “solid” (that is, a low resistance between them, such as might be the case if the two grounds were made on the metal structure of an industrial building), the two power supplies would short-circuit each other! BogusBus, of course, is a completely hypothetical and very impractical example of a digital network. It has incredibly poor data resolution, requires substantial wiring to connect devices, and communicates in only a single direction (from transmitter to receiver). It does, however, suffice as a tutorial example of what a network is and some of the considerations associated with network selection and operation. There are many types of buses and networks that you might come across in your profession. Each one has its own applications, advantages, and disadvantages. 
It is worthwhile to acquaint yourself with some of the "alphabet soup" that is used to label the various designs:

Short-distance busses

PC/AT: Bus used in early IBM-compatible computers to connect peripheral devices such as disk drives and sound cards to the motherboard of the computer.

PCI: Another bus used in personal computers, but not limited to IBM-compatibles. Much faster than PC/AT. Typical data transfer rates of 100 Mbytes/second (32 bit) and 200 Mbytes/second (64 bit).

PCMCIA: A bus designed to connect peripherals to laptop and notebook sized personal computers. Has a very small physical "footprint," but is considerably slower than other popular PC buses.

VME: A high-performance bus (co-designed by Motorola, and based on Motorola's earlier Versa-Bus standard) for constructing versatile industrial and military computers, where multiple memory, peripheral, and even microprocessor cards could be plugged in to a passive "rack" or "card cage" to facilitate custom system designs. Typical data transfer rate of 50 Mbytes/second (64 bits wide).

VXI: Actually an expansion of the VME bus, VXI (VME eXtension for Instrumentation) includes the standard VME bus along with connectors for analog signals between cards in the rack.

S-100: Sometimes called the Altair bus, this bus standard was the product of a conference in 1976, intended to serve as an interface to the Intel 8080 microprocessor chip. Similar in philosophy to the VME, where multiple function cards could be plugged in to a passive "rack," facilitating the construction of custom systems.

MC6800: The Motorola equivalent of the Intel-centric S-100 bus, designed to interface peripheral devices to the popular Motorola 6800 microprocessor chip.

STD: Stands for Simple-To-Design, and is yet another passive "rack" similar to the PC/AT bus, lending itself well toward designs based on IBM-compatible hardware. Designed by Pro-Log, it is 8 bits wide (parallel), accommodating relatively small (4.5 inch by 6.5 inch) circuit cards.

Multibus I and II: Another bus intended for the flexible design of custom computer systems, designed by Intel. 16 bits wide (parallel).

CompactPCI: An industrial adaptation of the personal computer PCI standard, designed as a higher-performance alternative to the older VME bus. At a bus clock speed of 66 MHz, data transfer rates are 200 Mbytes/second (32 bit) or 400 Mbytes/second (64 bit).

Microchannel: Yet another bus, this one designed by IBM for their ill-fated PS/2 series of computers, intended for the interfacing of PC motherboards to peripheral devices.

IDE: A bus used primarily for connecting personal computer hard disk drives with the appropriate peripheral cards. Widely used in today's personal computers for hard drive and CD-ROM drive interfacing.

SCSI: An alternative (technically superior to IDE) bus used for personal computer disk drives. SCSI stands for Small Computer System Interface. Used in some IBM-compatible PCs, as well as Macintosh (Apple) and many mini and mainframe business computers, to interface hard drives, CD-ROM drives, floppy disk drives, printers, scanners, modems, and a host of other peripheral devices. Speeds up to 1.5 Mbytes per second for the original standard.

GPIB (IEEE 488): General Purpose Interface Bus, also known as HPIB or IEEE 488, intended for the interfacing of electronic test equipment such as oscilloscopes and multimeters to personal computers. 8-bit wide address/data "path" with 8 additional lines for communications control.

Centronics parallel: Widely used on personal computers for interfacing printer and plotter devices. Sometimes used to interface with other peripheral devices, such as external ZIP (100 Mbyte floppy) disk drives and tape drives.

USB: Universal Serial Bus, intended to interconnect many external peripheral devices (such as keyboards, modems, mice, etc.) to personal computers. Long used on Macintosh PCs, it is now being installed as new equipment on IBM-compatible machines.

FireWire (IEEE 1394): A high-speed serial network capable of operating at 100, 200, or 400 Mbps, with versatile features such as "hot swapping" (adding or removing devices with the power on) and flexible topology. Designed for high-performance personal computer interfacing.

Bluetooth: A radio-based communications network designed for office linking of computer devices. Provisions for data security are designed into this network standard.

Extended-distance networks

20 mA current loop: Not to be confused with the common instrumentation 4-20 mA analog standard, this is a digital communications network based on interrupting a 20 mA (or sometimes 60 mA) current loop to represent binary data. Although the low impedance gives good noise immunity, it is susceptible to wiring faults (such as breaks) which would fail the entire network.

RS-232C: The most common serial network used in computer systems, often used to link peripheral devices such as printers and mice to a personal computer. Limited in speed and distance (typically 45 feet and 20 kbps, although higher speeds can be run with shorter distances). I've been able to run RS-232 reliably at speeds in excess of 100 kbps, but this was using a cable only 6 feet long! RS-232C is often referred to simply as RS-232 (no "C").

RS-422A/RS-485: Two serial networks designed to overcome some of the distance and versatility limitations of RS-232C. Used widely in industry to link serial devices together in electrically "noisy" plant environments. Much greater distance and speed capabilities than RS-232C, typically over half a mile and at speeds approaching 10 Mbps.

Ethernet (IEEE 802.3): A high-speed network which links computers and some types of peripheral devices together. "Normal" Ethernet runs at a speed of 10 million bits/second, and "Fast" Ethernet runs at 100 million bits/second. The slower (10 Mbps) Ethernet has been implemented in a variety of media, including copper wire (thick coax = "10BASE5", thin coax = "10BASE2", twisted-pair = "10BASE-T"), radio, and optical fiber ("10BASE-F"). Fast Ethernet has also been implemented on a few different media (twisted-pair, 2 pair = 100BASE-TX; twisted-pair, 4 pair = 100BASE-T4; optical fiber = 100BASE-FX).

Token ring: Another high-speed network linking computer devices together, using a philosophy of communication much different from Ethernet, allowing for more precise response times from individual network devices and greater immunity to network wiring damage.

FDDI: A very high-speed network exclusively implemented on fiber-optic cabling.

Modbus/Modbus Plus: Originally implemented by the Modicon corporation, a large maker of Programmable Logic Controllers (PLCs), for linking remote I/O (Input/Output) racks with a PLC processor. Still quite popular.

Profibus: Originally implemented by the Siemens corporation, another large maker of PLC equipment.

Foundation Fieldbus: A high-performance bus expressly designed to allow multiple process instruments (transmitters, controllers, valve positioners) to communicate with host computers and with each other. May ultimately displace the 4-20 mA analog signal as the standard means of interconnecting process control instrumentation.
Buses and networks are designed to allow communication to occur between individual devices that are interconnected. The flow of information, or data, between nodes, can take a variety of forms: With simplex communication, all data flow is unidirectional: from the designated transmitter to the designated receiver. BogusBus is an example of simplex communication, where the transmitter sent information to the remote monitoring location, but no information is ever sent back to the water tank. If all we want to do is send information one-way, then simplex is just fine. Most applications, however, demand more: With duplex communication, the flow of information is bi-directional for each device. Duplex can be further divided into two sub-categories: Half-duplex communication may be likened to two tin cans on the ends of a single taut string: Either can may be used to transmit or receive, but not at the same time. Full-duplex communication is more like a true telephone, where two people can talk at the same time and hear one another simultaneously, the mouthpiece of one phone transmitting to the earpiece of the other, and vice versa. Full-duplex is often facilitated through the use of two separate channels or networks, with an individual set of wires for each direction of communication. It is sometimes accomplished by means of multiple-frequency carrier waves, especially in radio links, where one frequency is reserved for each direction of communication.

14.04: Electrical Signal Types

With BogusBus, our signals were very simple and straightforward: each signal wire (1 through 5) carried a single bit of digital data, 0 Volts representing "off" and 24 Volts DC representing "on." Because all the bits arrived at their destination simultaneously, we would call BogusBus a parallel network technology.
If we were to improve the performance of BogusBus by adding binary encoding (to the transmitter end) and decoding (to the receiver end), so that more steps of resolution were available with fewer wires, it would still be a parallel network. If, however, we were to add a parallel-to-serial converter at the transmitter end and a serial-to-parallel converter at the receiver end, we would have something quite different. It is primarily with the use of serial technology that we are forced to invent clever ways to transmit data bits. Because serial data requires us to send all data bits through the same wiring channel from transmitter to receiver, it necessitates a potentially high frequency signal on the network wiring. Consider the following illustration: a modified BogusBus system is communicating digital data in parallel, binary-encoded form. Instead of 5 discrete bits like the original BogusBus, we’re sending 8 bits from transmitter to receiver. The A/D converter on the transmitter side generates a new output every second. That makes for 8 bits per second of data being sent to the receiver. For the sake of illustration, let’s say that the transmitter is bouncing between an output of 10101010 and 10101011 every update (once per second): Since only the least significant bit (Bit 1) is changing, the frequency on that wire (to ground) is only 1/2 Hertz. In fact, no matter what numbers are being generated by the A/D converter between updates, the frequency on any wire in this modified BogusBus network cannot exceed 1/2 Hertz, because that’s how fast the A/D updates its digital output. 1/2 Hertz is pretty slow, and should present no problems for our network wiring. On the other hand, if we used an 8-bit serial network, all data bits must appear on the single channel in sequence. And these bits must be output by the transmitter within the 1-second window of time between A/D converter updates. 
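The parallel-versus-serial arithmetic above can be sketched directly. A bit that toggles once per update completes a full cycle every two updates, so no parallel wire toggles faster than half the update rate, while a serial channel must carry every bit of the word within each update period (the function names here are my own):

```python
# Illustrative sketch: signal rates for parallel vs. serial BogusBus.
def max_parallel_wire_hz(update_rate_hz):
    """Fastest any one parallel wire can toggle: half the update rate."""
    return update_rate_hz / 2

def serial_bit_rate(bits_per_word, update_rate_hz):
    """All bits share one channel, so bit rate = word width x update rate."""
    return bits_per_word * update_rate_hz

print(max_parallel_wire_hz(1))       # 0.5 Hz, as in the text
print(serial_bit_rate(8, 1))         # 8 bits per second on the one channel
```

At 32 or 64 bits per word and thousands of updates per second, the same formula quickly pushes the serial channel into radio-range frequencies.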
Therefore, the alternating digital output of 10101010 and 10101011 (once per second) would look something like this: The frequency of our BogusBus signal is now approximately 4 Hertz instead of 1/2 Hertz, an eightfold increase! While 4 Hertz is still fairly slow, and does not constitute an engineering problem, you should be able to appreciate what might happen if we were transmitting 32 or 64 bits of data per update, along with the other bits necessary for parity checking and signal synchronization, at an update rate of thousands of times per second! Serial data network frequencies start to enter the radio range, and simple wires begin to act as antennas, pairs of wires as transmission lines, with all their associated quirks due to inductive and capacitive reactances. What is worse, the signals that we’re trying to communicate along a serial network are of a square-wave shape, being binary bits of information. Square waves are peculiar things, being mathematically equivalent to an infinite series of sine waves of diminishing amplitude and increasing frequency. A simple square wave at 10 kHz is actually “seen” by the capacitance and inductance of the network as a series of multiple sine-wave frequencies which extend into the hundreds of kHz at significant amplitudes. What we receive at the other end of a long 2-conductor network won’t look like a clean square wave anymore, even under the best of conditions! When engineers speak of network bandwidth, they’re referring to the practical frequency limit of a network medium. In serial communication, bandwidth is a product of data volume (binary bits per transmitted “word”) and data speed (“words” per second). The standard measure of network bandwidth is bits per second, or bps. An obsolete unit of bandwidth known as the baud is sometimes falsely equated with bits per second, but is actually the measure of signal level changes per second. 
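The "infinite series of sine waves" mentioned above can be made concrete. An ideal unit-amplitude square wave of frequency f is the sum of its odd harmonics n·f, each with amplitude 4/(πn), which is why a 10 kHz bit stream carries significant energy far beyond 10 kHz:

```python
# Illustrative sketch: building a square wave from its sine-wave harmonics.
import math

def square_partial_sum(t, f, n_harmonics):
    """Sum the first n_harmonics odd-harmonic sines of a unit square wave."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2 * k + 1                    # odd harmonics only: 1, 3, 5, ...
        total += (4 / (math.pi * n)) * math.sin(2 * math.pi * n * f * t)
    return total

# A quarter-period into a 10 kHz square wave the true value is +1;
# the partial sums close in on it as more harmonics are included.
for n in (1, 10, 100):
    print(square_partial_sum(0.25e-4, 10e3, n))
</```

The more harmonics the network medium passes faithfully, the more square the received wave looks; filter the high harmonics away (as a long cable's reactances inevitably do) and the edges round off.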
Many serial network standards use multiple voltage or current level changes to represent a single bit, and so for these applications bps and baud are not equivalent. The general BogusBus design, where all bits are voltages referenced to a common “ground” connection, is the worst-case situation for high-frequency square wave data communication. Everything will work well for short distances, where inductive and capacitive effects can be held to a minimum, but for long distances this method will surely be problematic: A robust alternative to the common ground signal method is the differential voltage method, where each bit is represented by the difference of voltage between a ground-isolated pair of wires, instead of a voltage between one wire and a common ground. This tends to limit the capacitive and inductive effects imposed upon each signal and the tendency for the signals to be corrupted due to outside electrical interference, thereby significantly improving the practical distance of a serial network: The triangular amplifier symbols represent differential amplifiers, which output a voltage signal between two wires, neither one electrically common with ground. Having eliminated any relation between the voltage signal and ground, the only significant capacitance imposed on the signal voltage is that existing between the two signal wires. Capacitance between a signal wire and a grounded conductor is of much less effect, because the capacitive path between the two signal wires via a ground connection is two capacitances in series (from signal wire #1 to ground, then from ground to signal wire #2), and series capacitance values are always less than any of the individual capacitances. 
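The noise-rejection property of the differential method can be shown with simple arithmetic: the receiver reports only the difference between the two wires, so any voltage induced equally on both cancels out. The voltage figures below are illustrative, not from any particular standard:

```python
# Illustrative sketch: common-mode noise rejection in differential signaling.
def differential_receive(v_plus, v_minus):
    """The receiving amplifier sees only the difference between the wires."""
    return v_plus - v_minus

signal = 5.0                         # assumed differential bit voltage
noise = 3.2                          # noise induced equally on both wires
v_plus = +signal / 2 + noise
v_minus = -signal / 2 + noise
print(differential_receive(v_plus, v_minus))   # ~5.0 -- the noise cancels
```

However large the common-mode noise, the difference between the wires (and therefore the received bit) is unchanged.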
Furthermore, any “noise” voltage induced between the signal wires and earth ground by an external source will be ignored, because that noise voltage will likely be induced on both signal wires in equal measure, and the receiving amplifier only responds to the differential voltage between the two signal wires, rather than the voltage between any one of them and earth ground. RS-232C is a prime example of a ground-referenced serial network, while RS-422A is a prime example of a differential voltage serial network. RS-232C finds popular application in office environments where there is little electrical interference and wiring distances are short. RS-422A is more widely used in industrial applications where longer wiring distances and greater potential for electrical interference from AC power wiring exist. However, a large part of the problem with digital network signals is the square-wave nature of such voltages, as was previously mentioned. If only we could avoid square waves altogether, we could avoid many of their inherent difficulties in long, high-frequency networks. One way of doing this is to modulate a sine wave voltage signal with our digital data. “Modulation” means that the magnitude of one signal has control over some aspect of another signal. Radio technology has incorporated modulation for decades now, in allowing an audio-frequency voltage signal to control either the amplitude (AM) or frequency (FM) of a much higher frequency “carrier” voltage, which is then sent to the antenna for transmission. The frequency-modulation (FM) technique has found more use in digital networks than amplitude modulation (AM), though in this context it is referred to as Frequency Shift Keying (FSK).
With simple FSK, sine waves of two distinct frequencies are used to represent the two binary states, 1 and 0: Due to the practical problems of getting the low/high frequency sine waves to begin and end at the zero crossover points for any given combination of 0’s and 1’s, a variation of FSK called phase-continuous FSK is sometimes used, where the consecutive combination of a low/high frequency represents one binary state and the combination of a high/low frequency represents the other. This also makes for a situation where each bit, whether it be 0 or 1, takes exactly the same amount of time to transmit along the network: With sine wave signal voltages, many of the problems encountered with square wave digital signals are minimized, although the circuitry required to modulate (and demodulate) the network signals is more complex and expensive.
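A phase-continuous FSK waveform can be sketched in a few lines. This is an illustrative model only; the tone frequencies and sample rate below are assumed example values (loosely patterned after the old Bell 202 modem tones), not anything specified in the text:

```python
import math

def fsk_waveform(bits, f_space=2200.0, f_mark=1200.0,
                 samples_per_bit=40, sample_rate=48000.0):
    """Phase-continuous FSK: one sine frequency per bit value, with the
    phase accumulator carried across bit boundaries so the waveform
    never jumps when the frequency switches."""
    samples, phase = [], 0.0
    step = 2 * math.pi / sample_rate
    for bit in bits:
        freq = f_mark if bit else f_space
        for _ in range(samples_per_bit):
            phase += step * freq
            samples.append(math.sin(phase))
    return samples

wave = fsk_waveform([1, 0, 1, 1])
print(len(wave))  # 160 samples: 4 bits x 40 samples each
```

Because the phase variable is never reset between bits, consecutive samples always differ by a small amount, exactly the "no abrupt edges" property that makes the modulated signal kinder to long cables than a raw square wave.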
A modern alternative to sending (binary) digital information via electric voltage signals is to use optical (light) signals. Electrical signals from digital circuits (high/low voltages) may be converted into discrete optical signals (light or no light) with LEDs or solid-state lasers. Likewise, light signals can be translated back into electrical form through the use of photodiodes or phototransistors for introduction into the inputs of gate circuits. Transmitting digital information in optical form may be done in open air, simply by aiming a laser at a photodetector at a remote distance, but interference with the beam in the form of temperature inversion layers, dust, rain, fog, and other obstructions can present significant engineering problems: One way to avoid the problems of open-air optical data transmission is to send the light pulses down an ultra-pure glass fiber. Glass fibers will “conduct” a beam of light much as a copper wire will conduct electrons, with the advantage of completely avoiding all the associated problems of inductance, capacitance, and external interference plaguing electrical signals. Optical fibers keep the light beam contained within the fiber core by a phenomenon known as total internal reflectance. An optical fiber is composed of two layers of ultra-pure glass, each layer made of glass with a slightly different refractive index, or capacity to “bend” light. With one type of glass concentrically layered around a central glass core, light introduced into the central core cannot escape outside the fiber, but is confined to travel within the core: These layers of glass are very thin, the outer “cladding” typically 125 microns (1 micron = 1 millionth of a meter, or 10^-6 meter) in diameter. This thinness gives the fiber considerable flexibility.
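Total internal reflectance follows from Snell’s law: light striking the core/cladding boundary at more than the critical angle cannot exit the core. A minimal sketch, with assumed (but typical-looking) refractive indices:

```python
import math

def critical_angle(n_core, n_cladding):
    """Angle of incidence (degrees, measured from the boundary normal)
    beyond which light is totally internally reflected:
    theta_c = arcsin(n_cladding / n_core), from Snell's law.
    Requires n_core > n_cladding."""
    return math.degrees(math.asin(n_cladding / n_core))

# Assumed indices: core only slightly denser than cladding
print(round(critical_angle(1.48, 1.46), 1))  # a bit over 80 degrees
```

Because the two indices differ only slightly, the critical angle is large: only rays traveling nearly parallel to the fiber axis are trapped, which is precisely the geometry that keeps light bouncing down the core.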
To protect the fiber from physical damage, it is usually given a thin plastic coating, placed inside of a plastic tube, wrapped with Kevlar fibers for tensile strength, and given an outer sheath of plastic similar to electrical wire insulation. Like electrical wires, optical fibers are often bundled together within the same sheath to form a single cable. Optical fibers exceed the data-handling performance of copper wire in almost every regard. They are totally immune to electromagnetic interference and have very high bandwidths. However, they are not without certain weaknesses. One weakness of optical fiber is a phenomenon known as microbending. This is where the fiber is bent around too small a radius, causing light to escape the inner core through the cladding: Not only does microbending lead to diminished signal strength due to the lost light, but it also constitutes a security weakness in that a light sensor intentionally placed on the outside of a sharp bend could intercept digital data transmitted over the fiber. Another problem unique to optical fiber is signal distortion due to multiple light paths, or modes, having different distances over the length of the fiber. When light is emitted by a source, the photons (light particles) do not all travel the exact same path. This fact is patently obvious in any source of light not conforming to a straight beam, but is true even in devices such as lasers. If the optical fiber core is large enough in diameter, it will support multiple pathways for photons to travel, each of these pathways having a slightly different length from one end of the fiber to the other. This type of optical fiber is called multimode fiber: A light pulse emitted by the LED taking a shorter path through the fiber will arrive at the detector sooner than light pulses taking longer paths. The result is distortion of the square-wave’s rising and falling edges, called pulse stretching.
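A rough first-order estimate of pulse stretching in a step-index multimode fiber can be made by comparing the fastest (axial) ray against the slowest (highest-angle guided) ray, which travels roughly n_core/n_cladding times farther. This is a textbook-style approximation with assumed indices, not a figure from the text:

```python
def pulse_stretch_ns(length_m, n_core=1.48, n_cladding=1.46):
    """Worst-case modal dispersion estimate for a step-index multimode
    fiber: returns the spread in arrival times (nanoseconds) between
    the axial ray and the steepest guided ray over length_m meters."""
    c = 3.0e8                                # speed of light in vacuum, m/s
    t_fast = length_m * n_core / c           # axial ray transit time
    t_slow = t_fast * (n_core / n_cladding)  # steepest ray travels farther
    return (t_slow - t_fast) * 1e9

# The spread grows linearly with fiber length:
for km in (1, 10):
    print(f"{km:2d} km: {pulse_stretch_ns(km * 1000.0):.0f} ns spread")
```

With these assumed indices the spread works out to tens of nanoseconds per kilometer, enough to smear adjacent bits together at data rates of only tens of megabits per second over long runs.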
This problem becomes worse as the overall fiber length is increased: However, if the fiber core is made small enough (around 5 microns in diameter), light modes are restricted to a single pathway with one length. Fiber so designed to permit only a single mode of light is known as single-mode fiber. Because single-mode fiber escapes the problem of pulse stretching experienced in long cables, it is the fiber of choice for long-distance (several miles or more) networks. The drawback, of course, is that with only one mode of light, single-mode fibers do not conduct as much light as multimode fibers. Over long distances, this exacerbates the need for “repeater” units to boost light power.

14.06: Network Topology

If we want to connect two digital devices with a network, we would have a kind of network known as “point-to-point:” For the sake of simplicity, the network wiring is symbolized as a single line between the two devices. In actuality, it may be a twisted pair of wires, a coaxial cable, an optical fiber, or even a seven-conductor BogusBus. Right now, we’re merely focusing on the “shape” of the network, technically known as its topology. If we want to include more devices (sometimes called nodes) on this network, we have several options of network configuration to choose from: Many network standards dictate the type of topology which is used, while others are more versatile. Ethernet, for example, is commonly implemented in a “bus” topology but can also be implemented in a “star” or “ring” topology with the appropriate interconnecting equipment. Other networks, such as RS-232C, are almost exclusively point-to-point; and token ring (as you might have guessed) is implemented solely in a ring topology. Different topologies have different pros and cons associated with them:

Point-to-point: Quite obviously the only choice for two nodes.

Bus: Very simple to install and maintain. Nodes can be easily added or removed with minimal wiring changes. On the other hand, the one bus network must handle all communication signals from all nodes. This is known as broadcast networking, and is analogous to a group of people talking to each other over a single telephone connection, where only one person can talk at a time (limiting data exchange rates), and everyone can hear everyone else when they talk (which can be a data security issue). Also, a break in the bus wiring can lead to nodes being isolated in groups.

Star: With devices known as “gateways” at branching points in the network, data flow can be restricted between nodes, allowing for private communication between specific groups of nodes. This addresses some of the speed and security issues of the simple bus topology. However, those branches could easily be cut off from the rest of the “star” network if one of the gateways were to fail. Star networks can also be implemented with “switches” to connect individual nodes to a larger network on demand. Such a switched network is similar to the standard telephone system.

Ring: This topology provides the best reliability with the least amount of wiring. Since each node has two connection points to the ring, a single break in any part of the ring doesn’t affect the integrity of the network. The devices, however, must be designed with this topology in mind. Also, the network must be interrupted to install or remove nodes. As with bus topology, ring networks are broadcast by nature.

As you might suspect, two or more of these topologies may be combined to give the “best of both worlds” in a particular application. Quite often, industrial networks end up in this fashion over time, simply from engineers and technicians joining multiple networks together for the benefit of plant-wide information access.
Aside from the issues of the physical network (signal types and voltage levels, connector pinouts, cabling, topology, etc.), there needs to be a standardized way in which communication is arbitrated between multiple nodes in a network, even if it’s as simple as a two-node, point-to-point system. When a node “talks” on the network, it is generating a signal on the network wiring, be it high and low DC voltage levels, some kind of modulated AC carrier wave signal, or even pulses of light in a fiber. Nodes that “listen” are simply measuring that applied signal on the network (from the transmitting node) and passively monitoring it. If two or more nodes “talk” at the same time, however, their output signals may clash (imagine two logic gates trying to apply opposite signal voltages to a single line on a bus!), corrupting the transmitted data. The standardized method by which nodes are allowed to transmit to the bus or network wiring is called a protocol. There are many different protocols for arbitrating the use of a common network between multiple nodes, and I’ll cover just a few here. However, it’s good to be aware of these few, and to understand why some work better for some purposes than others. Usually, a specific protocol is associated with a standardized type of network. This is merely another “layer” to the set of standards which are specified under the titles of various networks. The International Standards Organization (ISO) has specified a general architecture of network specifications in their DIS7498 model (applicable to most any digital network). Consisting of seven “layers,” this outline attempts to categorize all levels of abstraction necessary to communicate digital data.

• Level 1: Physical. Specifies electrical and mechanical details of communication: wire type, connector design, signal types and levels.
• Level 2: Data link. Defines formats of messages, how data is to be addressed, and error detection/correction techniques.
• Level 3: Network. Establishes procedures for encapsulation of data into “packets” for transmission and reception.
• Level 4: Transport. Among other things, the transport layer defines how complete data files are to be handled over a network.
• Level 5: Session. Organizes data transfer in terms of beginning and end of a specific transmission. Analogous to job control on a multitasking computer operating system.
• Level 6: Presentation. Includes definitions for character sets, terminal control, and graphics commands so that abstract data can be readily encoded and decoded between communicating devices.
• Level 7: Application. The end-user standards for generating and/or interpreting communicated data in its final form. In other words, the actual computer programs using the communicated data.

Some established network protocols only cover one or a few of the DIS7498 levels. For example, the widely used RS-232C serial communications protocol really only addresses the first (“physical”) layer of this seven-layer model. Other protocols, such as the X-windows graphical client/server system developed at MIT for distributed graphic-user-interface computer systems, cover all seven layers. Different protocols may use the same physical layer standard. An example of this is the RS-422A and RS-485 protocols, both of which use the same differential-voltage transmitter and receiver circuitry, using the same voltage levels to denote binary 1’s and 0’s. On a physical level, these two communication protocols are identical. However, on a more abstract level the protocols are different: RS-422A is point-to-point only, while RS-485 supports a bus topology “multidrop” with up to 32 addressable nodes. Perhaps the simplest type of protocol is the one where there is only one transmitter, and all the other nodes are merely receivers.
Such is the case for BogusBus, where a single transmitter generates the voltage signals impressed on the network wiring, and one or more receiver units (with 5 lamps each) light up in accord with the transmitter’s output. This is always the case with a simplex network: there’s only one talker, and everyone else listens! When we have multiple transmitting nodes, we must orchestrate their transmissions in such a way that they don’t conflict with one another. Nodes shouldn’t be allowed to talk when another node is talking, so we give each node the ability to “listen” and to refrain from talking until the network is silent. This basic approach is called Carrier Sense Multiple Access (CSMA), and there exist a few variations on this theme. Please note that CSMA is not a standardized protocol in itself, but rather a methodology that certain protocols follow. One variation is to simply let any node begin to talk as soon as the network is silent. This is analogous to a group of people meeting at a round table: anyone has the ability to start talking, so long as they don’t interrupt anyone else. As soon as the last person stops talking, the next person waiting to talk will begin. So, what happens when two or more people start talking at once? In a network, the simultaneous transmission of two or more nodes is called a collision. With CSMA/CD (CSMA/Collision Detection), the nodes that collide simply reset themselves with a random delay timer circuit, and the first one to finish its time delay tries to talk again. This is the basic protocol for the popular Ethernet network. Another variation of CSMA is CSMA/BA (CSMA/Bitwise Arbitration), where colliding nodes refer to pre-set priority numbers which dictate which one has permission to speak first. In other words, each node has a “rank” which settles any dispute over who gets to start talking first after a collision occurs, much like a group of people where dignitaries and common citizens are mixed.
If a collision occurs, the dignitary is generally allowed to speak first and the common person waits afterward. In either of the two examples above (CSMA/CD and CSMA/BA), we assumed that any node could initiate a conversation so long as the network was silent. This is referred to as the “unsolicited” mode of communication. There is a variation called “solicited” mode for either CSMA/CD or CSMA/BA where the initial transmission is only allowed to occur when a designated master node requests (solicits) a reply. Collision detection (CD) or bitwise arbitration (BA) applies only to post-collision arbitration as multiple nodes respond to the master device’s request. An entirely different strategy for node communication is the Master/Slave protocol, where a single master device allots time slots for all the other nodes on the network to transmit, and schedules these time slots so that multiple nodes cannot collide. The master device addresses each node by name, one at a time, letting that node talk for a certain amount of time. When it is finished, the master addresses the next node, and so on, and so on. Yet another strategy is the Token-Passing protocol, where each node gets a turn to talk (one at a time), and then grants permission for the next node to talk when it’s done. Permission to talk is passed around from node to node as each one hands off the “token” to the next in sequential order. The token itself is not a physical thing: it is a series of binary 1’s and 0’s broadcast on the network, carrying a specific address of the next node permitted to talk. Although token-passing protocol is often associated with ring-topology networks, it is not restricted to any topology in particular. And when this protocol is implemented in a ring network, the sequence of token passing does not have to follow the physical connection sequence of the ring.
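The token-passing idea can be modeled as a toy round-robin. This sketch is purely illustrative and does not correspond to any particular token-passing standard:

```python
def token_passing_order(nodes, start, turns):
    """Sketch of token-passing arbitration: the 'token' (really just the
    address of the next permitted talker) circulates in a fixed sequence,
    so every node gets a turn and collisions cannot occur."""
    order, holder = [], nodes.index(start)
    for _ in range(turns):
        order.append(nodes[holder])         # this node may transmit now
        holder = (holder + 1) % len(nodes)  # then it hands the token on
    return order

print(token_passing_order(["A", "B", "C"], "A", 5))  # ['A', 'B', 'C', 'A', 'B']
```

Note that the hand-off sequence is just a list ordering, independent of how the nodes are physically wired, which is the point made above about ring networks.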
Just as with topologies, multiple protocols may be joined together over different segments of a heterogeneous network, for maximum benefit. For instance, a dedicated Master/Slave network connecting instruments together on the manufacturing plant floor may be linked through a gateway device to an Ethernet network which links multiple desktop computer workstations together, one of those computer workstations acting as a gateway to link the data to an FDDI fiber network back to the plant’s mainframe computer. Each network type, topology, and protocol serves different needs and applications best, but through gateway devices, they can all share the same data. It is also possible to blend multiple protocol strategies into a new hybrid within a single network type. Such is the case for Foundation Fieldbus, which combines Master/Slave with a form of token-passing. A Link Active Scheduler (LAS) device sends scheduled “Compel Data” (CD) commands to query slave devices on the Fieldbus for time-critical information. In this regard, Fieldbus is a Master/Slave protocol. However, when there’s time between CD queries, the LAS sends out “tokens” to each of the other devices on the Fieldbus, one at a time, giving them opportunity to transmit any unscheduled data. When those devices are done transmitting their information, they return the token back to the LAS. The LAS also probes for new devices on the Fieldbus with a “Probe Node” (PN) message, which is expected to produce a “Probe Response” (PR) back to the LAS. The responses of devices back to the LAS, whether by PR message or returned token, dictate their standing on a “Live List” database which the LAS maintains. Proper operation of the LAS device is absolutely critical to the functioning of the Fieldbus, so there are provisions for redundant LAS operation by assigning “Link Master” status to some of the nodes, empowering them to become alternate Link Active Schedulers if the operating LAS fails. 
Other data communications protocols exist, but these are the most popular. I had the opportunity to work on an old (circa 1975) industrial control system made by Honeywell where a master device called the Highway Traffic Director, or HTD, arbitrated all network communications. What made this network interesting is that the signal sent from the HTD to all slave devices for permitting transmission was not communicated on the network wiring itself, but rather on sets of individual twisted-pair cables connecting the HTD with each slave device. Devices on the network were then divided into two categories: those nodes connected to the HTD which were allowed to initiate transmission, and those nodes not connected to the HTD which could only transmit in response to a query sent by one of the former nodes. Primitive and slow are the only fitting adjectives for this communication network scheme, but it functioned adequately for its time.

14.08: Practical considerations - Digital Communication

A principal consideration for industrial control networks, where the monitoring and control of real-life processes must often occur quickly and at set times, is the guaranteed maximum communication time from one node to another. If you’re controlling the position of a nuclear reactor coolant valve with a digital network, you need to be able to guarantee that the valve’s network node will receive the proper positioning signals from the control computer at the right times. If not, very bad things could happen! The ability for a network to guarantee data “throughput” is called determinism. A deterministic network has a guaranteed maximum time delay for data transfer from node to node, whereas a non-deterministic network does not. The preeminent example of a non-deterministic network is Ethernet, where the nodes rely on random time-delay circuits to reset and re-attempt transmission after a collision.
Being that a node’s transmission of data could be delayed indefinitely from a long series of re-sets and re-tries after repeated collisions, there is no guarantee that its data will ever get sent out to the network. Realistically, though, the odds against such a thing happening are so astronomically great that it is of little practical concern in a lightly-loaded network. Another important consideration, especially for industrial control networks, is network fault tolerance: that is, how susceptible is a particular network’s signaling, topology, and/or protocol to failures? We’ve already briefly discussed some of the issues surrounding topology, but protocol impacts reliability just as much. For example, a Master/Slave network, while being extremely deterministic (a good thing for critical controls), is entirely dependent upon the master node to keep everything going (generally a bad thing for critical controls). If the master node fails for any reason, none of the other nodes will be able to transmit any data at all, because they’ll never receive their allotted time slot permissions to do so, and the whole system will fail. A similar issue surrounds token-passing systems: what happens if the node holding the token were to fail before passing the token on to the next node? Some token-passing systems address this possibility by having a few designated nodes generate a new token if the network is silent for too long. This works fine if a node holding the token dies, but it causes problems if part of a network falls silent because a cable connection comes undone: the portion of the network that falls silent generates its own token after a while, and you are essentially left with two smaller networks, each with its own token being passed around to sustain communication.
Trouble occurs, however, if that cable connection gets plugged back in: the two segmented networks are joined into one again, and now there are two tokens being passed around one network, resulting in nodes’ transmissions colliding! There is no “perfect network” for all applications. The task of the engineer and technician is to know the application and know the operations of the network(s) available. Only then can efficient system design and maintenance become a reality.
Although many textbooks provide good introductions to digital memory technology, I intend to make this chapter unique in presenting both past and present technologies to some degree of detail. While many of these memory designs are obsolete, their foundational principles are still quite interesting and educational, and may even find re-application in the memory technologies of the future. The basic goal of digital memory is to provide a means to store and access binary data: sequences of 1’s and 0’s. The digital storage of information holds advantages over analog techniques much the same as digital communication of information holds advantages over analog communication. This is not to say that digital data storage is unequivocally superior to analog, but it does address some of the more common problems associated with analog techniques and thus finds immense popularity in both consumer and industrial applications. Digital data storage also complements digital computation technology well, and thus finds natural application in the world of computers. The most evident advantage of digital data storage is the resistance to corruption. Suppose that we were going to store a piece of data regarding the magnitude of a voltage signal by means of magnetizing a small chunk of magnetic material. Since many magnetic materials retain their strength of magnetization very well over time, this would be a logical media candidate for long-term storage of this particular data (in fact, this is precisely how audio and video tape technology works: thin plastic tape is impregnated with particles of iron-oxide material, which can be magnetized or demagnetized via the application of a magnetic field from an electromagnet coil. The data is then retrieved from the tape by moving the magnetized tape past another coil of wire, the magnetized spots on the tape inducing voltage in that coil, reproducing the voltage waveform initially used to magnetize the tape). 
If we represent an analog signal by the strength of magnetization on spots of the tape, the storage of data on the tape will be susceptible to the smallest degree of degradation of that magnetization. As the tape ages and the magnetization fades, the analog signal magnitude represented on the tape will appear to be less than what it was when we first recorded the data. Also, if any spurious magnetic fields happen to alter the magnetization on the tape, even if it’s only by a small amount, that altering of field strength will be interpreted upon re-play as an altering (or corruption) of the signal that was recorded. Since analog signals have infinite resolution, the smallest degree of change will have an impact on the integrity of the data storage. If we were to use that same tape and store the data in binary digital form, however, the strength of magnetization on the tape would fall into two discrete levels: “high” and “low,” with no valid in-between states. As the tape aged or was exposed to spurious magnetic fields, those same locations on the tape would experience slight alteration of magnetic field strength, but unless the alterations were extreme, no data corruption would occur upon re-play of the tape. By reducing the resolution of the signal impressed upon the magnetic tape, we’ve gained significant immunity to the kind of degradation and “noise” typically plaguing stored analog data. On the other hand, our data resolution would be limited to the scanning rate and the number of bits output by the A/D converter which interpreted the original analog signal, so the reproduction wouldn’t necessarily be “better” than with analog, merely more rugged. With the advanced technology of modern A/D’s, though, the tradeoff is acceptable for most applications. Also, by encoding different types of data into specific binary number schemes, digital storage allows us to archive a wide variety of information that is often difficult to encode in analog form.
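The two-level read-back just described can be sketched as a simple threshold test. The magnetization values below are made-up illustrations:

```python
def read_back(levels, threshold=0.5):
    """Interpret stored magnetization strengths (0.0 to 1.0) as binary
    data: anything above the threshold reads as 1, anything below as 0.
    Moderate drift in field strength therefore cannot corrupt the data."""
    return [1 if level > threshold else 0 for level in levels]

fresh = [0.9, 0.1, 0.9, 0.9]    # as originally written: 1 0 1 1
aged  = [0.7, 0.2, 0.8, 0.65]   # the same spots after years of fading
print(read_back(fresh))  # [1, 0, 1, 1]
print(read_back(aged))   # [1, 0, 1, 1] -- still intact despite the drift
```

An analog reading of the aged tape would report smaller signal magnitudes everywhere; the thresholded digital reading recovers the original bits exactly, so long as no spot drifts across the threshold.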
Text, for example, is represented quite easily with the binary ASCII code, seven bits for each character, including punctuation marks, spaces, and carriage returns. A wider range of text is encoded using the Unicode standard, in like manner. Any kind of numerical data can be represented using binary notation on digital media, and any kind of information that can be encoded in numerical form (which almost any kind can!) is storable, too. Techniques such as parity and checksum error detection can be employed to further guard against data corruption, in ways that analog does not lend itself to.
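As a concrete example of such error detection, a single even-parity bit appended to each 7-bit ASCII character lets any single-bit corruption be detected (though not corrected). A minimal sketch:

```python
def with_even_parity(char):
    """Append an even-parity bit to the 7-bit ASCII code of a character,
    so that every valid 8-bit stored word has an even number of 1s."""
    bits = ord(char) & 0x7F
    parity = bin(bits).count("1") % 2      # 1 if the count of 1s is odd
    return (bits << 1) | parity

def parity_ok(word):
    """True if an 8-bit word (7 data bits + parity bit) has even parity."""
    return bin(word).count("1") % 2 == 0

w = with_even_parity("A")   # 'A' = 1000001, two 1s, so the parity bit is 0
print(parity_ok(w))         # True
print(parity_ok(w ^ 0b10))  # False: one flipped bit is detected
```

Checksums extend the same idea across whole blocks of data rather than single characters.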
When we store information in some kind of circuit or device, we not only need some way to store and retrieve it, but also to locate precisely where in the device it is. Most, if not all, memory devices can be thought of as a series of mail boxes, folders in a file cabinet, or some other metaphor where information can be located in a variety of places. When we refer to the actual information being stored in the memory device, we usually refer to it as the data. The location of this data within the storage device is typically called the address, in a manner reminiscent of the postal service. With some types of memory devices, the address in which certain data is stored can be called up by means of parallel data lines in a digital circuit (we’ll discuss this in more detail later in this lesson). With other types of devices, data is addressed in terms of an actual physical location on the surface of some type of media (the tracks and sectors of circular computer disks, for instance). However, some memory devices such as magnetic tapes have a one-dimensional type of data addressing: if you want to play your favorite song in the middle of a cassette tape album, you have to fast-forward to that spot in the tape, arriving at the proper spot by means of trial-and-error, judging the approximate area by means of a counter that keeps track of tape position, and/or by the amount of time it takes to get there from the beginning of the tape. The access of data from a storage device falls roughly into two categories: random access and sequential access. Random access means that you can quickly and precisely address a specific data location within the device, and non-random simply means that you cannot. A vinyl record platter is an example of a random-access device: to skip to any song, you just position the stylus arm at whatever location on the record you want (compact audio disks do the same thing, only they do it automatically for you).
Cassette tape, on the other hand, is sequential. You have to wait to go past the other songs in sequence before you can access or address the song that you want to skip to. The process of storing a piece of data to a memory device is called writing, and the process of retrieving data is called reading. Memory devices allowing both reading and writing are equipped with a way to distinguish between the two tasks, so that no mistake is made by the user (writing new information to a device when all you wanted to do was see what was stored there). Some devices do not allow for the writing of new data, and are purchased “pre-written” from the manufacturer. Such is the case for vinyl records and compact audio disks, and this is typically referred to in the digital world as read-only memory, or ROM. Cassette audio and video tape, on the other hand, can be re-recorded (re-written) or purchased blank and recorded fresh by the user. This is often called read-write memory. Another distinction to be made for any particular memory technology is its volatility, or data storage permanence without power. Many electronic memory devices store binary data by means of circuits that are either latched in a “high” or “low” state, and this latching effect holds only as long as electric power is maintained to those circuits. Such memory would be properly referred to as volatile. Storage media such as magnetized disk or tape are nonvolatile, because no source of power is needed to maintain data storage. This is often confusing for new students of computer technology, because the volatile electronic memory typically used for the construction of computer devices is commonly and distinctly referred to as RAM (Random Access Memory). While RAM memory is typically randomly-accessed, so is virtually every other kind of memory device in the computer! What “RAM” really refers to is the volatility of the memory, and not its mode of access.
Nonvolatile memory integrated circuits in personal computers are commonly (and properly) referred to as ROM (Read-Only Memory), but their data contents are accessed randomly, just like the volatile memory circuits! Finally, there needs to be a way to denote how much data can be stored by any particular memory device. This, fortunately for us, is very simple and straightforward: just count up the number of bits (or bytes, 1 byte = 8 bits) of total data storage space. Due to the high capacity of modern data storage devices, metric prefixes are generally affixed to the unit of bytes in order to represent storage space: 1.6 Gigabytes is equal to 1.6 billion bytes, or 12.8 billion bits, of data storage capacity. The only caveat here is to be aware of rounded numbers. Because the storage mechanisms of many random-access memory devices are typically arranged so that the number of “cells” in which bits of data can be stored appears in binary progression (powers of 2), a “one kilobyte” memory device most likely contains 1024 (2 to the power of 10) locations for data bytes rather than exactly 1000. A “64 kbyte” memory device actually holds 65,536 bytes of data (2 to the 16th power), and should probably be called a “66 Kbyte” device to be more precise. When we round numbers in our base-10 system, we fall out of step with the round equivalents in the base-2 system.
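The power-of-2 rounding discussed above is easy to verify: each added address line doubles the number of storage locations, so capacities fall on powers of 2 rather than round decimal numbers. A small Python sketch (the function name is my own, for illustration only):

```python
# Each address line added to a memory device doubles the number of
# addressable locations, so capacities land on powers of 2, not
# round decimal numbers.

def capacity(address_lines):
    """Number of addressable locations for a given address bus width."""
    return 2 ** address_lines

print(capacity(10))   # "1 kbyte" device: 1024 locations, not 1000
print(capacity(16))   # "64 kbyte" device: 65,536 locations, not 64,000
```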
Now we can proceed to studying specific types of digital storage devices. To start, I want to explore some of the technologies which do not require any moving parts. These are not necessarily the newest technologies, as one might suspect, although they will most likely replace moving-part technologies in the future. A very simple type of electronic memory is the bistable multivibrator. Capable of storing a single bit of data, it is volatile (requiring power to maintain its memory) and very fast. The D-latch is probably the simplest implementation of a bistable multivibrator for memory usage, the D input serving as the data “write” input, the Q output serving as the “read” output, and the enable input serving as the read/write control line: If we desire more than one bit’s worth of storage (and we probably do), we’ll have to have many latches arranged in some kind of an array where we can selectively address which one (or which set) we’re reading from or writing to. Using a pair of tristate buffers, we can connect both the data write input and the data read output to a common data bus line, and enable those buffers to either connect the Q output to the data line (READ), connect the D input to the data line (WRITE), or keep both buffers in the High-Z state to disconnect D and Q from the data line (unaddressed mode). One memory “cell” would look like this, internally: When the address enable input is 0, both tristate buffers will be placed in high-Z mode, and the latch will be disconnected from the data input/output (bus) line. Only when the address enable input is active (1) will the latch be connected to the data bus. Every latch circuit, of course, will be enabled with a different “address enable” (AE) input line, which will come from a 1-of-n output decoder: In the above circuit, 16 memory cells are individually addressed with a 4-bit binary code input into the decoder. 
If a cell is not addressed, it will be disconnected from the 1-bit data bus by its internal tristate buffers: consequently, data cannot be either written or read through the bus to or from that cell. Only the cell circuit that is addressed by the 4-bit decoder input will be accessible through the data bus. This simple memory circuit is random-access and volatile. Technically, it is known as a static RAM. Its total memory capacity is 16 bits. Since it contains 16 addresses and has a data bus that is 1 bit wide, it would be designated as a 16 x 1 bit static RAM circuit. As you can see, it takes an incredible number of gates (and multiple transistors per gate!) to construct a practical static RAM circuit. This makes the static RAM a relatively low-density device, with less capacity than most other types of RAM technology per unit IC chip space. Because each cell circuit consumes a certain amount of power, the overall power consumption for a large array of cells can be quite high. Early static RAM banks in personal computers consumed a fair amount of power and generated a lot of heat, too. CMOS IC technology has made it possible to lower the specific power consumption of static RAM circuits, but low storage density is still an issue. To address this, engineers turned to the capacitor instead of the bistable multivibrator as a means of storing binary data. A tiny capacitor could serve as a memory cell, complete with a single MOSFET transistor for connecting it to the data bus for charging (writing a 1), discharging (writing a 0), or reading. Unfortunately, such tiny capacitors have very small capacitances, and their charge tends to “leak” away through any circuit impedances quite rapidly. To combat this tendency, engineers designed circuits internal to the RAM memory chip which would periodically read all cells and recharge (or “refresh”) the capacitors as needed. 
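The addressing scheme of this 16 x 1 static RAM can be sketched behaviorally in Python (not gate-level; the class and method names are illustrative, not part of any real device):

```python
class StaticRAM16x1:
    """Behavioral model: 16 one-bit cells behind a 4-bit address decoder."""

    def __init__(self):
        self.cells = [0] * 16          # sixteen D-latch memory cells

    def write(self, address, bit):
        # The decoder asserts exactly one Address Enable line; only that
        # cell's write buffer connects the data bus to its D input.
        self.cells[address & 0xF] = bit & 1

    def read(self, address):
        # Only the addressed cell's read buffer drives the data bus;
        # every other cell stays in the High-Z (disconnected) state.
        return self.cells[address & 0xF]

ram = StaticRAM16x1()
ram.write(0b0101, 1)      # write a 1 to address 5
print(ram.read(0b0101))   # 1
print(ram.read(0b0110))   # 0 (a different, unwritten cell)
```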
Although this added to the complexity of the circuit, it still required far less componentry than a RAM built of multivibrators. They called this type of memory circuit a dynamic RAM, because of its need for periodic refreshing. Recent advances in IC chip manufacturing have led to the introduction of flash memory, which works on a capacitive storage principle like the dynamic RAM, but uses the insulated gate of a MOSFET as the capacitor itself. Before the advent of transistors (especially the MOSFET), engineers had to implement digital circuitry with gates constructed from vacuum tubes. As you can imagine, the enormous comparative size and power consumption of a vacuum tube as compared to a transistor made memory circuits like static and dynamic RAM a practical impossibility. Other, rather ingenious, techniques to store digital data without the use of moving parts were developed.
Perhaps the most ingenious technique was that of the delay line. A delay line is any kind of device which delays the propagation of a pulse or wave signal. If you’ve ever heard a sound echo back and forth through a canyon or cave, you’ve experienced an audio delay line: the noise wave travels at the speed of sound, bouncing off of walls and reversing direction of travel. The delay line “stores” data on a very temporary basis if the signal is not strengthened periodically, but the very fact that it stores data at all is a phenomenon exploitable for memory technology. Early computer delay lines used long tubes filled with liquid mercury, which was used as the physical medium through which sound waves traveled along the length of the tube. An electrical/sound transducer was mounted at each end, one to create sound waves from electrical impulses, and the other to generate electrical impulses from sound waves. A stream of serial binary data was sent to the transmitting transducer as a voltage signal. The sequence of sound waves would travel from left to right through the mercury in the tube and be received by the transducer at the other end. The receiving transducer would receive the pulses in the same order as they were transmitted: A feedback circuit connected to the receiving transducer would drive the transmitting transducer again, sending the same sequence of pulses through the tube as sound waves, storing the data as long as the feedback circuit continued to function. The delay line functioned like a first-in-first-out (FIFO) shift register, and external feedback turned that shift register behavior into a ring counter, cycling the bits around indefinitely. The delay line concept suffered numerous limitations from the materials and technology that were then available. The EDVAC computer of the early 1950’s used 128 mercury-filled tubes, each one about 5 feet long and storing a maximum of 384 bits. 
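The recirculating (FIFO-turned-ring-counter) behavior of a delay line can be sketched in Python with a double-ended queue. This is an illustrative model of the feedback loop only, not a simulation of the acoustic physics:

```python
from collections import deque

# Bits currently "in flight" through the delay medium, oldest first.
line = deque([1, 0, 1, 1, 0, 0, 0, 1])

def clock_tick(line):
    bit = line.popleft()   # receiving transducer picks up the oldest pulse
    line.append(bit)       # feedback re-drives the transmitting transducer
    return bit

# After one full recirculation, the stored pattern is unchanged --
# the data persists for as long as the feedback circuit keeps running.
before = list(line)
for _ in range(len(line)):
    clock_tick(line)
print(list(line) == before)   # True
```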
Temperature changes would affect the speed of sound in the mercury, thus skewing the time delay in each tube and causing timing problems. Later designs replaced the liquid mercury medium with solid rods of glass, quartz, or special metal that delayed torsional (twisting) waves rather than longitudinal (lengthwise) waves, and operated at much higher frequencies. One such delay line used a special nickel-iron-titanium wire (chosen for its good temperature stability) about 95 feet in length, coiled to reduce the overall package size. The total delay time from one end of the wire to the other was about 9.8 milliseconds, and the highest practical clock frequency was 1 MHz. This meant that approximately 9800 bits of data could be stored in the delay line wire at any given time. Given different means of delaying signals which wouldn’t be so susceptible to environmental variables (such as serial pulses of light within a long optical fiber), this approach might someday find re-application. Another approach experimented with by early computer engineers was the use of a cathode ray tube (CRT), the type commonly used for oscilloscope, radar, and television viewscreens, to store binary data. Normally, the focused and directed electron beam in a CRT would be used to make bits of phosphor chemical on the inside of the tube glow, thus producing a viewable image on the screen. In this application, however, the desired result was the creation of an electric charge on the glass of the screen by the impact of the electron beam, which would then be detected by a metal grid placed directly in front of the CRT. Like the delay line, the so-called Williams Tube memory needed to be periodically refreshed with external circuitry to retain its data. Unlike the delay line mechanisms, it was virtually immune to the environmental factors of temperature and vibration. 
The IBM model 701 computer sported a Williams Tube memory with 4 Kilobyte capacity and a bad habit of “overcharging” bits on the tube screen with successive re-writes so that false “1” states might overflow to adjacent spots on the screen. The next major advance in computer memory came when engineers turned to magnetic materials as a means of storing binary data. It was discovered that certain compounds of iron, namely “ferrite,” possessed hysteresis curves that were almost square: Shown on a graph with the strength of the applied magnetic field on the horizontal axis (field intensity), and the actual magnetization (orientation of electron spins in the ferrite material) on the vertical axis (flux density), ferrite won’t become magnetized one direction until the applied field exceeds a critical threshold value. Once that critical value is exceeded, the electrons in the ferrite “snap” into magnetic alignment and the ferrite becomes magnetized. If the applied field is then turned off, the ferrite maintains full magnetism. To magnetize the ferrite in the other direction (polarity), the applied magnetic field must exceed the critical value in the opposite direction. Once that critical value is exceeded, the electrons in the ferrite “snap” into magnetic alignment in the opposite direction. Once again, if the applied field is then turned off, the ferrite maintains full magnetism. To put it simply, the magnetization of a piece of ferrite is “bistable.” Exploiting this strange property of ferrite, we can use this natural magnetic “latch” to store a binary bit of data. To set or reset this “latch,” we can use electric current through a wire or coil to generate the necessary magnetic field, which will then be applied to the ferrite. Jay Forrester of MIT applied this principle in inventing the magnetic “core” memory, which became the dominant computer memory technology during the 1960’s and early 1970’s.
A grid of wires, electrically insulated from one another, crossed through the center of many ferrite rings, each of which was called a “core.” As DC current moved through any wire from the power supply to ground, a circular magnetic field was generated around that energized wire. The resistor values were set so that the amount of current at the regulated power supply voltage would produce slightly more than 1/2 the critical magnetic field strength needed to magnetize any one of the ferrite rings. Therefore, if column #4 wire was energized, all the cores on that column would be subjected to the magnetic field from that one wire, but it would not be strong enough to change the magnetization of any of those cores. However, if column #4 wire and row #5 wire were both energized, the core at the intersection of column #4 and row #5 would be subjected to the sum of those two magnetic fields: a magnitude strong enough to “set” or “reset” the magnetization of that core. In other words, each core was addressed by the intersection of row and column. The distinction between “set” and “reset” was the direction of the core’s magnetic polarity, and the bit value of data was determined by the polarity of the voltages (with respect to ground) with which the row and column wires were energized. The following photograph shows a core memory board from a Data General brand, “Nova” model computer, circa late 1960’s or early 1970’s. It had a total storage capacity of 4 kbytes (that’s kilobytes, not megabytes!). A ball-point pen is shown for size comparison: The electronic components seen around the periphery of this board are used for “driving” the column and row wires with current, and also to read the status of a core. A close-up photograph reveals the ring-shaped cores, through which the matrix wires thread. Again, a ball-point pen is shown for size comparison: A core memory board of later design (circa 1971) is shown in the next photograph.
Its cores are much smaller and more densely packed, giving more memory storage capacity than the former board (8 kbytes instead of 4 kbytes): And, another close-up of the cores: Writing data to core memory was easy enough, but reading that data was a bit of a trick. To facilitate this essential function, a “read” wire was threaded through all the cores in a memory matrix, one end of it being grounded and the other end connected to an amplifier circuit. A pulse of voltage would be generated on this “read” wire if the addressed core changed states (from 0 to 1, or 1 to 0). In other words, to read a core’s value, you had to write either a 1 or a 0 to that core and monitor the voltage induced on the read wire to see if the core changed. Obviously, if the core’s state was changed, you would have to re-set it back to its original state, or else the data would have been lost. This process is known as a destructive read, because data may be changed (destroyed) as it is read. Thus, refreshing is necessary with core memory, although not in every case (that is, in the case of the core’s state not changing when either a 1 or a 0 was written to it). One major advantage of core memory over delay lines and Williams Tubes was nonvolatility. The ferrite cores maintained their magnetization indefinitely, with no power or refreshing required. It was also relatively easy to build, denser, and physically more rugged than any of its predecessors. Core memory was used from the 1960’s until the late 1970’s in many computer systems, including the computers used for the Apollo space program, CNC machine tool control computers, business (“mainframe”) computers, and industrial control systems. Despite the fact that core memory is long obsolete, the term “core” is still used sometimes with reference to a computer’s RAM memory. 
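The destructive-read-with-restore cycle described above can be modeled in a few lines of Python. This is a behavioral sketch only; the class and function names are my own:

```python
class Core:
    """One ferrite core: a magnetically bistable bit."""

    def __init__(self, state=0):
        self.state = state

    def write(self, bit):
        # The sense ("read") wire pulses only when the core flips state.
        pulsed = (self.state != bit)
        self.state = bit
        return pulsed

def destructive_read(core):
    pulsed = core.write(0)        # attempt to force the core to 0
    value = 1 if pulsed else 0    # a sense pulse means it had held a 1
    if value:
        core.write(1)             # restore the data the read destroyed
    return value

c = Core(1)
print(destructive_read(c))   # 1
print(c.state)               # 1 (re-written after the destructive read)
```

Note that the restore step is only needed when the core actually changed state, matching the text's observation that refreshing was not required in every case.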
All the while that delay lines, Williams Tube, and core memory technologies were being invented, the simple static RAM was being improved with smaller active component (vacuum tube or transistor) technology. Static RAM was never totally eclipsed by its competitors: even the old ENIAC computer of the 1950’s used vacuum tube ring-counter circuitry for data registers and computation. Eventually though, smaller and smaller scale IC chip manufacturing technology gave transistors the practical edge over other technologies, and core memory became a museum piece in the 1980’s. One last attempt at a magnetic memory better than core was the bubble memory. Bubble memory took advantage of a peculiar phenomenon in a mineral called garnet, which, when arranged in a thin film and exposed to a constant magnetic field perpendicular to the film, supported tiny regions of oppositely-magnetized “bubbles” that could be nudged along the film by prodding with other external magnetic fields. “Tracks” could be laid on the garnet to focus the movement of the bubbles by depositing magnetic material on the surface of the film. A continuous track was formed on the garnet which gave the bubbles a long loop in which to travel, and motive force was applied to the bubbles with a pair of wire coils wrapped around the garnet and energized with a 2-phase voltage. Bubbles could be created or destroyed with a tiny coil of wire strategically placed in the bubbles’ path. The presence of a bubble represented a binary “1” and the absence of a bubble represented a binary “0.” Data could be read and written in this chain of moving magnetic bubbles as they passed by the tiny coil of wire, much the same as the read/write “head” in a cassette tape player, reading the magnetization of the tape as it moves. Like core memory, bubble memory was nonvolatile: a permanent magnet supplied the necessary background field needed to support the bubbles when the power was turned off. 
Unlike core memory, however, bubble memory had phenomenal storage density: millions of bits could be stored on a chip of garnet only a couple of square inches in size. What killed bubble memory as a viable alternative to static and dynamic RAM was its slow, sequential data access. Being nothing more than an incredibly long serial shift register (ring counter), access to any particular portion of data in the serial string could be quite slow compared to other memory technologies. An electrostatic equivalent of the bubble memory is the Charge-Coupled Device (CCD) memory, an adaptation of the CCD devices used in digital photography. Like bubble memory, the bits are serially shifted along channels on the substrate material by clock pulses. Unlike bubble memory, the electrostatic charges decay and must be refreshed. CCD memory is therefore volatile, with high storage density and sequential access. Interesting, isn’t it? The old Williams Tube memory was adapted from CRT viewing technology, and CCD memory from video recording technology.
Read-only memory (ROM) is similar in design to static or dynamic RAM circuits, except that the “latching” mechanism is made for one-time (or limited) operation. The simplest type of ROM is that which uses tiny “fuses” which can be selectively blown or left alone to represent the two binary states. Obviously, once one of the little fuses is blown, it cannot be made whole again, so the writing of such ROM circuits is one-time only. Because it can be written (programmed) once, these circuits are sometimes referred to as PROMs (Programmable Read-Only Memory). However, not all writing methods are as permanent as blown fuses. If a transistor latch can be made which is resettable only with significant effort, a memory device that’s something of a cross between a RAM and a ROM can be built. Such a device is given a rather oxymoronic name: the EPROM (Erasable Programmable Read-Only Memory). EPROMs come in two basic varieties: electrically-erasable (EEPROM) and ultraviolet-erasable (UV/EPROM). Both types of EPROMs use capacitive charge MOSFET devices to latch on or off. UV/EPROMs are “cleared” by long-term exposure to ultraviolet light. They are easy to identify: they have a transparent glass window which exposes the silicon chip material to light. Once programmed, you must cover that glass window with tape to prevent ambient light from degrading the data over time. EPROMs are often programmed using higher signal voltages than what is used during “read-only” mode.

15.06: Memory with Moving Parts: “Drives”

The earliest form of digital data storage involving moving parts was the punched paper card. Joseph Marie Jacquard invented a weaving loom in 1801 which automatically followed weaving instructions set by carefully placed holes in paper cards.
This same technology was adapted to electronic computers in the 1950’s, with the cards being read mechanically (metal-to-metal contact through the holes), pneumatically (air blown through the holes, the presence of a hole sensed by air nozzle backpressure), or optically (light shining through the holes). An improvement over paper cards is the paper tape, still used in some industrial environments (notably the CNC machine tool industry), where data storage and speed demands are low and ruggedness is highly valued. Instead of wood-fiber paper, mylar material is often used, with optical reading of the tape being the most popular method. Magnetic tape (very similar to audio or video cassette tape) was the next logical improvement in storage media. It is still widely used today, as a means to store “backup” data for archiving and emergency restoration for other, faster methods of data storage. Like paper tape, magnetic tape is sequential access, rather than random access. In early home computer systems, regular audio cassette tape was used to store data in modulated form, the binary 1’s and 0’s represented by different frequencies (similar to FSK data communication). Access speed was terribly slow (if you were reading ASCII text from the tape, you could almost keep up with the pace of the letters appearing on the computer’s screen!), but it was cheap and fairly reliable. Tape suffered the disadvantage of being sequential access. To address this weak point, magnetic storage “drives” with disk- or drum-shaped media were built. An electric motor provided constant-speed motion. A movable read/write coil (also known as a “head”) was provided which could be positioned via servo-motors to various locations on the height of the drum or the radius of the disk, giving access that is almost random (you might still have to wait for the drum or disk to rotate to the proper position once the read/write coil has reached the right location). 
The disk shape lent itself best to portable media, and thus the floppy disk was born. Floppy disks (so-called because the magnetic media is thin and flexible) were originally made in 8-inch diameter formats. Later, the 5-1/4 inch variety was introduced, which was made practical by advances in media particle density. All things being equal, a larger disk has more space upon which to write data. However, storage density can be improved by making the little grains of iron-oxide material on the disk substrate smaller. Today, the 3-1/2 inch floppy disk is the preeminent format, with a capacity of 1.44 Mbytes (2.88 Mbytes on SCSI drives). Other portable drive formats are becoming popular, with Iomega’s 100 Mbyte “ZIP” and 1 Gbyte “JAZ” disks appearing as original equipment on some personal computers. Still, floppy drives have the disadvantage of being exposed to harsh environments, being constantly removed from the drive mechanism which reads, writes, and spins the media. The first hard disks were enclosed units, sealed from all dust and other particulate matter, and were definitely not portable. Keeping the media in an enclosed environment allowed engineers to avoid dust altogether, as well as spurious magnetic fields. This, in turn, allowed for much closer spacing between the head and the magnetic material, resulting in a much tighter-focused magnetic field to write data to the magnetic material. The following photograph shows a hard disk drive “platter” of approximately 30 Mbytes storage capacity. A ball-point pen has been set near the bottom of the platter for size reference: Modern disk drives use multiple platters made of hard material (hence the name, “hard drive”) with multiple read/write heads for every platter. The gap between head and platter is much smaller than the diameter of a human hair. If the hermetically-sealed environment inside a hard disk drive is contaminated with outside air, the hard drive will be rendered useless.
Dust will lodge between the heads and the platters, causing damage to the surface of the media. Here is a hard drive with four platters, although the angle of the shot only allows viewing of the top platter. This unit is complete with drive motor, read/write heads, and associated electronics. It has a storage capacity of 340 Mbytes, and is about the same length as the ball-point pen shown in the previous photograph: While it is inevitable that non-moving-part technology will replace mechanical drives in the future, current state-of-the-art electromechanical drives continue to rival “solid-state” nonvolatile memory devices in storage density, and at a lower cost. In 1998, a 250 Mbyte hard drive was announced that was approximately the size of a quarter (smaller than the metal platter hub in the center of the last hard disk photograph)! In any case, storage density and reliability will undoubtedly continue to improve. An incentive for digital data storage technology advancement was the advent of digitally encoded music. A joint venture between Sony and Philips resulted in the release of the “compact audio disc” (CD) to the public in the early 1980’s. This technology is a read-only type, the media being a transparent plastic disc backed by a thin film of aluminum. Binary bits are encoded as pits in the plastic which vary the path length of a low-power laser beam. Data is read by the low-power laser (the beam of which can be focused more precisely than normal light) reflecting off the aluminum to a photocell receiver. The advantages of CDs over magnetic tape are legion. Being digital, the information is highly resistant to corruption. Being non-contact in operation, there is no wear incurred through playing. Being optical, they are immune to magnetic fields (which can easily corrupt data on magnetic tape or disks). It is possible to purchase CD “burner” drives which contain the high-power laser necessary to write to a blank disc.
Following on the heels of the music industry, the video entertainment industry has leveraged the technology of optical storage with the introduction of the Digital Video Disc, or DVD. Using a similar-sized plastic disc as the music CD, a DVD employs closer spacing of pits to achieve much greater storage density. This increased density allows feature-length movies to be encoded on DVD media, complete with trivia information about the movie, director’s notes, and so on. Much effort is being directed toward the development of a practical read/write optical disc (CD-W). Success has been found in using chemical substances whose color may be changed through exposure to bright laser light, then “read” by lower-intensity light. These optical discs are immediately identified by their characteristically colored surfaces, as opposed to the silver-colored underside of a standard CD.
Suppose we wanted to build a device that could add two binary bits together. Such a device is known as a half-adder, and its gate circuit looks like this: The Σ symbol represents the “sum” output of the half-adder, the sum’s least significant bit (LSB). Cout represents the “carry” output of the half-adder, the sum’s most significant bit (MSB). If we were to implement this same function in ladder (relay) logic, it would look like this: Either circuit is capable of adding two binary digits together. The mathematical “rules” of how to add bits together are intrinsic to the hard-wired logic of the circuits. If we wanted to perform a different arithmetic operation with binary bits, such as multiplication, we would have to construct another circuit. The above circuit designs will only perform one function: add two binary bits together. To make them do something else would take re-wiring, and perhaps different componentry. In this sense, digital arithmetic circuits aren’t much different from analog arithmetic (operational amplifier) circuits: they do exactly what they’re wired to do, no more and no less. We are not, however, restricted to designing digital computer circuits in this manner. It is possible to embed the mathematical “rules” for any arithmetic operation in the form of digital data rather than in hard-wired connections between gates. The result is unparalleled flexibility in operation, giving rise to a whole new kind of digital device: the programmable computer. While this chapter is by no means exhaustive, it provides what I believe is a unique and interesting look at the nature of programmable computer devices, starting with two devices often overlooked in introductory textbooks: look-up table memories and finite-state machines.

16.02: Look-up Tables

Having learned about digital memory devices in the last chapter, we know that it is possible to store binary data within solid-state devices.
Those storage “cells” within solid-state memory devices are easily addressed by driving the “address” lines of the device with the proper binary value(s). Suppose we had a ROM memory circuit written, or programmed, with certain data, such that the address lines of the ROM served as inputs and the data lines of the ROM served as outputs, generating the characteristic response of a particular logic function. Theoretically, we could program this ROM chip to emulate whatever logic function we wanted without having to alter any wire connections or gates. Consider the following example of a 4 x 2 bit ROM memory (a very small memory!) programmed with the functionality of a half adder: If this ROM has been written with the above data (representing a half-adder’s truth table), driving the A and B address inputs will cause the respective memory cells in the ROM chip to be enabled, thus outputting the corresponding data as the Σ (Sum) and Cout bits. Unlike the half-adder circuit built of gates or relays, this device can be set up to perform any logic function at all with two inputs and two outputs, not just the half-adder function. To change the logic function, all we would need to do is write a different table of data to another ROM chip. We could even use an EPROM chip which could be re-written at will, giving the ultimate flexibility in function. It is vitally important to recognize the significance of this principle as applied to digital circuitry. Whereas the half-adder built from gates or relays processes the input bits to arrive at a specific output, the ROM simply remembers what the outputs should be for any given combination of inputs. This is not much different from the “times tables” memorized in grade school: rather than having to calculate the product of 5 times 6 (5 + 5 + 5 + 5 + 5 + 5 = 30), school-children are taught to remember that 5 x 6 = 30, and then expected to recall this product from memory as needed. 
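A minimal sketch of the 4 x 2 bit ROM half-adder, assuming nothing beyond the truth table given above: the ROM is just a table indexed by the (A, B) address, and the "logic" lives entirely in the stored data.

```python
# Address (A, B) -> stored 2-bit data word (Cout, Sum).
# This table IS the half-adder; no gates compute anything.
HALF_ADDER_ROM = {
    (0, 0): (0, 0),
    (0, 1): (0, 1),
    (1, 0): (0, 1),
    (1, 1): (1, 0),
}

def rom_half_add(a, b):
    # No computation takes place here -- just a memory look-up.
    return HALF_ADDER_ROM[(a, b)]

print(rom_half_add(1, 1))   # (1, 0): carry 1, sum 0, i.e. binary 10
```

Re-writing the four data words would turn this same "circuit" into any other two-input, two-output logic function, which is exactly the flexibility the text describes.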
Likewise, rather than the logic function depending on the functional arrangement of hard-wired gates or relays (hardware), it depends solely on the data written into the memory (software). Such a simple application, with definite outputs for every input, is called a look-up table, because the memory device simply “looks up” what the output(s) should be for any given combination of input states. This application of a memory device to perform logical functions is significant for several reasons:

• Software is much easier to change than hardware.
• Software can be archived on various kinds of memory media (disk, tape), thus providing an easy way to document and manipulate the function in a “virtual” form; hardware can only be “archived” abstractly in the form of some kind of graphical drawing.
• Software can be copied from one memory device (such as the EPROM chip) to another, allowing the ability for one device to “learn” its function from another device.
• Software such as the logic function example can be designed to perform functions that would be extremely difficult to emulate with discrete logic gates (or relays!).

The usefulness of a look-up table becomes more and more evident with increasing complexity of function. Suppose we wanted to build a 4-bit adder circuit using a ROM. We’d require a ROM with 8 address lines (two 4-bit numbers to be added together), plus 4 data lines (for the signed output): With 256 addressable memory locations in this ROM chip, we would have a fair amount of programming to do, telling it what binary output to generate for each and every combination of binary inputs. We would also run the risk of making a mistake in our programming and have it output an incorrect sum, if we weren’t careful. However, the flexibility of being able to configure this function (or any function) through software alone generally outweighs those costs.
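Programming this 256 x 4 adder ROM amounts to filling all 256 addresses with precomputed sums. A Python sketch of what such a programming pass might look like (the helper names are my own; the 8-bit address is split into two 4-bit signed operands):

```python
def to_signed4(n):
    """Interpret a 4-bit pattern as a 2's complement value (-8..+7)."""
    return n - 16 if n >= 8 else n

# Fill every address with the (truncated) sum of its two operand nibbles.
ROM = []
for address in range(256):
    a = to_signed4(address & 0xF)          # A3..A0: first operand
    b = to_signed4((address >> 4) & 0xF)   # A7..A4: second operand
    ROM.append((a + b) & 0xF)              # only 4 data bits are stored

# Reading the table: 3 + 2 = 5
print(to_signed4(ROM[(2 << 4) | 3]))   # 5
# Overflow wraps around: 7 + 6 = 13 does not fit in 4 signed bits
print(to_signed4(ROM[(6 << 4) | 7]))   # -3
```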
Consider some of the advanced functions we could implement with the above “adder.” We know that when we add two sets of numbers in 2’s complement signed notation, we risk having the answer overflow. For instance, if we try to add 0111 (decimal 7) to 0110 (decimal 6) with only a 4-bit number field, the answer we’ll get is 1101 (decimal -3) instead of the correct value, 13 (7 + 6), which cannot be expressed using 4 signed bits. If we wanted to, we could avoid the strange answers given in overflow conditions by programming this look-up table circuit to output something else in conditions where we know overflow will occur (that is, in any case where the real sum would fall outside the range of -8 to +7). One alternative might be to program the ROM to output the quantity 0111 (the maximum positive value that can be represented with 4 signed bits), or any other value that we determined to be more appropriate for the application than the typical overflowed “error” value that a regular adder circuit would output. It’s all up to the programmer to decide what he or she wants this circuit to do, because we are no longer limited by the constraints of logic gate functions. The possibilities don’t stop at customized logic functions, either. By adding more address lines to the 256 x 4 ROM chip, we can expand the look-up table to include multiple functions: With two more address lines, the ROM chip will have 4 times as many addresses as before (1024 instead of 256). This ROM could be programmed so that when A8 and A9 were both low, the output data represented the sum of the two 4-bit binary numbers input on address lines A0 through A7, just as we had with the previous 256 x 4 ROM circuit. For the addresses A8=1 and A9=0, it could be programmed to output the difference (subtraction) between the first 4-bit binary number (A0 through A3) and the second binary number (A4 through A7).
For the addresses A8=0 and A9=1, we could program the ROM to output the difference (subtraction) of the two numbers in reverse order (second - first rather than first - second), and finally, for the addresses A8=1 and A9=1, the ROM could be programmed to compare the two inputs and output an indication of equality or inequality. What we will have then is a device that can perform four different arithmetical operations on 4-bit binary numbers, all by “looking up” the answers programmed into it. If we had used a ROM chip with more than two additional address lines, we could program it with a wider variety of functions to perform on the two 4-bit inputs. There are a number of operations peculiar to binary data (such as parity check or Exclusive-ORing of bits) that we might find useful to have programmed in such a look-up table. Devices such as this, which can perform a variety of arithmetical tasks as dictated by a binary input code, are known as Arithmetic Logic Units (ALUs), and they comprise one of the essential components of computer technology. Although modern ALUs are more often constructed from very complex combinational logic (gate) circuits for reasons of speed, it should be comforting to know that the exact same functionality may be duplicated with a “dumb” ROM chip programmed with the appropriate look-up table(s). In fact, this exact approach was used by IBM engineers in 1959 with the development of the IBM 1401 and 1620 computers, which used look-up tables to perform addition, rather than binary adder circuitry. The 1620 was fondly known as the “CADET,” which stood for “Can’t Add, Doesn’t Even Try.” A very common application for look-up table ROMs is in control systems where a custom mathematical function needs to be represented.
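The four-function, 1024 x 4 ROM just described can be sketched as follows. The equality encoding (0001 for equal, 0000 for unequal) is an arbitrary choice for illustration, not taken from the text:

```python
def alu_rom(a, b, a8, a9):
    """Emulate the 1024 x 4 ROM: A0-A3 and A4-A7 carry the two 4-bit
    operands (a and b), while address lines A8 and A9 select the function."""
    if (a8, a9) == (0, 0):
        result = a + b            # add
    elif (a8, a9) == (1, 0):
        result = a - b            # first - second
    elif (a8, a9) == (0, 1):
        result = b - a            # second - first
    else:
        return 0b0001 if a == b else 0b0000   # equality indication
    return result & 0b1111        # 4-bit data output (2's complement wrap)
```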
Such an application is found in computer-controlled fuel injection systems for automobile engines, where the proper air/fuel mixture ratio for efficient and clean operation changes with several environmental and operational variables. Tests performed on engines in research laboratories determine what these ideal ratios are for varying conditions of engine load, ambient air temperature, and barometric air pressure. The variables are measured with sensor transducers, their analog outputs converted to digital signals with A/D circuitry, and those parallel digital signals used as address inputs to a high-capacity ROM chip programmed to output the optimum digital value for air/fuel ratio for any of these given conditions. Sometimes, ROMs are used to provide one-dimensional look-up table functions, for “correcting” digitized signal values so that they more accurately represent their real-world significance. An example of such a device is a thermocouple transmitter, which measures the millivoltage signal generated by a junction of dissimilar metals and outputs a signal which is supposed to directly correspond to that junction temperature. Unfortunately, thermocouple junctions do not have perfectly linear temperature/voltage responses, and so the raw voltage signal is not perfectly proportional to temperature. By digitizing the voltage signal (A/D conversion) and sending that digital value to the address of a ROM programmed with the necessary correction values, the ROM’s programming could eliminate some of the nonlinearity of the thermocouple’s temperature-to-millivoltage relationship, so that the final output of the device would be more accurate. The popular instrumentation term for such a look-up table is a digital characterizer. Another application for look-up tables is in special code translation. 
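A digital characterizer of this sort reduces to a one-dimensional table indexed by the A/D count. The correction values below are invented for illustration only; a real device would be programmed from the thermocouple's published temperature/voltage tables:

```python
# Hypothetical 8-entry characterizer ROM: the digitized millivoltage
# (A/D code 0-7) addresses the ROM, and the stored data is the
# linearized temperature. These numbers are made up for illustration.
CORRECTION_ROM = [0, 24, 51, 80, 112, 147, 185, 226]

def characterize(adc_code):
    """Look up the corrected (linearized) value for a raw A/D code."""
    return CORRECTION_ROM[adc_code]
```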
A 128 x 8 ROM, for instance, could be used to translate 7-bit ASCII code to 8-bit EBCDIC code: Again, all that is required is for the ROM chip to be properly programmed with the necessary data so that each valid ASCII input will produce a corresponding EBCDIC output code.
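A fragment of such a translation table, showing a few genuine ASCII-to-EBCDIC pairs (a real 128 x 8 ROM would be programmed with all 128 locations):

```python
# Partial ASCII -> EBCDIC translation ROM: the 7-bit ASCII code is the
# address; the stored byte is the EBCDIC equivalent.
ASCII_TO_EBCDIC = {
    0x20: 0x40,  # space
    0x30: 0xF0,  # '0'
    0x41: 0xC1,  # 'A'
    0x42: 0xC2,  # 'B'
}

def to_ebcdic(text):
    """Translate a string, one character (one ROM access) at a time."""
    return [ASCII_TO_EBCDIC[ord(c)] for c in text]
```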
Feedback is a fascinating engineering principle. It can turn a rather simple device or process into something substantially more complex. We’ve seen the effects of feedback intentionally integrated into circuit designs with some rather astounding effects:

• Comparator + negative feedback → controllable-gain amplifier
• Comparator + positive feedback → comparator with hysteresis
• Combinational logic + positive feedback → multivibrator

In the field of process instrumentation, feedback is used to transform a simple measurement system into something capable of control:

• Measurement system + negative feedback → closed-loop control system

Feedback, both positive and negative, has the tendency to add whole new dynamics to the operation of a device or system. Sometimes, these new dynamics find useful application, while other times they are merely interesting. With look-up tables programmed into memory devices, feedback from the data outputs back to the address inputs creates a whole new type of device: the Finite State Machine, or FSM: The above circuit illustrates the basic idea: the data stored at each address becomes the next storage location that the ROM gets addressed to. The result is a specific sequence of binary numbers (following the sequence programmed into the ROM) at the output, over time. To avoid signal timing problems, though, we need to connect the data outputs back to the address inputs through a 4-bit D-type flip-flop, so that the sequence takes place step by step to the beat of a controlled clock pulse: An analogy for the workings of such a device might be an array of post-office boxes, each one with an identifying number on the door (the address), and each one containing a piece of paper with the address of another P.O. box written on it (the data). A person, opening the first P.O. box, would find in it the address of the next P.O. box to open. By storing a particular pattern of addresses in the P.O.
boxes, we can dictate the sequence in which each box gets opened, and therefore the sequence of which paper gets read. Having 16 addressable memory locations in the ROM, this Finite State Machine would have 16 different stable “states” in which it could latch. In each of those states, the identity of the next state would be programmed into the ROM, awaiting the signal of the next clock pulse to be fed back to the ROM as an address. One useful application of such an FSM would be to generate an arbitrary count sequence, such as Gray Code: Try to follow the Gray Code count sequence as the FSM would do it: starting at 0000, follow the data stored at that address (0001) to the next address, and so on (0011), and so on (0010), and so on (0110), etc. The result, for the program table shown, is that the sequence of addressing jumps around from address to address in what looks like a haphazard fashion, but when you check each address that is accessed, you will find that it follows the correct order for 4-bit Gray code. When the FSM arrives at its last programmed state (address 1000), the data stored there is 0000, which starts the whole sequence over again at address 0000 in step with the next clock pulse. We could expand on the capabilities of the above circuit by using a ROM with more address lines, and adding more programming data: Now, just like the look-up table adder circuit that we turned into an Arithmetic Logic Unit (+, -, x, / functions) by utilizing more address lines as “function control” inputs, this FSM counter can be used to generate more than one count sequence, a different sequence programmed for the four feedback bits (A0 through A3) for each of the two function control line input combinations (A4 = 0 or 1). If A4 is 0, the FSM counts in binary; if A4 is 1, the FSM counts in Gray Code. In either case, the counting sequence is arbitrary: determined by the whim of the programmer.
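The Gray-code counter can be simulated directly: a dictionary plays the part of the programmed ROM, and each loop iteration stands in for one clock pulse latching the ROM's data back onto the address lines:

```python
# The 4-bit Gray code sequence to be "programmed" into the ROM.
GRAY = [0b0000, 0b0001, 0b0011, 0b0010, 0b0110, 0b0111, 0b0101, 0b0100,
        0b1100, 0b1101, 0b1111, 0b1110, 0b1010, 0b1011, 0b1001, 0b1000]

# At each address, store the next state (the last address, 1000,
# stores 0000 to restart the sequence).
ROM = {GRAY[i]: GRAY[(i + 1) % 16] for i in range(16)}

state = 0b0000
sequence = []
for _ in range(16):          # one loop pass per clock pulse
    sequence.append(state)
    state = ROM[state]       # flip-flop latches ROM data as next address

# After 16 pulses the FSM has produced the full Gray sequence and
# wrapped back to address 0000.
```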
For that matter, the counting sequence doesn’t even have to have 16 steps, as the programmer may decide to have the sequence recycle to 0000 at any one of the steps at all. It is a completely flexible counting device, the behavior strictly determined by the software (programming) in the ROM. We can expand on the capabilities of the FSM even more by utilizing a ROM chip with additional address input and data output lines. Take the following circuit, for example: Here, the D0 through D3 data outputs are used exclusively for feedback to the A0 through A3 address lines. Data output lines D4 through D7 can be programmed to output something other than the FSM’s “state” value. Since four data output bits are fed back to four address bits, this is still a 16-state device. However, having the output data come from other data output lines gives the programmer more freedom to configure functions than before. In other words, this device can do far more than just count! The programmed output of this FSM is dependent not only upon the state of the feedback address lines (A0 through A3), but also the states of the input lines (A4 through A7). The D-type flip/flop’s clock signal input does not have to come from a pulse generator, either. To make things more interesting, the flip/flop could be wired up to clock on some external event, so that the FSM goes to the next state only when an input signal tells it to. Now we have a device that better fulfills the meaning of the word “programmable.” The data written to the ROM is a program in the truest sense: the outputs follow a pre-established order based on the inputs to the device and which “step” the device is on in its sequence. This is very close to the operating design of the Turing Machine, a theoretical computing device invented by Alan Turing, mathematically proven to be able to solve any known arithmetic problem, given enough memory capacity.
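The extended FSM can be sketched the same way: the ROM is addressed by both the latched state (A0-A3) and the external inputs (A4-A7), while D0-D3 feed back as the next state and D4-D7 drive separate outputs. The two-state program below is invented purely to show the structure:

```python
# Each ROM location, addressed by (inputs, current state), stores both
# the next state (fed back through the flip-flops) and an output value
# on the remaining data lines.
ROM = {
    # (inputs, state): (next_state, outputs)
    (0, 0): (1, 0b0001),
    (0, 1): (0, 0b0010),
    (1, 0): (0, 0b0100),
    (1, 1): (1, 0b1000),
}

def clock_pulse(state, inputs):
    """One clock edge: look up and latch the next state and outputs."""
    return ROM[(inputs, state)]
```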
Early computer science pioneers such as Alan Turing and John Von Neumann postulated that for a computing device to be really useful, it not only had to be able to generate specific outputs as dictated by programmed instructions, but it also had to be able to write data to memory, and be able to act on that data later. Both the program steps and the processed data were to reside in a common memory “pool,” thus giving way to the label of the stored-program computer. Turing’s theoretical machine utilized a sequential-access tape, which would store data for a control circuit to read, the control circuit re-writing data to the tape and/or moving the tape to a new position to read more data. Modern computers use random-access memory devices instead of sequential-access tapes to accomplish essentially the same thing, except with greater capability. A helpful illustration is that of early automatic machine tool control technology. Called open-loop, or sometimes just NC (numerical control), these control systems would direct the motion of a machine tool such as a lathe or a mill by following instructions programmed as holes in paper tape. The tape would be run one direction through a “read” mechanism, and the machine would blindly follow the instructions on the tape without regard to any other conditions. While these devices eliminated the burden of having to have a human machinist direct every motion of the machine tool, they were limited in usefulness. Because the machine was blind to the real world, only following the instructions written on the tape, it could not compensate for changing conditions such as expansion of the metal or wear of the mechanisms.
Also, the tape programmer had to be acutely aware of the sequence of previous instructions in the machine’s program to avoid troublesome circumstances (such as telling the machine tool to move the drill bit laterally while it is still inserted into a hole in the work), since the device had no memory other than the tape itself, which was read-only. Upgrading from a simple tape reader to a Finite State control design gave the device a sort of memory that could be used to keep track of what it had already done (through feedback of some of the data bits to the address bits), so at least the programmer could decide to have the circuit remember “states” that the machine tool could be in (such as “coolant on,” or tool position). However, there was still room for improvement. The ultimate approach is to have the program give instructions which would include the writing of new data to a read/write (RAM) memory, which the program could easily recall and process. This way, the control system could record what it had done, and any sensor-detectable process changes, much in the same way that a human machinist might jot down notes or measurements on a scratch-pad for future reference in his or her work. This is what is referred to as CNC, or Computer Numerical Control: a closed-loop approach. Engineers and computer scientists looked forward to the possibility of building digital devices that could modify their own programming, much the same as the human brain adapts the strength of inter-neural connections depending on environmental experiences (that is why memory retention improves with repeated study, and behavior is modified through consequential feedback). Only if the computer’s program were stored in the same writable memory “pool” as the data would this be practical. It is interesting to note that the notion of a self-modifying program is still considered to be on the cutting edge of computer science.
Most computer programming relies on rather fixed sequences of instructions, with a separate field of data being the only information that gets altered. To facilitate the stored-program approach, we require a device that is much more complex than the simple FSM, although many of the same principles apply. First, we need read/write memory that can be easily accessed: this is easy enough to do. Static or dynamic RAM chips do the job well, and are inexpensive. Secondly, we need some form of logic to process the data stored in memory. Because standard and Boolean arithmetic functions are so useful, we can use an Arithmetic Logic Unit (ALU) such as the look-up table ROM example explored earlier. Finally, we need a device that controls how and where data flows between the memory, the ALU, and the outside world. This so-called Control Unit is the most mysterious piece of the puzzle yet, being comprised of tri-state buffers (to direct data to and from buses) and decoding logic which interprets certain binary codes as instructions to carry out. Sample instructions might be something like: “add the number stored at memory address 0010 with the number stored at memory address 1101,” or, “determine the parity of the data in memory address 0111.” The choice of which binary codes represent which instructions for the Control Unit to decode is largely arbitrary, just as the choice of which binary codes to use in representing the letters of the alphabet in the ASCII standard was largely arbitrary. ASCII, however, is now an internationally recognized standard, whereas control unit instruction codes are almost always manufacturer-specific. Putting these components together (read/write memory, ALU, and control unit) results in a digital device that is typically called a processor. If minimal memory is used, and all the necessary components are contained on a single integrated circuit, it is called a microprocessor. 
When combined with the necessary bus-control support circuitry, it is known as a Central Processing Unit, or CPU. CPU operation is summed up in the so-called fetch/execute cycle. Fetch means to read an instruction from memory for the Control Unit to decode. A small binary counter in the CPU (known as the program counter or instruction pointer) holds the address value where the next instruction is stored in main memory. The Control Unit sends this binary address value to the main memory’s address lines, and the memory’s data output is read by the Control Unit to send to another holding register. If the fetched instruction requires reading more data from memory (for example, in adding two numbers together, we have to read both the numbers that are to be added from main memory or from some other source), the Control Unit appropriately addresses the location of the requested data and directs the data output to ALU registers. Next, the Control Unit would execute the instruction by signaling the ALU to do whatever was requested with the two numbers, and direct the result to another register called the accumulator. The instruction has now been “fetched” and “executed,” so the Control Unit now increments the program counter to step to the next instruction, and the cycle repeats itself. As one might guess, carrying out even simple instructions is a tedious process. Several steps are necessary for the Control Unit to complete the simplest of mathematical procedures. This is especially true for arithmetic procedures such as exponents, which involve repeated executions (“iterations”) of simpler functions. Just imagine the sheer quantity of steps necessary within the CPU to update the bits of information for the graphic display on a flight simulator game! The only thing which makes such a tedious process practical is the fact that microprocessor circuits are able to repeat the fetch/execute cycle with great speed.
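The fetch/execute cycle can be illustrated with a toy processor. The two opcodes here (0x01 for "add two memory locations into the accumulator" and 0xFF for "halt") are invented for this sketch, not taken from any real CPU:

```python
# Program and data share one memory "pool": the first four bytes are
# instructions, the last two are the data to be added.
memory = [0x01, 6, 7, 0xFF, 0, 0, 40, 2]

pc = 0        # program counter (instruction pointer)
acc = 0       # accumulator
while True:
    opcode = memory[pc]               # fetch
    if opcode == 0x01:                # decode/execute: ADD addr1, addr2
        addr1, addr2 = memory[pc + 1], memory[pc + 2]
        acc = memory[addr1] + memory[addr2]
        pc += 3                       # step past opcode and operands
    elif opcode == 0xFF:              # HALT
        break
```

After the halt, the accumulator holds 40 + 2 = 42, the sum of the two data locations.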
In some microprocessor designs, there are minimal programs stored within a special ROM memory internal to the device (called microcode) which handle all the sub-steps necessary to carry out more complex math operations. This way, only a single instruction has to be read from the program RAM to do the task, and the programmer doesn’t have to deal with trying to tell the microprocessor how to do every minute step. In essence, it’s a processor inside of a processor; a program running inside of a program.
The “vocabulary” of instructions which any particular microprocessor chip possesses is specific to that model of chip. An Intel 80386, for example, uses a completely different set of binary codes than a Motorola 68020, for designating equivalent functions. Unfortunately, there are no standards in place for microprocessor instructions. This makes programming at the very lowest level very confusing and specialized. When a human programmer develops a set of instructions to directly tell a microprocessor how to do something (like automatically control the fuel injection rate to an engine), they’re programming in the CPU’s own “language.” This language, which consists of the very same binary codes which the Control Unit inside the CPU chip decodes to perform tasks, is often referred to as machine language. While machine language software can be “worded” in binary notation, it is often written in hexadecimal form, because it is easier for human beings to work with. For example, I’ll present just a few of the common instruction codes for the Intel 8080 microprocessor chip: Even with hexadecimal notation, these instructions can be easily confused and forgotten. For this purpose, another aid for programmers exists called assembly language. With assembly language, two to four letter mnemonic words are used in place of the actual hex or binary code for describing program steps. For example, the instruction 7B for the Intel 8080 would be “MOV A,E” in assembly language. The mnemonics, of course, are useless to the microprocessor, which can only understand binary codes, but it is an expedient way for programmers to manage the writing of their programs on paper or in a text editor (word processor).
There are even programs written for computers called assemblers which understand these mnemonics, translating them to the appropriate binary codes for a specified target microprocessor, so that the programmer can write a program in the computer’s native language without ever having to deal with strange hex or tedious binary code notation. Once a program is developed by a person, it must be written into memory before a microprocessor can execute it. If the program is to be stored in ROM (which some are), this can be done with a special machine called a ROM programmer, or (if you’re masochistic), by plugging the ROM chip into a breadboard, powering it up with the appropriate voltages, and writing data by making the right wire connections to the address and data lines, one at a time, for each instruction. If the program is to be stored in volatile memory, such as the operating computer’s RAM memory, there may be a way to type it in by hand through that computer’s keyboard (some computers have a mini-program stored in ROM which tells the microprocessor how to accept keystrokes from a keyboard and store them as commands in RAM), even if it is too dumb to do anything else. Many “hobby” computer kits work like this. If the computer to be programmed is a fully-functional personal computer with an operating system, disk drives, and the whole works, you can simply command the assembler to store your finished program onto a disk for later retrieval. To “run” your program, you would simply type your program’s filename at the prompt and press the Enter key; the operating system would then load your program from disk into memory, set the microprocessor’s Program Counter register to point to the address in memory where the first instruction is stored, and your program would run from there. Although programming in machine language or assembly language makes for fast and highly efficient programs, it takes a lot of time and skill to do so for anything but the simplest tasks, because each machine language instruction is so crude.
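At its simplest, an assembler is itself a look-up process. The sketch below maps a few real Intel 8080 mnemonics (including the MOV A,E example above) to their opcodes; a real assembler would also handle operands, labels, and addresses:

```python
# A miniature "assembler" for a handful of Intel 8080 instructions.
OPCODES = {
    "NOP": 0x00,      # no operation
    "MOV A,B": 0x78,  # copy register B into A
    "MOV A,E": 0x7B,  # copy register E into A
    "HLT": 0x76,      # halt the processor
}

def assemble(source_lines):
    """Translate mnemonic lines into machine-language bytes."""
    return [OPCODES[line.strip()] for line in source_lines]
```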
The answer to this is to develop ways for programmers to write in “high level” languages, which can more efficiently express human thought. Instead of typing in dozens of cryptic assembly language codes, a programmer writing in a high-level language would be able to write something like this . . . and expect the computer to print “Hello, world!” with no further instruction on how to do so. This is a great idea, but how does a microprocessor understand such “human” thinking when its vocabulary is so limited? The answer comes in two different forms: interpretation, or compilation. Just like two people speaking different languages, there has to be some way to transcend the language barrier in order for them to converse. A translator is needed to translate each person’s words to the other person’s language, one way at a time. For the microprocessor, this means another program, written by another programmer in machine language, which recognizes the ASCII character patterns of high-level commands such as Print (P-r-i-n-t) and can translate them into the necessary bite-size steps that the microprocessor can directly understand. If this translation is done during program execution, just like a translator intervening between two people in a live conversation, it is called “interpretation.” On the other hand, if the entire program is translated to machine language in one fell swoop, like a translator recording a monologue on paper and then translating all the words at one sitting into a written document in the other language, the process is called “compilation.” Interpretation is simple, but makes for a slow-running program because the microprocessor has to continually translate the program between steps, and that takes time. Compilation takes time initially to translate the whole program into machine code, but the resulting machine code needs no translation after that and runs faster as a consequence. Programming languages such as BASIC and FORTH are interpreted.
Languages such as C, C++, FORTRAN, and PASCAL are compiled. Compiled languages are generally considered to be the languages of choice for professional programmers, because of the efficiency of the final product. Naturally, because machine language vocabularies vary widely from microprocessor to microprocessor, and since high-level languages are designed to be as universal as possible, the interpreting and compiling programs necessary for language translation must be microprocessor-specific. Development of these interpreters and compilers is a most impressive feat: the people who make these programs most definitely earn their keep, especially when you consider the work they must do to keep their software product current with the rapidly-changing microprocessor models appearing on the market! To mitigate this difficulty, the trend-setting manufacturers of microprocessor chips (most notably, Intel and Motorola) try to design their new products to be backward compatible with their older products. For example, the entire instruction set for the Intel 80386 chip is contained within the latest Pentium 4 chips, although the Pentium chips have additional instructions that the 80386 chips lack. What this means is that machine-language programs (compilers, too) written for 80386 computers will run on the latest and greatest Intel Pentium 4 CPU, but machine-language programs written specifically to take advantage of the Pentium’s larger instruction set will not run on an 80386, because the older CPU simply doesn’t have some of those instructions in its vocabulary: the Control Unit inside the 80386 cannot decode them. Building on this theme, most compilers have settings that allow the programmer to select which CPU type he or she wants to compile machine-language code for.
If they select the 80386 setting, the compiler will perform the translation using only instructions known to the 80386 chip; if they select the Pentium setting, the compiler is free to make use of all instructions known to Pentiums. This is analogous to telling a translator the minimum reading level of their audience: a document translated for a child will be understandable to an adult, but a document translated for an adult may very well be gibberish to a child.
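The interpretation approach described earlier can be sketched as a program that recognizes and acts on high-level commands during execution. The one-command language here (just Print) is invented for illustration:

```python
def interpret(program):
    """Translate and execute each high-level line as it is encountered,
    the way a BASIC interpreter works."""
    output = []
    for line in program.splitlines():
        line = line.strip()
        if line.startswith('Print '):
            output.append(line[6:].strip('"'))   # execute the Print
        elif line:
            raise SyntaxError("unknown command: " + line)
    return output
```

Running `interpret('Print "Hello, world!"')` returns `["Hello, world!"]`; a compiler would instead translate the whole program to machine code once, before any of it ran.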
01: Basic Concepts of Electricity

It was discovered centuries ago that certain types of materials would mysteriously attract one another after being rubbed together. For example: after rubbing a piece of silk against a piece of glass, the silk and glass would tend to stick together. Indeed, there was an attractive force that could be demonstrated even when the two materials were separated: Glass and silk aren’t the only materials known to behave like this. Anyone who has ever brushed up against a latex balloon only to find that it tries to stick to them has experienced this same phenomenon. Paraffin wax and wool cloth are another pair of materials early experimenters recognized as manifesting attractive forces after being rubbed together: This phenomenon became even more interesting when it was discovered that identical materials, after having been rubbed with their respective cloths, always repelled each other: It was also noted that when a piece of glass rubbed with silk was exposed to a piece of wax rubbed with wool, the two materials would attract one another: Furthermore, it was found that any material demonstrating properties of attraction or repulsion after being rubbed could be classed into one of two distinct categories: attracted to glass and repelled by wax, or repelled by glass and attracted to wax. It was either one or the other: there were no materials found that would be attracted to or repelled by both glass and wax, or that reacted to one without reacting to the other. More attention was directed toward the pieces of cloth used to do the rubbing. It was discovered that after rubbing two pieces of glass with two pieces of silk cloth, not only did the glass pieces repel each other, but so did the cloths.
The same phenomenon held for the pieces of wool used to rub the wax: Now, this was really strange to witness. After all, none of these objects were visibly altered by the rubbing, yet they definitely behaved differently than before they were rubbed. Whatever change took place to make these materials attract or repel one another was invisible. Some experimenters speculated that invisible “fluids” were being transferred from one object to another during the process of rubbing, and that these “fluids” were able to effect a physical force over a distance. Charles Dufay was one of the early experimenters who demonstrated that there were definitely two different types of changes wrought by rubbing certain pairs of objects together. The fact that there was more than one type of change manifested in these materials was evident by the fact that there were two types of forces produced: attraction and repulsion. The hypothetical fluid transfer became known as a charge. One pioneering researcher, Benjamin Franklin, came to the conclusion that there was only one fluid exchanged between rubbed objects, and that the two different “charges” were nothing more than either an excess or a deficiency of that one fluid. After experimenting with wax and wool, Franklin suggested that the coarse wool removed some of this invisible fluid from the smooth wax, causing an excess of fluid on the wool and a deficiency of fluid on the wax. The resulting disparity in fluid content between the wool and wax would then cause an attractive force, as the fluid tried to regain its former balance between the two materials. Postulating the existence of a single “fluid” that was either gained or lost through rubbing accounted best for the observed behavior: that all these materials fell neatly into one of two categories when rubbed, and most importantly, that the two active materials rubbed against each other always fell into opposing categories as evidenced by their invariable attraction to one another. 
In other words, there was never a time when two materials rubbed against each other both became either positive or negative. Following Franklin’s speculation of the wool rubbing something off of the wax, the type of charge that was associated with rubbed wax became known as “negative” (because it was supposed to have a deficiency of fluid) while the type of charge associated with the rubbing wool became known as “positive” (because it was supposed to have an excess of fluid). Little did he know that his innocent conjecture would cause much confusion for students of electricity in the future! Precise measurements of electrical charge were carried out by the French physicist Charles Coulomb in the 1780s using a device called a torsional balance, which measured the force generated between two electrically charged objects. The results of Coulomb’s work led to the development of a unit of electrical charge named in his honor, the coulomb. If two “point” objects (hypothetical objects having no appreciable surface area) were equally charged to a measure of 1 coulomb, and placed 1 meter (approximately 1 yard) apart, they would generate a force of about 9 billion newtons (approximately 2 billion pounds), either attracting or repelling depending on the types of charges involved. The operational definition of a coulomb as the unit of electrical charge (in terms of force generated between point charges) was found to be equal to an excess or deficiency of about 6,250,000,000,000,000,000 electrons. Or, stated in reverse terms, one electron has a charge of about 0.00000000000000000016 coulombs. Because one electron is the smallest known carrier of electric charge, this last figure of charge for the electron is defined as the elementary charge.
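The figures quoted above can be checked with a quick calculation using Coulomb’s law, F = kq₁q₂/r². This is only an illustrative sketch; the value of Coulomb’s constant k ≈ 8.99 × 10⁹ N·m²/C² comes from standard physics references, not from the text itself.

```python
# Coulomb's law: F = k * q1 * q2 / r^2
# Checking the "9 billion newtons" figure for two 1 C point
# charges 1 meter apart.

k = 8.9875e9          # Coulomb's constant, N·m²/C² (standard value)
q1 = q2 = 1.0         # two point charges of 1 coulomb each
r = 1.0               # separation, meters

force = k * q1 * q2 / r**2
print(force)          # about 9 billion newtons, as the text states

# One coulomb corresponds to about 6.25e18 electrons, so the
# elementary charge works out to:
e = 1.0 / 6.25e18
print(e)              # about 1.6e-19 coulombs per electron
```

Running this confirms both numbers in the paragraph: a force on the order of 9 × 10⁹ newtons, and an elementary charge of about 1.6 × 10⁻¹⁹ coulombs.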
It was discovered much later that this “fluid” was actually composed of extremely small bits of matter called electrons, so named in honor of the ancient Greek word for amber: another material exhibiting charged properties when rubbed with cloth. Experimentation has since revealed that all objects are composed of extremely small “building-blocks” known as atoms, and that these atoms are in turn composed of smaller components known as particles. The three fundamental particles comprising most atoms are called protons, neutrons, and electrons. While most atoms contain a combination of protons, neutrons, and electrons, not all atoms have neutrons; one example is the protium isotope of hydrogen (1H, or Hydrogen-1), the lightest and most common form of hydrogen, which has only one proton and one electron. Atoms are far too small to be seen, but if we could look at one, it might appear something like this: Even though each atom in a piece of material tends to hold together as a unit, there’s actually a lot of empty space between the electrons and the cluster of protons and neutrons residing in the middle. This crude model is that of the element carbon, with six protons, six neutrons, and six electrons. In any atom, the protons and neutrons are very tightly bound together, which is an important quality. The tightly-bound clump of protons and neutrons in the center of the atom is called the nucleus, and the number of protons in an atom’s nucleus determines its elemental identity: change the number of protons in an atom’s nucleus, and you change the type of atom that it is. In fact, if you could remove three protons from the nucleus of an atom of lead, you would have achieved the old alchemists’ dream of producing an atom of gold! The tight binding of protons in the nucleus is responsible for the stable identity of chemical elements, and the failure of alchemists to achieve their dream.
Neutrons are much less influential on the chemical character and identity of an atom than protons, although they are just as hard to add to or remove from the nucleus, being so tightly bound. If neutrons are added or removed, the atom will still retain the same chemical identity, but its mass will change slightly and it may acquire strange nuclear properties such as radioactivity. However, electrons have significantly more freedom to move around in an atom than either protons or neutrons. In fact, they can be knocked out of their respective positions (even leaving the atom entirely!) by far less energy than what it takes to dislodge particles in the nucleus. If this happens, the atom still retains its chemical identity, but an important imbalance occurs. Electrons and protons are unique in the fact that they are attracted to one another over a distance. It is this attraction over distance which causes the attraction between rubbed objects, where electrons are moved away from their original atoms to reside around atoms of another object. Electrons tend to repel other electrons over a distance, as do protons with other protons. The only reason protons bind together in the nucleus of an atom is because of a much stronger force called the strong nuclear force, which has effect only at very short distances. Because of this attraction/repulsion behavior between individual particles, electrons and protons are said to have opposite electric charges. That is, each electron has a negative charge, and each proton a positive charge. In equal numbers within an atom, they counteract each other’s presence so that the net charge within the atom is zero. This is why the picture of a carbon atom had six electrons: to balance out the electric charge of the six protons in the nucleus.
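The charge bookkeeping described above reduces to simple arithmetic: an atom’s net charge is the difference between its proton and electron counts, multiplied by the elementary charge. Here is a minimal sketch; the `net_charge` helper and the specific counts are illustrative, not from the text.

```python
# Net charge of an atom = (protons - electrons) * elementary charge.
# The elementary charge value (1.602e-19 C) is the standard figure.

ELEMENTARY_CHARGE = 1.602e-19  # coulombs

def net_charge(protons, electrons):
    """Return the net electric charge of an atom, in coulombs."""
    return (protons - electrons) * ELEMENTARY_CHARGE

print(net_charge(6, 6))   # balanced carbon atom: zero net charge
print(net_charge(6, 7))   # one surplus electron: negative net charge
print(net_charge(6, 5))   # one missing electron: positive net charge
```

A carbon atom with six of each comes out electrically neutral, matching the six-electron picture in the text, while a surplus or deficiency of electrons leaves the atom negatively or positively charged.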
If electrons leave or extra electrons arrive, the atom’s net electric charge will be imbalanced, leaving the atom “charged” as a whole, causing it to interact with charged particles and other charged atoms nearby. Neutrons are neither attracted to nor repelled by electrons, protons, or even other neutrons, and are consequently categorized as having no charge at all. The process of electrons arriving or leaving is exactly what happens when certain combinations of materials are rubbed together: electrons from the atoms of one material are forced by the rubbing to leave their respective atoms and transfer over to the atoms of the other material. In other words, electrons comprise the “fluid” hypothesized by Benjamin Franklin. The result of an imbalance of this “fluid” (electrons) between objects is called static electricity. It is called “static” because the displaced electrons tend to remain stationary after being moved from one insulating material to another. In the case of wax and wool, it was determined through further experimentation that electrons in the wool actually transferred to the atoms in the wax, which is exactly opposite of Franklin’s conjecture! In honor of Franklin’s designation of the wax’s charge being “negative” and the wool’s charge being “positive,” electrons are said to have a “negative” charging influence. Thus, an object whose atoms have received a surplus of electrons is said to be negatively charged, while an object whose atoms are lacking electrons is said to be positively charged, as confusing as these designations may seem. By the time the true nature of electric “fluid” was discovered, Franklin’s nomenclature of electric charge was too well established to be easily changed, and so it remains to this day. Michael Faraday proved (1832) that static electricity was the same as that produced by a battery or a generator. Static electricity is, for the most part, a nuisance.
Black powder and smokeless powder have graphite added to prevent ignition due to static electricity. Static electricity also causes damage to sensitive semiconductor circuitry. While it is possible to produce motors powered by the high voltage and low current characteristic of static electricity, this is not economical. The few practical applications of static electricity include xerographic printing, the electrostatic air filter, and the high-voltage Van de Graaff generator.

Review
• All materials are made up of tiny “building blocks” known as atoms.
• All naturally occurring atoms contain particles called electrons, protons, and neutrons, with the exception of the protium isotope (1H) of hydrogen.
• Electrons have a negative (-) electric charge.
• Protons have a positive (+) electric charge.
• Neutrons have no electric charge.
• Electrons can be dislodged from atoms much more easily than protons or neutrons.
• The number of protons in an atom’s nucleus determines its identity as a unique element.
1.2: Conductors, Insulators, and Electron Flow
Conductors vs Insulators

In other types of materials such as glass, the atoms’ electrons have very little freedom to move around. While external forces such as physical rubbing can force some of these electrons to leave their respective atoms and transfer to the atoms of another material, they do not move between atoms within that material very easily. This relative mobility of electrons within a material is known as electric conductivity. Conductivity is determined by the types of atoms in a material (the number of protons in each atom’s nucleus, determining its chemical identity) and how the atoms are linked together with one another. Materials with high electron mobility (many free electrons) are called conductors, while materials with low electron mobility (few or no free electrons) are called insulators. Here are a few common examples of conductors and insulators: It must be understood that not all conductive materials have the same level of conductivity, and not all insulators are equally resistant to electron motion. Electrical conductivity is analogous to the transparency of certain materials to light: materials that easily “conduct” light are called “transparent,” while those that don’t are called “opaque.” However, not all transparent materials are equally conductive to light. Window glass is better than most plastics, and certainly better than “clear” fiberglass. So it is with electrical conductors, some being better than others. For instance, silver is the best conductor in the “conductors” list, offering easier passage for electrons than any other material cited. Dirty water and concrete are also listed as conductors, but these materials are substantially less conductive than any metal. It should also be understood that some materials experience changes in their electrical properties under different conditions. Glass, for instance, is a very good insulator at room temperature but becomes a conductor when heated to a very high temperature.
Gases such as air, normally insulating materials, also become conductive if heated to very high temperatures. Most metals become poorer conductors when heated, and better conductors when cooled. Many conductive materials become perfectly conductive (this is called superconductivity) at extremely low temperatures. Electron Flow / Electric Current While the normal motion of “free” electrons in a conductor is random, with no particular direction or speed, electrons can be influenced to move in a coordinated fashion through a conductive material. This uniform motion of electrons is what we call electricity or electric current. To be more precise, it could be called dynamic electricity in contrast to static electricity, which is an unmoving accumulation of electric charge. Just like water flowing through the emptiness of a pipe, electrons are able to move within the empty space within and between the atoms of a conductor. The conductor may appear to be solid to our eyes, but any material composed of atoms is mostly empty space! The liquid-flow analogy is so fitting that the motion of electrons through a conductor is often referred to as a “flow.” A noteworthy observation may be made here. As each electron moves uniformly through a conductor, it pushes on the one ahead of it, such that all the electrons move together as a group. The starting and stopping of electron flow through the length of a conductive path is virtually instantaneous from one end of a conductor to the other, even though the motion of each electron may be very slow. An approximate analogy is that of a tube filled end-to-end with marbles: The tube is full of marbles, just as a conductor is full of free electrons ready to be moved by an outside influence. If a single marble is suddenly inserted into this full tube on the left-hand side, another marble will immediately try to exit the tube on the right. 
Even though each marble only traveled a short distance, the transfer of motion through the tube is virtually instantaneous from the left end to the right end, no matter how long the tube is. With electricity, the overall effect from one end of a conductor to the other happens at the speed of light: a swift 186,000 miles per second! Each individual electron, though, travels through the conductor at a much slower pace.

Electron Flow Through Wire

If we want electrons to flow in a certain direction to a certain place, we must provide the proper path for them to move, just as a plumber must install piping to get water to flow where he or she wants it to flow. To facilitate this, wires are made of highly conductive metals such as copper or aluminum in a wide variety of sizes. Remember that electrons can flow only when they have the opportunity to move in the space between the atoms of a material. This means that there can be electric current only where there exists a continuous path of conductive material providing a conduit for electrons to travel through. In the marble analogy, marbles can flow into the left-hand side of the tube (and, consequently, through the tube) if and only if the tube is open on the right-hand side for marbles to flow out. If the tube is blocked on the right-hand side, the marbles will just “pile up” inside the tube, and marble “flow” will not occur. The same holds true for electric current: the continuous flow of electrons requires there be an unbroken path to permit that flow. Let’s look at a diagram to illustrate how this works: A thin, solid line (as shown above) is the conventional symbol for a continuous piece of wire. Since the wire is made of a conductive material, such as copper, its constituent atoms have many free electrons which can easily move through the wire. However, there will never be a continuous or uniform flow of electrons within this wire unless they have a place to come from and a place to go.
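The contrast described earlier, a signal that propagates at nearly the speed of light while each electron crawls, can be made concrete with a rough drift-velocity estimate, v = I / (n·q·A). This is an illustrative sketch only; the current, wire size, and the free-electron density of copper (roughly 8.5 × 10²⁸ per cubic meter, a standard textbook figure) are assumed values, not taken from this text.

```python
# Drift velocity of electrons in a wire: v = I / (n * q * A).
# Assumed, illustrative values: 1 ampere through a 1 mm² copper wire.

I = 1.0          # current, amperes (assumed)
n = 8.5e28       # free-electron density of copper, electrons/m³ (assumed)
q = 1.602e-19    # elementary charge, coulombs
A = 1.0e-6       # cross-sectional area, m² (1 mm², assumed)

v = I / (n * q * A)
print(v)         # on the order of 1e-4 m/s: a fraction of a millimeter per second
```

Even though the starting and stopping of flow is felt almost instantly along the whole wire, each electron in this scenario drifts less than a tenth of a millimeter per second, just as the marble analogy suggests.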
Let’s add a hypothetical electron “Source” and “Destination:” Now, with the Electron Source pushing new electrons into the wire on the left-hand side, electron flow through the wire can occur (as indicated by the arrows pointing from left to right). However, the flow will be interrupted if the conductive path formed by the wire is broken:

Electrical Continuity

Since air is an insulating material, and an air gap separates the two pieces of wire, the once-continuous path has now been broken, and electrons cannot flow from Source to Destination. This is like cutting a water pipe in two and capping off the broken ends of the pipe: water can’t flow if there’s no exit out of the pipe. In electrical terms, we had a condition of electrical continuity when the wire was in one piece, and now that continuity is broken with the wire cut and separated. If we were to take another piece of wire leading to the Destination and simply make physical contact with the wire leading to the Source, we would once again have a continuous path for electrons to flow. The two dots in the diagram indicate physical (metal-to-metal) contact between the wire pieces: Now, we have continuity from the Source, to the newly-made connection, down, to the right, and up to the Destination. This is analogous to putting a “tee” fitting in one of the capped-off pipes and directing water through a new segment of pipe to its destination. Please take note that the broken segment of wire on the right-hand side has no electrons flowing through it, because it is no longer part of a complete path from Source to Destination. It is interesting to note that no “wear” occurs within wires due to this electric current, unlike water-carrying pipes which are eventually corroded and worn by prolonged flows. Electrons do encounter some degree of friction as they move, however, and this friction can generate heat in a conductor. This is a topic we’ll explore in much greater detail later.
Review
• In conductive materials, the outer electrons in each atom can easily come or go, and are called free electrons.
• In insulating materials, the outer electrons are not so free to move.
• All metals are electrically conductive.
• Dynamic electricity, or electric current, is the uniform motion of electrons through a conductor.
• Static electricity is unmoving (if on an insulator), accumulated charge formed by either an excess or deficiency of electrons in an object. It is typically formed by charge separation: the contact and separation of dissimilar materials.
• For electrons to flow continuously (indefinitely) through a conductor, there must be a complete, unbroken path for them to move both into and out of that conductor.

1.3: What Are Electric Circuits?

What Is a Circuit?

The answer to this paradox is found in the concept of a circuit: a never-ending looped pathway for electrons. If we take a wire, or many wires joined end-to-end, and loop it around so that it forms a continuous pathway, we have the means to support a uniform flow of electrons without having to resort to infinite Sources and Destinations: Each electron advancing clockwise in this circuit pushes on the one in front of it, which pushes on the one in front of it, and so on, and so on, just like a hula-hoop filled with marbles. Now, we have the capability of supporting a continuous flow of electrons indefinitely without the need for infinite electron supplies and dumps. All we need to maintain this flow is a continuous means of motivation for those electrons, which we’ll address in the next section of this chapter on voltage and current.

What Does It Mean When a Circuit Is Broken?

Continuity is just as important in a circuit as it is in a straight piece of wire.
Just as in the example with the straight piece of wire between the electron Source and Destination, any break in this circuit will prevent electrons from flowing through it: An important principle to realize here is that it doesn’t matter where the break occurs. Any discontinuity in the circuit will prevent electron flow throughout the entire circuit. Unless there is a continuous, unbroken loop of conductive material for electrons to flow through, a sustained flow simply cannot be maintained.

Review
• A circuit is an unbroken loop of conductive material that allows electrons to flow through continuously without beginning or end.
• If a circuit is “broken,” that means its conductive elements no longer form a complete path, and continuous electron flow cannot occur in it.
• The location of a break in a circuit is irrelevant to its inability to sustain continuous electron flow. Any break, anywhere in a circuit, prevents electron flow throughout the circuit.
1.4: Voltage and Current
If we take the examples of wax and wool which have been rubbed together, we find that the surplus of electrons in the wax (negative charge) and the deficit of electrons in the wool (positive charge) creates an imbalance of charge between them. This imbalance manifests itself as an attractive force between the two objects: If a conductive wire is placed between the charged wax and wool, electrons will flow through it, as some of the excess electrons in the wax rush through the wire to get back to the wool, filling the deficiency of electrons there: The imbalance of electrons between the atoms in the wax and the atoms in the wool creates a force between the two materials. With no path for electrons to flow from the wax to the wool, all this force can do is attract the two objects together. Now that a conductor bridges the insulating gap, however, the force will provoke electrons to flow in a uniform direction through the wire, if only momentarily, until the charge in that area neutralizes and the force between the wax and wool diminishes. The electric charge formed between these two materials by rubbing them together serves to store a certain amount of energy. This energy is not unlike the energy stored in a high reservoir of water that has been pumped from a lower-level pond: The influence of gravity on the water in the reservoir creates a force that attempts to move the water down to the lower level again. If a suitable pipe is run from the reservoir back to the pond, water will flow under the influence of gravity down from the reservoir, through the pipe: It takes energy to pump that water from the low-level pond to the high-level reservoir, and the movement of water through the piping back down to its original level constitutes a releasing of energy stored from previous pumping. 
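The reservoir analogy above can be checked with the familiar gravitational formula E = mgh: lifting the same water twice as high stores twice the energy, just as a greater charge imbalance stores more electrical energy. The masses and heights below are illustrative assumptions, not figures from the text.

```python
# Gravitational potential energy: E = m * g * h.
# Pumping the same mass of water twice as high stores twice the energy.

g = 9.81                 # gravitational acceleration, m/s²

def stored_energy(mass_kg, height_m):
    """Energy (joules) stored by lifting a mass to a given height."""
    return mass_kg * g * height_m

low = stored_energy(1000.0, 10.0)    # 1000 kg of water raised 10 m (assumed)
high = stored_energy(1000.0, 20.0)   # same water raised twice as high
print(low, high)                     # the higher reservoir stores double the energy
```

The doubling mirrors the next point in the text: pumping water to an even higher level takes more energy, and releases more energy when the water is allowed to flow back down.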
If the water is pumped to an even higher level, it will take even more energy to do so, thus more energy will be stored, and more energy released if the water is allowed to flow through a pipe back down again: Electrons are not much different. If we rub wax and wool together, we “pump” electrons away from their normal “levels,” creating a condition where a force exists between the wax and wool, as the electrons seek to re-establish their former positions (and balance within their respective atoms). The force attracting electrons back to their original positions around the positive nuclei of their atoms is analogous to the force gravity exerts on water in the reservoir, trying to draw it down to its former level. Just as the pumping of water to a higher level results in energy being stored, “pumping” electrons to create an electric charge imbalance results in a certain amount of energy being stored in that imbalance. And, just as providing a way for water to flow back down from the heights of the reservoir results in a release of that stored energy, providing a way for electrons to flow back to their original “levels” results in a release of stored energy. When the electrons are poised in that static condition (just like water sitting still, high in a reservoir), the energy stored there is called potential energy, because it has the possibility (potential) of release that has not been fully realized yet. When you scuff your rubber-soled shoes against a fabric carpet on a dry day, you create an imbalance of electric charge between yourself and the carpet. The action of scuffing your feet stores energy in the form of an imbalance of electrons forced from their original locations. This charge (static electricity) is stationary, and you won’t realize that energy is being stored at all. 
However, once you place your hand against a metal doorknob (with lots of electron mobility to neutralize your electric charge), that stored energy will be released in the form of a sudden flow of electrons through your hand, and you will perceive it as an electric shock! This potential energy, stored in the form of an electric charge imbalance and capable of provoking electrons to flow through a conductor, can be expressed as a term called voltage, which technically is a measure of potential energy per unit charge of electrons, or something a physicist would call specific potential energy. Defined in the context of static electricity, voltage is the measure of work required to move a unit charge from one location to another, against the force which tries to keep electric charges balanced. In the context of electrical power sources, voltage is the amount of potential energy available (work to be done) per unit charge, to move electrons through a conductor. Because voltage is an expression of potential energy, representing the possibility or potential for energy release as the electrons move from one “level” to another, it is always referenced between two points. Consider the water reservoir analogy: Because of the difference in the height of the drop, there’s potential for much more energy to be released from the reservoir through the piping to location 2 than to location 1. The principle can be intuitively understood in dropping a rock: which results in a more violent impact, a rock dropped from a height of one foot, or the same rock dropped from a height of one mile? Obviously, the drop of greater height results in greater energy released (a more violent impact). 
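The definition given above, voltage as potential energy per unit charge, can be written as the simple ratio V = W/Q. A minimal sketch, with hypothetical numbers chosen only for illustration:

```python
# Voltage is potential energy per unit charge: V = W / Q.
# Illustrative numbers: 9 joules of work available to move 1 coulomb
# of charge between two points means 9 volts between those points.

def voltage(work_joules, charge_coulombs):
    """Potential difference, in volts, between two points."""
    return work_joules / charge_coulombs

print(voltage(9.0, 1.0))    # 9.0 volts
print(voltage(9.0, 0.5))    # same energy carried by half the charge: 18.0 volts
```

Note that the same amount of energy concentrated on less charge means a higher voltage, which is why voltage is "specific" potential energy rather than total energy, and why it must always be referenced between two points.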
We cannot assess the amount of stored energy in a water reservoir simply by measuring the volume of water, any more than we can predict the severity of a falling rock’s impact simply from knowing the weight of the rock: in both cases we must also consider how far these masses will drop from their initial height. The amount of energy released by allowing a mass to drop is relative to the distance between its starting and ending points. Likewise, the potential energy available for moving electrons from one point to another is relative to those two points. Therefore, voltage is always expressed as a quantity between two points. Interestingly enough, the analogy of a mass potentially “dropping” from one height to another is such an apt model that voltage between two points is sometimes called a voltage drop. Voltage can be generated by means other than rubbing certain types of materials against each other. Chemical reactions, radiant energy, and the influence of magnetism on conductors are a few ways in which voltage may be produced. Respective examples of these three sources of voltage are batteries, solar cells, and generators (such as the “alternator” unit under the hood of your automobile). For now, we won’t go into detail as to how each of these voltage sources works; more important is that we understand how voltage sources can be applied to create electron flow in a circuit. Let’s take the symbol for a chemical battery and build a circuit step by step: Any source of voltage, including a battery, has two points for electrical contact. In this case, we have point 1 and point 2 in the above diagram. The horizontal lines of varying length indicate that this is a battery, and they further indicate the direction in which this battery’s voltage will try to push electrons through a circuit.
The fact that the horizontal lines in the battery symbol appear separated (and thus unable to serve as a path for electrons to move) is no cause for concern: in real life, those horizontal lines represent metallic plates immersed in a liquid or semi-solid material that not only conducts electrons, but also generates the voltage to push them along by interacting with the plates. Notice the little “+” and “-” signs to the immediate left of the battery symbol. The negative (-) end of the battery is always the end with the shortest dash, and the positive (+) end of the battery is always the end with the longest dash. Since we have decided to call electrons “negatively” charged (thanks, Ben!), the negative end of a battery is that end which tries to push electrons out of it. Likewise, the positive end is that end which tries to attract electrons. With the “+” and “-” ends of the battery not connected to anything, there will be voltage between those two points, but there will be no flow of electrons through the battery, because there is no continuous path for the electrons to move. The same principle holds true for the water reservoir and pump analogy: without a return pipe back to the pond, stored energy in the reservoir cannot be released in the form of water flow. Once the reservoir is completely filled up, no flow can occur, no matter how much pressure the pump may generate. There needs to be a complete path (circuit) for water to flow from the pond, to the reservoir, and back to the pond in order for continuous flow to occur. We can provide such a path for the battery by connecting a piece of wire from one end of the battery to the other. Forming a circuit with a loop of wire, we will initiate a continuous flow of electrons in a clockwise direction: So long as the battery continues to produce voltage and the continuity of the electrical path isn’t broken, electrons will continue to flow in the circuit. 
Following the metaphor of water moving through a pipe, this continuous, uniform flow of electrons through the circuit is called a current. So long as the voltage source keeps “pushing” in the same direction, the electron flow will continue to move in the same direction in the circuit. This single-direction flow of electrons is called a Direct Current, or DC. In the second volume of this book series, electric circuits are explored where the direction of current switches back and forth: Alternating Current, or AC. But for now, we’ll just concern ourselves with DC circuits. Because electric current is composed of individual electrons flowing in unison through a conductor by moving along and pushing on the electrons ahead, just like marbles through a tube or water through a pipe, the amount of flow throughout a single circuit will be the same at any point. If we were to monitor a cross-section of the wire in a single circuit, counting the electrons flowing by, we would notice the exact same quantity per unit of time as in any other part of the circuit, regardless of conductor length or conductor diameter. If we break the circuit’s continuity at any point, the electric current will cease in the entire loop, and the full voltage produced by the battery will be manifested across the break, between the wire ends that used to be connected: Notice the “+” and “-” signs drawn at the ends of the break in the circuit, and how they correspond to the “+” and “-” signs next to the battery’s terminals. These markers indicate the direction that the voltage attempts to push electron flow, that potential direction commonly referred to as polarity. Remember that voltage is always relative between two points. Because of this fact, the polarity of a voltage drop is also relative between two points: whether a point in a circuit gets labeled with a “+” or a “-” depends on the other point to which it is referenced. 
Take a look at the following circuit, where each corner of the loop is marked with a number for reference: With the circuit’s continuity broken between points 2 and 3, the polarity of the voltage dropped between points 2 and 3 is “-” for point 2 and “+” for point 3. The battery’s polarity (1 “-” and 4 “+”) is trying to push electrons through the loop clockwise from 1 to 2 to 3 to 4 and back to 1 again. Now let’s see what happens if we connect points 2 and 3 back together again, but place a break in the circuit between points 3 and 4: With the break between 3 and 4, the polarity of the voltage drop between those two points is “+” for 4 and “-” for 3. Take special note of the fact that point 3’s “sign” is opposite of that in the first example, where the break was between points 2 and 3 (where point 3 was labeled “+”). It is impossible for us to say that point 3 in this circuit will always be either “+” or “-”, because polarity, like voltage itself, is not specific to a single point, but is always relative between two points!

Review
• Electrons can be motivated to flow through a conductor by the same force manifested in static electricity.
• Voltage is the measure of specific potential energy (potential energy per unit charge) between two locations. In layman’s terms, it is the measure of “push” available to motivate electrons.
• Voltage, as an expression of potential energy, is always relative between two locations, or points. Sometimes it is called a voltage “drop.”
• When a voltage source is connected to a circuit, the voltage will cause a uniform flow of electrons through that circuit called a current.
• In a single (one loop) circuit, the amount of current at any point is the same as the amount of current at any other point.
• If a circuit containing a voltage source is broken, the full voltage of that source will appear across the points of the break.
• The +/- orientation of a voltage drop is called the polarity. It is also relative between two points.
Electron Flow Through the Filament of a Lamp

One practical and popular use of electric current is for the operation of electric lighting. The simplest form of electric lamp is a tiny metal “filament” inside of a clear glass bulb, which glows white-hot (“incandesces”) with heat energy when sufficient electric current passes through it. Like the battery, it has two conductive connection points, one for electrons to enter and the other for electrons to exit. Connected to a source of voltage, an electric lamp circuit looks something like this: As the electrons work their way through the thin metal filament of the lamp, they encounter more opposition to motion than they typically would in a thick piece of wire. This opposition to electric current depends on the type of material, its cross-sectional area, and its temperature. It is technically known as resistance. (It can be said that conductors have low resistance and insulators have very high resistance.) This resistance serves to limit the amount of current through the circuit with a given amount of voltage supplied by the battery, as compared with the “short circuit” where we had nothing but a wire joining one end of the voltage source (battery) to the other. When electrons move against the opposition of resistance, “friction” is generated. Just like mechanical friction, the friction produced by electrons flowing against a resistance manifests itself in the form of heat. The concentrated resistance of a lamp’s filament results in a relatively large amount of heat energy dissipated at that filament. This heat energy is enough to cause the filament to glow white-hot, producing light, whereas the wires connecting the lamp to the battery (which have much lower resistance) hardly even get warm while conducting the same amount of current. As in the case of the short circuit, if the continuity of the circuit is broken at any point, electron flow stops throughout the entire circuit.
With a lamp in place, this means that it will stop glowing: As before, with no flow of electrons, the entire potential (voltage) of the battery is available across the break, waiting for the opportunity of a connection to bridge across that break and permit electron flow again. This condition is known as an open circuit, where a break in the continuity of the circuit prevents current throughout. All it takes is a single break in continuity to “open” a circuit. Once any breaks have been connected once again and the continuity of the circuit re-established, it is known as a closed circuit.

The Basis for Switching Lamps

What we see here is the basis for switching lamps on and off by remote switches. Because any break in a circuit’s continuity results in current stopping throughout the entire circuit, we can use a device designed to intentionally break that continuity (called a switch), mounted at any convenient location that we can run wires to, to control the flow of electrons in the circuit: This is how a switch mounted on the wall of a house can control a lamp that is mounted down a long hallway, or even in another room, far away from the switch. The switch itself is constructed of a pair of conductive contacts (usually made of some kind of metal) forced together by a mechanical lever actuator or pushbutton. When the contacts touch each other, electrons are able to flow from one to the other and the circuit’s continuity is established; when the contacts are separated, electron flow from one to the other is prevented by the insulation of the air between, and the circuit’s continuity is broken.

The Knife Switch

Perhaps the best kind of switch to show for illustration of the basic principle is the “knife” switch: A knife switch is nothing more than a conductive lever, free to pivot on a hinge, coming into physical contact with one or more stationary contact points which are also conductive.
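The rule just described, that a single break anywhere in a series loop stops current everywhere in it, can be sketched in a few lines of Python. This is a minimal model of my own, not something from the text; the function name is an arbitrary choice.

```python
# Minimal sketch: a one-loop circuit conducts only if every element in
# the loop provides continuity. One open switch anywhere stops current
# throughout the entire circuit.

def circuit_is_closed(switches):
    """True only if every switch in the series loop is closed."""
    return all(switches)

# A hallway lamp controlled by one wall switch:
print(circuit_is_closed([True]))    # switch closed -> lamp glows (True)
print(circuit_is_closed([False]))   # switch open   -> lamp dark  (False)

# With several switches in the loop, any single open one suffices
# to break continuity for the whole circuit:
print(circuit_is_closed([True, False, True]))   # False
```

The design mirrors the text's point exactly: continuity is a property of the entire loop, so the model needs no information about *where* in the loop the break occurs.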
The switch shown in the above illustration is constructed on a porcelain base (an excellent insulating material), using copper (an excellent conductor) for the “blade” and contact points. The handle is plastic to insulate the operator’s hand from the conductive blade of the switch when opening or closing it. Here is another type of knife switch, with two stationary contacts instead of one: The particular knife switch shown here has one “blade” but two stationary contacts, meaning that it can make or break more than one circuit. For now the details are not terribly important; what matters is the basic concept of what a switch is and how it works. Knife switches are great for illustrating the basic principle of how a switch works, but they present distinct safety problems when used in high-power electric circuits. The exposed conductors in a knife switch make accidental contact with the circuit a distinct possibility, and any sparking that may occur between the moving blade and the stationary contact is free to ignite any nearby flammable materials. Most modern switch designs have their moving conductors and contact points sealed inside an insulating case in order to mitigate these hazards. A photograph of a few modern switch types shows how the switching mechanisms are much more concealed than with the knife design:

Opened and Closed Circuits

In keeping with the “open” and “closed” terminology of circuits, a switch that is making contact from one connection terminal to the other (example: a knife switch with the blade fully touching the stationary contact point) provides continuity for electrons to flow through, and is called a closed switch. Conversely, a switch that is breaking continuity (example: a knife switch with the blade not touching the stationary contact point) won’t allow electrons to pass through and is called an open switch.
This terminology is often confusing to the new student of electronics, because the words “open” and “closed” are commonly understood in the context of a door, where “open” is equated with free passage and “closed” with blockage. With electrical switches, these terms have opposite meaning: “open” means no flow while “closed” means free passage of electrons.

Review
• Resistance is the measure of opposition to electric current.
• A short circuit is an electric circuit offering little or no resistance to the flow of electrons. Short circuits are dangerous with high voltage power sources because the high currents encountered can cause large amounts of heat energy to be released.
• An open circuit is one where the continuity has been broken by an interruption in the path for electrons to flow.
• A closed circuit is one that is complete, with good continuity throughout.
• A device designed to open or close a circuit under controlled conditions is called a switch.
• The terms “open” and “closed” refer to switches as well as entire circuits. An open switch is one without continuity: electrons cannot flow through it. A closed switch is one that provides a direct (low resistance) path for electrons to flow through.

1.06: Voltage and Current in a Practical Circuit

Because it takes energy to force electrons to flow against the opposition of a resistance, there will be voltage manifested (or “dropped”) between any points in a circuit with resistance between them. It is important to note that although the amount of current (the quantity of electrons moving past a given point every second) is uniform in a simple circuit, the amount of voltage (potential energy per unit charge) between different sets of points in a single circuit may vary considerably: Take this circuit as an example.
If we label four points in this circuit with the numbers 1, 2, 3, and 4, we will find that the amount of current conducted through the wire between points 1 and 2 is exactly the same as the amount of current conducted through the lamp (between points 2 and 3). This same quantity of current passes through the wire between points 3 and 4, and through the battery (between points 1 and 4). However, we will find the voltage appearing between any two of these points to be directly proportional to the resistance within the conductive path between those two points, given that the amount of current along any part of the circuit’s path is the same (which, for this simple circuit, it is). In a normal lamp circuit, the resistance of a lamp will be much greater than the resistance of the connecting wires, so we should expect to see a substantial amount of voltage between points 2 and 3, with very little between points 1 and 2, or between 3 and 4. The voltage between points 1 and 4, of course, will be the full amount of “force” offered by the battery, which will be only slightly greater than the voltage across the lamp (between points 2 and 3). This, again, is analogous to the water reservoir system: Between points 2 and 3, where the falling water is releasing energy at the water-wheel, there is a difference of pressure between the two points, reflecting the opposition to the flow of water through the water-wheel. From point 1 to point 2, or from point 3 to point 4, where water is flowing freely through reservoirs with little opposition, there is little or no difference of pressure (no potential energy). However, the rate of water flow in this continuous system is the same everywhere (assuming the water levels in both pond and reservoir are unchanging): through the pump, through the water-wheel, and through all the pipes. 
So it is with simple electric circuits: the rate of electron flow is the same at every point in the circuit, although voltages may differ between different sets of points.
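This relationship can be sketched numerically. The resistance and current figures below are purely hypothetical (the text gives no values), chosen only to mirror a lamp fed through low-resistance wires.

```python
# Hypothetical numbers only: one loop, so the current is identical
# through every segment, and each segment drops a voltage proportional
# to its resistance (V = I * R).

current = 2.0          # amps, the same at points 1, 2, 3, and 4
r_wire_1_2 = 0.01      # ohms, wire from point 1 to point 2
r_lamp_2_3 = 3.0       # ohms, lamp filament between points 2 and 3
r_wire_3_4 = 0.01      # ohms, wire from point 3 to point 4

v_wire_1_2 = current * r_wire_1_2   # tiny drop across low-resistance wire
v_lamp_2_3 = current * r_lamp_2_3   # substantial drop across the lamp
v_wire_3_4 = current * r_wire_3_4   # tiny drop again

# The battery (between points 1 and 4) supplies the sum of all drops,
# only slightly greater than the lamp's voltage alone:
v_battery = v_wire_1_2 + v_lamp_2_3 + v_wire_3_4
print(v_lamp_2_3)   # 6.0 volts across the lamp
print(v_battery)    # just over 6.0 volts total
```

Note how the numbers reproduce the text's claim: nearly all of the battery's voltage appears across the lamp, with only a tiny fraction across each connecting wire, even though the same 2 amps flows through every segment.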
Positive and Negative Electron Charge

When Benjamin Franklin made his conjecture regarding the direction of charge flow (from the smooth wax to the rough wool), he set a precedent for electrical notation that exists to this day, despite the fact that we know electrons are the constituent units of charge, and that they are displaced from the wool to the wax—not from the wax to the wool—when those two substances are rubbed together. This is why electrons are said to have a negative charge: because Franklin assumed electric charge moved in the opposite direction that it actually does, and so objects he called “negative” (representing a deficiency of charge) actually have a surplus of electrons. By the time the true direction of electron flow was discovered, the nomenclature of “positive” and “negative” had already been so well established in the scientific community that no effort was made to change it, although calling electrons “positive” would make more sense in referring to “excess” charge. You see, the terms “positive” and “negative” are human inventions, and as such have no absolute meaning beyond our own conventions of language and scientific description. Franklin could have just as easily referred to a surplus of charge as “black” and a deficiency as “white,” in which case scientists would speak of electrons having a “white” charge (assuming the same incorrect conjecture of charge position between wax and wool).

Conventional Flow Notation

However, because we tend to associate the word “positive” with “surplus” and “negative” with “deficiency,” the standard label for electron charge does seem backward. Because of this, many engineers decided to retain the old concept of electricity with “positive” referring to a surplus of charge, and label charge flow (current) accordingly. This became known as conventional flow notation:

Electron Flow Notation

Others chose to designate charge flow according to the actual motion of electrons in a circuit.
This form of symbology became known as electron flow notation: In conventional flow notation, we show the motion of charge according to the (technically incorrect) labels of + and -. This way the labels make sense, but the direction of charge flow is incorrect. In electron flow notation, we follow the actual motion of electrons in the circuit, but the + and - labels seem backward. Does it matter, really, how we designate charge flow in a circuit? Not really, so long as we’re consistent in the use of our symbols. You may follow an imagined direction of current (conventional flow) or the actual (electron flow) with equal success insofar as circuit analysis is concerned. Concepts of voltage, current, resistance, continuity, and even mathematical treatments such as Ohm’s Law (chapter 2) and Kirchhoff’s Laws (chapter 6) remain just as valid with either style of notation.

Conventional Flow Notation vs Electron Flow Notation

You will find conventional flow notation followed by most electrical engineers, and illustrated in most engineering textbooks. Electron flow is most often seen in introductory textbooks (this one included) and in the writings of professional scientists, especially solid-state physicists who are concerned with the actual motion of electrons in substances. These preferences are cultural, in the sense that certain groups of people have found it advantageous to envision electric current motion in certain ways. Being that most analyses of electric circuits do not depend on a technically accurate depiction of charge flow, the choice between conventional flow notation and electron flow notation is arbitrary . . . almost.

Polarization and Nonpolarization

Many electrical devices tolerate real currents of either direction with no difference in operation. Incandescent lamps (the type utilizing a thin metal filament that glows white-hot with sufficient current), for example, produce light with equal efficiency regardless of current direction.
They even function well on alternating current (AC), where the direction changes rapidly over time. Conductors and switches operate irrespective of current direction, as well. The technical term for this irrelevance of charge flow is nonpolarization. We could say then, that incandescent lamps, switches, and wires are nonpolarized components. Conversely, any device that functions differently on currents of different direction would be called a polarized device. There are many such polarized devices used in electric circuits. Most of them are made of so-called semiconductor substances, and as such aren’t examined in detail until the third volume of this book series. Like switches, lamps, and batteries, each of these devices is represented in a schematic diagram by a unique symbol. As one might guess, polarized device symbols typically contain an arrow within them, somewhere, to designate a preferred or exclusive direction of current. This is where the competing notations of conventional and electron flow really matter. Because engineers from long ago have settled on conventional flow as their “culture’s” standard notation, and because engineers are the same people who invent electrical devices and the symbols representing them, the arrows used in these devices’ symbols all point in the direction of conventional flow, not electron flow. That is to say, all of these devices’ symbols have arrow marks that point against the actual flow of electrons through them. Perhaps the best example of a polarized device is the diode. A diode is a one-way “valve” for electric current, analogous to a check valve for those familiar with plumbing and hydraulic systems. Ideally, a diode provides unimpeded flow for current in one direction (little or no resistance), but prevents flow in the other direction (infinite resistance). 
Its schematic symbol looks like this: Placed within a battery/lamp circuit, its operation is as such: When the diode is facing in the proper direction to permit current, the lamp glows. Otherwise, the diode blocks all electron flow just like a break in the circuit, and the lamp will not glow. If we label the circuit current using conventional flow notation, the arrow symbol of the diode makes perfect sense: the triangular arrowhead points in the direction of charge flow, from positive to negative: On the other hand, if we use electron flow notation to show the true direction of electron travel around the circuit, the diode’s arrow symbology seems backward: For this reason alone, many people choose to make conventional flow their notation of choice when drawing the direction of charge motion in a circuit. If for no other reason, the symbols associated with semiconductor components like diodes make more sense this way. However, others choose to show the true direction of electron travel so as to avoid having to tell themselves, “just remember the electrons are actually moving the other way” whenever the true direction of electron motion becomes an issue.

Which One Should You Use?

In this series of textbooks, I have committed to using electron flow notation. Ironically, this was not my first choice. I found it much easier when I was first learning electronics to use conventional flow notation, primarily because of the directions of semiconductor device symbol arrows. Later, when I began my first formal training in electronics, my instructor insisted on using electron flow notation in his lectures. In fact, he asked that we take our textbooks (which were illustrated using conventional flow notation) and use our pens to change the directions of all the current arrows so as to point the “correct” way! His preference was not arbitrary, though. In his 20-year career as a U.S. Navy electronics technician, he worked on a lot of vacuum-tube equipment.
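The diode's one-way behavior can be captured in a toy model. This is my own sketch, not the book's; the direction names are arbitrary labels, and the diode is treated as ideal (zero resistance one way, infinite the other), as described above.

```python
# Toy model of an ideal diode in a battery/lamp circuit: current passes
# only when its direction agrees with the diode's orientation (stated
# here in conventional flow terms, matching the symbol's arrow).

def lamp_glows(current_direction, diode_direction):
    """Ideal diode: unimpeded flow one way, complete blockage the other."""
    return current_direction == diode_direction

print(lamp_glows("clockwise", "clockwise"))          # True  -> lamp glows
print(lamp_glows("clockwise", "counterclockwise"))   # False -> lamp dark
```

Reversing either the battery or the diode flips the result, just as flipping a check valve in a water pipe stops the flow.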
Before the advent of semiconductor components like transistors, devices known as vacuum tubes or electron tubes were used to amplify small electrical signals. These devices work on the phenomenon of electrons hurtling through a vacuum, their rate of flow controlled by voltages applied between metal plates and grids placed within their path, and are best understood when visualized using electron flow notation. When I graduated from that training program, I went back to my old habit of conventional flow notation, primarily for the sake of minimizing confusion with component symbols, since vacuum tubes are all but obsolete except in special applications. Collecting notes for the writing of this book, I had full intention of illustrating it using conventional flow. Years later, when I became a teacher of electronics, the curriculum for the program I was going to teach had already been established around the notation of electron flow. Oddly enough, this was due in part to the legacy of my first electronics instructor (the 20-year Navy veteran), but that’s another story entirely! Not wanting to confuse students by teaching “differently” from the other instructors, I had to overcome my habit and get used to visualizing electron flow instead of conventional. Because I wanted my book to be a useful resource for my students, I begrudgingly changed plans and illustrated it with all the arrows pointing the “correct” way. Oh well, sometimes you just can’t win! On a positive note (no pun intended), I have subsequently discovered that some students prefer electron flow notation when first learning about the behavior of semiconductive substances. Also, the habit of visualizing electrons flowing against the arrows of polarized device symbols isn’t that difficult to learn, and in the end I’ve found that I can follow the operation of a circuit equally well using either mode of notation. 
Still, I sometimes wonder if it would all be much easier if we went back to the source of the confusion—Ben Franklin’s errant conjecture—and fixed the problem there, calling electrons “positive” and protons “negative.”
• 2.1: Ohm’s Law - How Voltage, Current, and Resistance Relate The first, and perhaps most important, relationship between current, voltage, and resistance is called Ohm’s Law, discovered by Georg Simon Ohm and published in his 1827 paper, The Galvanic Circuit Investigated Mathematically.
• 2.2: An Analogy for Ohm’s Law Ohm’s Law also makes intuitive sense if you apply it to the water-and-pipe analogy. If we have a water pump that exerts pressure (voltage) to push water around a “circuit” (current) through a restriction (resistance), we can model how the three variables interrelate. If the resistance to water flow stays the same and the pump pressure increases, the flow rate must also increase.
• 2.3: Power in Electric Circuits Power is the measure of how much work can be done in a given amount of time. Mechanical power is commonly measured (in America) in “horsepower.” Electrical power is almost always measured in “watts,” and it can be calculated by the formula P = IE. Electrical power is a product of both voltage and current, not either one separately.
• 2.4: Calculating Electric Power Power is measured in watts, symbolized by the letter “W”.
• 2.5: Resistors Because the relationship between voltage, current, and resistance in any circuit is so regular, we can reliably control any variable in a circuit simply by controlling the other two. Perhaps the easiest variable in any circuit to control is its resistance. This can be done by changing the material, size, and shape of its conductive components (remember how the thin metal filament of a lamp created more electrical resistance than a thick wire?).
• 2.6: Nonlinear Conduction Ohm’s Law is a simple and powerful mathematical tool for helping us analyze electric circuits, but it has limitations, and we must understand these limitations in order to properly apply it to real circuits. For most conductors, resistance is a rather stable property, largely unaffected by voltage or current. For this reason we can regard the resistance of many circuit components as a constant, with voltage and current being directly related to each other.
• 2.7: Circuit Wiring
• 2.8: Polarity of Voltage Drops
• 2.9: Computer Simulation of Electric Circuits Computers can be powerful tools if used properly, especially in the realms of science and engineering. Software exists for the simulation of electric circuits by computer, and these programs can be very useful in helping circuit designers test ideas before actually building real circuits, saving much time and money.

02: Ohm’s Law

Voltage, Current, and Resistance

An electric circuit is formed when a conductive path is created to allow free electrons to continuously move. This continuous movement of free electrons through the conductors of a circuit is called a current, and it is often referred to in terms of “flow,” just like the flow of a liquid through a hollow pipe. The force motivating electrons to “flow” in a circuit is called voltage. Voltage is a specific measure of potential energy that is always relative between two points. When we speak of a certain amount of voltage being present in a circuit, we are referring to the measurement of how much potential energy exists to move electrons from one particular point in that circuit to another particular point. Without reference to two particular points, the term “voltage” has no meaning. Free electrons tend to move through conductors with some degree of friction, or opposition to motion. This opposition to motion is more properly called resistance. The amount of current in a circuit depends on the amount of voltage available to motivate the electrons, and also the amount of resistance in the circuit to oppose electron flow. Just like voltage, resistance is a quantity relative between two points. For this reason, the quantities of voltage and resistance are often stated as being “between” or “across” two points in a circuit.
Units of Measurement: Volt, Amp, and Ohm

To be able to make meaningful statements about these quantities in circuits, we need to be able to describe their quantities in the same way that we might quantify mass, temperature, volume, length, or any other kind of physical quantity. For mass, we might use the units of “kilogram” or “gram.” For temperature, we might use degrees Fahrenheit or degrees Celsius. Here are the standard units of measurement for electrical current, voltage, and resistance:

Quantity     Symbol    Unit of Measurement    Unit Abbreviation
Current      I         Ampere (“Amp”)         A
Voltage      E or V    Volt                   V
Resistance   R         Ohm                    Ω

The “symbol” given for each quantity is the standard alphabetical letter used to represent that quantity in an algebraic equation. Standardized letters like these are common in the disciplines of physics and engineering, and are internationally recognized. The “unit abbreviation” for each quantity represents the alphabetical symbol used as a shorthand notation for its particular unit of measurement. And, yes, that strange-looking “horseshoe” symbol is the capital Greek letter Ω, just a character in a foreign alphabet (apologies to any Greek readers here). Each unit of measurement is named after a famous experimenter in electricity: The amp after the Frenchman Andre M. Ampere, the volt after the Italian Alessandro Volta, and the ohm after the German Georg Simon Ohm. The mathematical symbol for each quantity is meaningful as well. The “R” for resistance and the “V” for voltage are both self-explanatory, whereas “I” for current seems a bit weird. The “I” is thought to have been meant to represent “Intensity” (of electron flow), and the other symbol for voltage, “E,” stands for “Electromotive force.” From what research I’ve been able to do, there seems to be some dispute over the meaning of “I.” The symbols “E” and “V” are interchangeable for the most part, although some texts reserve “E” to represent voltage across a source (such as a battery or generator) and “V” to represent voltage across anything else.
All of these symbols are expressed using capital letters, except in cases where a quantity (especially voltage or current) is described in terms of a brief period of time (called an “instantaneous” value). For example, the voltage of a battery, which is stable over a long period of time, will be symbolized with a capital letter “E,” while the voltage peak of a lightning strike at the very instant it hits a power line would most likely be symbolized with a lower-case letter “e” (or lower-case “v”) to designate that value as being at a single moment in time. This same lower-case convention holds true for current as well, the lower-case letter “i” representing current at some instant in time. Most direct-current (DC) measurements, however, being stable over time, will be symbolized with capital letters.

Coulomb and Electric Charge

One foundational unit of electrical measurement, often taught at the beginning of electronics courses but used infrequently afterwards, is the unit of the coulomb, which is a measure of electric charge proportional to the number of electrons in an imbalanced state. One coulomb of charge is equal to 6,250,000,000,000,000,000 electrons. The symbol for electric charge quantity is the capital letter “Q,” with the unit of coulombs abbreviated by the capital letter “C.” It so happens that the unit for electron flow, the amp, is equal to 1 coulomb of electrons passing by a given point in a circuit in 1 second of time. Cast in these terms, current is the rate of electric charge motion through a conductor. As stated before, voltage is the measure of potential energy per unit charge available to motivate electrons from one point to another. Before we can precisely define what a “volt” is, we must understand how to measure this quantity we call “potential energy.” The general metric unit for energy of any kind is the joule, equal to the amount of work performed by a force of 1 newton exerted through a motion of 1 meter (in the same direction).
In British units, this is slightly less than 3/4 pound of force exerted over a distance of 1 foot. Put in common terms, it takes about 1 joule of energy to lift a 3/4 pound weight 1 foot off the ground, or to drag something a distance of 1 foot using a parallel pulling force of 3/4 pound. Defined in these scientific terms, 1 volt is equal to 1 joule of electric potential energy per (divided by) 1 coulomb of charge. Thus, a 9 volt battery releases 9 joules of energy for every coulomb of electrons moved through a circuit. These units and symbols for electrical quantities will become very important to know as we begin to explore the relationships between them in circuits.

The Ohm’s Law Equation

Ohm’s principal discovery was that the amount of electric current through a metal conductor in a circuit is directly proportional to the voltage impressed across it, for any given temperature. Ohm expressed his discovery in the form of a simple equation, describing how voltage, current, and resistance interrelate:

E = IR

In this algebraic expression, voltage (E) is equal to current (I) multiplied by resistance (R). Using algebra techniques, we can manipulate this equation into two variations, solving for I and for R, respectively:

I = E/R ; R = E/I

Analyzing Simple Circuits with Ohm’s Law

Let’s see how these equations might work to help us analyze simple circuits: In the above circuit, there is only one source of voltage (the battery, on the left) and only one source of resistance to current (the lamp, on the right). This makes it very easy to apply Ohm’s Law. If we know the values of any two of the three quantities (voltage, current, and resistance) in this circuit, we can use Ohm’s Law to determine the third. In this first example, we will calculate the amount of current (I) in a circuit, given values of voltage (E) and resistance (R): What is the amount of current (I) in this circuit?
In this second example, we will calculate the amount of resistance (R) in a circuit, given values of voltage (E) and current (I): What is the amount of resistance (R) offered by the lamp? In the last example, we will calculate the amount of voltage supplied by a battery, given values of current (I) and resistance (R): What is the amount of voltage provided by the battery? Ohm’s Law is a very simple and useful tool for analyzing electric circuits. It is used so often in the study of electricity and electronics that it needs to be committed to memory by the serious student. For those who are not yet comfortable with algebra, there’s a trick to remembering how to solve for any one quantity, given the other two. First, arrange the letters E, I, and R in a triangle like this: If you know E and I, and wish to determine R, just eliminate R from the picture and see what’s left: If you know E and R, and wish to determine I, eliminate I and see what’s left: Lastly, if you know I and R, and wish to determine E, eliminate E and see what’s left: Eventually, you’ll have to be familiar with algebra to seriously study electricity and electronics, but this tip can make your first calculations a little easier to remember. If you are comfortable with algebra, all you need to do is commit E=IR to memory and derive the other two formulae from that when you need them!

Review
• Voltage is measured in volts, symbolized by the letters “E” or “V”.
• Current is measured in amps, symbolized by the letter “I”.
• Resistance is measured in ohms, symbolized by the letter “R”.
• Ohm’s Law: E = IR ; I = E/R ; R = E/I
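The three forms of Ohm's Law can also be sketched directly in code. The numeric values below are hypothetical stand-ins of my own, since the worked examples' figures are not reproduced in the text.

```python
# The three forms of Ohm's Law: E = IR, I = E/R, R = E/I.
# Units: volts, amps, ohms. All example values are hypothetical.

def current(e, r):       # I = E / R
    return e / r

def resistance(e, i):    # R = E / I
    return e / i

def voltage(i, r):       # E = I * R
    return i * r

# Suppose a 36 volt battery pushes current through a 9 ohm lamp:
i = current(36.0, 9.0)       # solve for I, knowing E and R
r = resistance(36.0, 4.0)    # solve for R, knowing E and I
e = voltage(4.0, 9.0)        # solve for E, knowing I and R
print(i, r, e)               # 4.0 9.0 36.0
```

Note that the three results are mutually consistent: any one quantity recovered from the other two agrees with the pair you started from, which is exactly why the "triangle" memory aid works.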
Ohm’s Law also makes intuitive sense if you apply it to the water-and-pipe analogy. If we have a water pump that exerts pressure (voltage) to push water around a “circuit” (current) through a restriction (resistance), we can model how the three variables interrelate. If the resistance to water flow stays the same and the pump pressure increases, the flow rate must also increase. If the pressure stays the same and the resistance increases (making it more difficult for the water to flow), then the flow rate must decrease: If the flow rate were to stay the same while the resistance to flow decreased, the required pressure from the pump would necessarily decrease: As odd as it may seem, the actual mathematical relationship between pressure, flow, and resistance is actually more complex for fluids like water than it is for electrons. If you pursue further studies in physics, you will discover this for yourself. Thankfully for the electronics student, the mathematics of Ohm’s Law is very straightforward and simple. Review • With resistance steady, current follows voltage (an increase in voltage means an increase in current, and vice versa). • With voltage steady, changes in current and resistance are opposite (an increase in current means a decrease in resistance, and vice versa). • With current steady, voltage follows resistance (an increase in resistance means an increase in voltage). 2.03: Power in Electric Circuits In addition to voltage and current, there is another measure of free electron activity in a circuit: power. First, we need to understand just what power is before we analyze it in any circuits. Power is a measure of how much work can be performed in a given amount of time. Work is generally defined in terms of the lifting of a weight against the pull of gravity. The heavier the weight and/or the higher it is lifted, the more work has been done. Power is a measure of how rapidly a standard amount of work is done. 
For American automobiles, engine power is rated in a unit called “horsepower,” invented initially as a way for steam engine manufacturers to quantify the working ability of their machines in terms of the most common power source of their day: horses. One horsepower is defined in British units as 550 ft-lbs of work per second of time. The power of a car’s engine won’t indicate how tall of a hill it can climb or how much weight it can tow, but it will indicate how fast it can climb a specific hill or tow a specific weight. The power of a mechanical engine is a function of both the engine’s speed and its torque provided at the output shaft. Speed of an engine’s output shaft is measured in revolutions per minute, or RPM. Torque is the amount of twisting force produced by the engine, and it is usually measured in pound-feet, or lb-ft (not to be confused with foot-pounds or ft-lbs, which is the unit for work). Neither speed nor torque alone is a measure of an engine’s power. A 100 horsepower diesel tractor engine will turn relatively slowly, but provide great amounts of torque. A 100 horsepower motorcycle engine will turn very fast, but provide relatively little torque. Both will produce 100 horsepower, but at different speeds and different torques. The equation for shaft horsepower is simple: $\text{Horsepower} = \dfrac{2 \pi ST}{33,000}$ where $S$ is the shaft speed in rpm and $T$ is the shaft torque in lb-ft. Notice how there are only two variable terms on the right-hand side of the equation, $S$ and $T$. All the other terms on that side are constant: 2, pi, and 33,000 are all constants (they do not change in value). The horsepower varies only with changes in speed and torque, nothing else. We can re-write the equation to show this relationship: $\text{Horsepower} \propto ST$ where $\propto$ means "proportional to". 
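The exact shaft-horsepower equation translates directly into code. The engine speed and torque figures below are invented for illustration (two hypothetical ways of producing 100 horsepower), not taken from any real engine:

```python
import math

def shaft_horsepower(speed_rpm, torque_lbft):
    """Horsepower = 2*pi*S*T / 33,000, with S in RPM and T in lb-ft."""
    return 2 * math.pi * speed_rpm * torque_lbft / 33_000

# A slow, high-torque "diesel" and a fast, low-torque "motorcycle"
# engine (hypothetical numbers chosen to land near 100 HP each):
diesel = shaft_horsepower(1200, 437.75)    # slow shaft, large torque
bike   = shaft_horsepower(10_000, 52.53)   # fast shaft, small torque
print(round(diesel), round(bike))          # both come out near 100 HP
```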
Because the unit of the “horsepower” doesn’t coincide exactly with speed in revolutions per minute multiplied by torque in pound-feet, we can’t say that horsepower equals ST. However, they are proportional to one another. As the mathematical product of ST changes, the value for horsepower will change by the same proportion. In electric circuits, power is a function of both voltage and current. Not surprisingly, this relationship bears striking resemblance to the “proportional” horsepower formula above: $P=IE$ In this case, however, power ($P$) is exactly equal to current ($I$) multiplied by voltage ($E$), rather than merely being proportional to IE. When using this formula, the unit of measurement for power is the watt, abbreviated with the letter “W.” It must be understood that neither voltage nor current by itself constitutes power. Rather, power is the combination of both voltage and current in a circuit. Remember that voltage is the specific work (or potential energy) per unit charge, while current is the rate at which electric charges move through a conductor. Voltage (specific work) is analogous to the work done in lifting a weight against the pull of gravity. Current (rate) is analogous to the speed at which that weight is lifted. Together as a product (multiplication), voltage (work) and current (rate) constitute power. Just as in the case of the diesel tractor engine and the motorcycle engine, a circuit with high voltage and low current may be dissipating the same amount of power as a circuit with low voltage and high current. Neither the amount of voltage alone nor the amount of current alone indicates the amount of power in an electric circuit. In an open circuit, where voltage is present between the terminals of the source and there is zero current, there is zero power dissipated, no matter how great that voltage may be. Since P=IE and I=0 and anything multiplied by zero is zero, the power dissipated in any open circuit must be zero. 
Likewise, if we were to have a short circuit constructed of a loop of superconducting wire (absolutely zero resistance), we could have a condition of current in the loop with zero voltage, and likewise no power would be dissipated. Since P=IE and E=0 and anything multiplied by zero is zero, the power dissipated in a superconducting loop must be zero. (We’ll be exploring the topic of superconductivity in a later chapter). Whether we measure power in the unit of “horsepower” or the unit of “watt,” we’re still talking about the same thing: how much work can be done in a given amount of time. The two units are not numerically equal, but they express the same kind of thing. In fact, European automobile manufacturers typically advertise their engine power in terms of kilowatts (kW), or thousands of watts, instead of horsepower! These two units of power are related to each other by a simple conversion formula: $1 \text{ horsepower} = 745.7 \text{ watts}$ So, our 100 horsepower diesel and motorcycle engines could also be rated as “74570 watt” engines, or more properly, as “74.57 kilowatt” engines. In European engineering specifications, this rating would be the norm rather than the exception. Review • Power is the measure of how much work can be done in a given amount of time. • Mechanical power is commonly measured (in America) in “horsepower.” • Electrical power is almost always measured in “watts,” and it can be calculated by the formula P = IE. • Electrical power is a product of both voltage and current, not either one separately. • Horsepower and watts are merely two different units for describing the same kind of physical measurement, with 1 horsepower equaling 745.7 watts. 
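The conversion between the two units (1 horsepower = 745.7 watts) is easy to sketch in code, reproducing the 100 horsepower example above:

```python
WATTS_PER_HP = 745.7   # 1 horsepower = 745.7 watts

def hp_to_kw(horsepower):
    """Convert horsepower to kilowatts (1000 watts)."""
    return horsepower * WATTS_PER_HP / 1000

print(hp_to_kw(100))   # 74.57 kW, matching the text's 100 HP engines
```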
2.04: Calculating Electric Power Learn the Power Formula We’ve seen the formula for determining the power in an electric circuit — by multiplying the voltage in “volts” by the current in “amps” we arrive at an answer in “watts.” Let’s apply this to a circuit example: How to Use Ohm’s Law to Determine Current In the above circuit, we know we have a battery voltage of 18 volts and a lamp resistance of 3 Ω. Using Ohm’s Law to determine current, we get: $I=\frac{E}{R}=\frac{18 \mathrm{V}}{3 \Omega}=6 \mathrm{A}$ Now that we know the current, we can take that value and multiply it by the voltage to determine power: $P=IE=(6 \mathrm{A})(18 \mathrm{V})=108 \mathrm{W}$ This tells us that the lamp is dissipating (releasing) 108 watts of power, most likely in the form of both light and heat. Increasing the Battery Voltage Let’s try taking that same circuit and increasing the battery voltage to see what happens. Intuition should tell us that the circuit current will increase as the voltage increases and the lamp resistance stays the same. Likewise, the power will increase as well: Now, the battery voltage is 36 volts instead of 18 volts. The lamp is still providing 3 Ω of electrical resistance to the flow of electrons. The current is now: $I=\frac{E}{R}=\frac{36 \mathrm{V}}{3 \Omega}=12 \mathrm{A}$ This stands to reason: if I = E/R, and we double E while R stays the same, the current should double. Indeed, it has: we now have 12 amps of current instead of 6. Now, what about power? $P=IE=(12 \mathrm{A})(36 \mathrm{V})=432 \mathrm{W}$ What Does Increasing the Battery Voltage Do to Power? Notice that the power has increased just as we might have suspected, but it increased quite a bit more than the current. Why is this? Because power is a function of voltage multiplied by current, and both voltage and current doubled from their previous values, the power will increase by a factor of 2 x 2, or 4. 
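The two calculations above can be replayed in a few lines of code, using the same 18 volt / 36 volt values from the text:

```python
R = 3.0                       # lamp resistance stays constant at 3 ohms

results = {}
for E in (18.0, 36.0):        # battery voltage, before and after doubling
    I = E / R                 # Ohm's Law: I = E/R
    P = I * E                 # power formula: P = IE
    results[E] = (I, P)
    print(f"E = {E} V -> I = {I} A, P = {P} W")

# Doubling the voltage doubled the current but quadrupled the power:
print(results[36.0][1] / results[18.0][1])   # 4.0
```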
You can check this by dividing 432 watts by 108 watts and seeing that the ratio between them is indeed 4. Using algebra again to manipulate the formulae, we can take our original power formula and modify it for applications where we don’t know both voltage and current: If we only know voltage ($E$) and resistance ($R$): $P = \frac{E^2}{R}$ If we only know current ($I$) and resistance ($R$): $P = I^2 R$ A historical note: it was James Prescott Joule, not Georg Simon Ohm, who first discovered the mathematical relationship between power dissipation and current through a resistance. This discovery, published in 1841, followed the form of the last equation (P = I²R), and is properly known as Joule’s Law. However, these power equations are so commonly associated with the Ohm’s Law equations relating voltage, current, and resistance (E=IR ; I=E/R ; and R=E/I) that they are frequently credited to Ohm. Review • Power is measured in watts, symbolized by the letter “W”. • Joule’s Law: P = I²R ; P = IE ; P = E²/R
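All three power formulas agree with one another, as a quick check shows, using the 18 volt, 3 ohm circuit from the previous section:

```python
E, R = 18.0, 3.0     # the 18 volt, 3 ohm circuit from the previous section
I = E / R            # Ohm's Law gives 6 amps

p_ie  = I * E        # P = IE   (voltage and current known)
p_e2r = E**2 / R     # P = E²/R (voltage and resistance known)
p_i2r = I**2 * R     # P = I²R  (current and resistance known; Joule's Law)

print(p_ie, p_e2r, p_i2r)   # all three forms give 108.0 watts
```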
What is a Resistor? Special components called resistors are made for the express purpose of creating a precise quantity of resistance for insertion into a circuit. They are typically constructed of metal wire or carbon, and engineered to maintain a stable resistance value over a wide range of environmental conditions. Unlike lamps, they do not produce light, but they do produce heat as electric power is dissipated by them in a working circuit. Typically, though, the purpose of a resistor is not to produce usable heat, but simply to provide a precise quantity of electrical resistance. Resistor Schematic Symbols The most common schematic symbol for a resistor is a zig-zag line: Resistor values in ohms are usually shown as an adjacent number, and if several resistors are present in a circuit, they will be labeled with a unique identifier number such as R1, R2, R3, etc. As you can see, resistor symbols can be shown either horizontally or vertically: Real resistors look nothing like the zig-zag symbol. Instead, they look like small tubes or cylinders with two wires protruding for connection to a circuit. Here is a sampling of different kinds and sizes of resistors: In keeping more with their physical appearance, an alternative schematic symbol for a resistor looks like a small, rectangular box: Resistors can also be shown to have varying rather than fixed resistances. This might be for the purpose of describing an actual physical device designed for the purpose of providing an adjustable resistance, or it could be to show some component that just happens to have an unstable resistance: In fact, any time you see a component symbol drawn with a diagonal arrow through it, that component has a variable rather than a fixed value. This symbol “modifier” (the diagonal arrow) is standard electronic symbol convention. 
Variable Resistors Variable resistors must have some physical means of adjustment, either a rotating shaft or lever that can be moved to vary the amount of electrical resistance. Here is a photograph showing some devices called potentiometers, which can be used as variable resistors: Power Rating of Resistors Because resistors dissipate heat energy as the electric currents through them overcome the “friction” of their resistance, resistors are also rated in terms of how much heat energy they can dissipate without overheating and sustaining damage. Naturally, this power rating is specified in the physical unit of “watts.” Most resistors found in small electronic devices such as portable radios are rated at 1/4 (0.25) watt or less. The power rating of any resistor is roughly proportional to its physical size. Note in the first resistor photograph how the power ratings relate with size: the bigger the resistor, the higher its power dissipation rating. Also, note how resistances (in ohms) have nothing to do with size! Although it may seem pointless now to have a device doing nothing but resisting electric current, resistors are extremely useful devices in circuits. Because they are simple and so commonly used throughout the world of electricity and electronics, we’ll spend a considerable amount of time analyzing circuits composed of nothing but resistors and batteries. How are Resistors Useful? For a practical illustration of resistors’ usefulness, examine the photograph below. It is a picture of a printed circuit board, or PCB: an assembly made of sandwiched layers of insulating phenolic fiber-board and conductive copper strips, into which components may be inserted and secured by a low-temperature welding process called “soldering.” The various components on this circuit board are identified by printed labels. Resistors are denoted by any label beginning with the letter “R”. 
This particular circuit board is a computer accessory called a “modem,” which allows digital information transfer over telephone lines. There are at least a dozen resistors (all rated at 1/4 watt power dissipation) that can be seen on this modem’s board. Every one of the black rectangles (called “integrated circuits” or “chips”) contain their own array of resistors for their internal functions, as well. Another circuit board example shows resistors packaged in even smaller units, called “surface mount devices.” This particular circuit board is the underside of a personal computer hard disk drive, and once again the resistors soldered onto it are designated with labels beginning with the letter “R”: There are over one hundred surface-mount resistors on this circuit board, and this count, of course, does not include the number of resistors internal to the black “chips.” These two photographs should convince anyone that resistors—devices that “merely” oppose the flow of electrons—are very important components in the realm of electronics! “Load” on Schematic Diagrams In schematic diagrams, resistor symbols are sometimes used to illustrate any general type of device in a circuit doing something useful with electrical energy. Any non-specific electrical device is generally called a load, so if you see a schematic diagram showing a resistor symbol labeled “load,” especially in a tutorial circuit diagram explaining some concept unrelated to the actual use of electrical power, that symbol may just be a kind of shorthand representation of something else more practical than a resistor. Analyzing Resistor Circuits To summarize what we’ve learned in this lesson, let’s analyze the following circuit, determining all that we can from the information given: All we’ve been given here to start with is the battery voltage (10 volts) and the circuit current (2 amps). We don’t know the resistor’s resistance in ohms or the power dissipated by it in watts. 
Surveying our array of Ohm’s Law equations, we find two equations that give us answers from known quantities of voltage and current: Inserting the known quantities of voltage (E) and current (I) into these two equations, we can determine circuit resistance (R) and power dissipation (P): For the circuit conditions of 10 volts and 2 amps, the resistor’s resistance must be 5 Ω. If we were designing a circuit to operate at these values, we would have to specify a resistor with a minimum power rating of 20 watts, or else it would overheat and fail. Resistor Materials Resistors can be found in a variety of different materials, each one with its own properties and specific areas of use. Most electrical engineers use the types found below: Wirewound (WW) Wire wound resistors are manufactured by winding resistance wire around a non-conductive core in a spiral. They are typically produced for high precision and power applications. The core is usually made of ceramic or fiberglass, and the resistance wire is made of nickel-chromium alloy. Wire wound resistors are not suitable for applications with frequencies higher than 50 kHz. Low noise and stability with respect to temperature variations are standard characteristics of wire wound resistors. Resistance values are available from 0.1 Ω up to 100 kΩ, with accuracies between 0.1% and 20%. Metal Film Nichrome or tantalum nitride are typically used for metal film resistors. A combination of a ceramic material and a metal typically makes up the resistive material. The resistance value is set by cutting a spiral pattern in the film with a laser or abrasive, much as with carbon film resistors. Metal film resistors are usually less stable over temperature than wire wound resistors, but handle higher frequencies better. Metal Oxide Film Metal oxide resistors use metal oxides such as tin oxide, making them slightly different from metal film resistors. These resistors are reliable and stable and operate at higher temperatures than metal film resistors. 
Because of this, metal oxide film resistors are used in applications that require high endurance. Foil Developed in the 1960s, the foil resistor is still one of the most accurate and stable types of resistor available, and it is used in applications with high precision requirements. A ceramic substrate that has a thin bulk metal foil cemented to it makes up the resistive element. Foil resistors feature a very low temperature coefficient of resistance. Carbon Composition (CCR) Until the 1960s, carbon composition resistors were the standard for most applications. They are reliable, but not very accurate (their tolerance cannot be better than about 5%). A mixture of fine carbon particles and non-conductive ceramic material is used for the resistive element of CCR resistors. The substance is molded into the shape of a cylinder and baked. The dimensions of the body and the ratio of carbon to ceramic material determine the resistance value: more carbon in the mixture means a lower resistance. CCR resistors are still useful for certain applications because of their ability to withstand high-energy pulses; a good example application would be in a power supply. Carbon Film Carbon film resistors have a thin carbon film (with a spiral cut in the film to increase the resistive path) on an insulating cylindrical core. The spiral cut allows the resistance value to be set more accurately and also increases the resistance value. Carbon film resistors are much more accurate than carbon composition resistors. Special carbon film resistors are used in applications that require high pulse stability. Review • Devices called resistors are built to provide precise amounts of resistance in electric circuits. • Resistors are rated both in terms of their resistance (ohms) and their ability to dissipate heat energy (watts). 
• Resistor resistance ratings cannot be determined from the physical size of the resistor(s) in question, although approximate power ratings can. The larger the resistor is, the more power it can safely dissipate without suffering damage. • Any device that performs some useful task with electric power is generally known as a load. Sometimes resistor symbols are used in schematic diagrams to designate a non-specific load, rather than an actual resistor.
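Returning to the circuit analysis example worked earlier in this section (10 volts, 2 amps), the resistance and the required power rating can be checked in code:

```python
E = 10.0    # battery voltage from the example (volts)
I = 2.0     # circuit current from the example (amps)

R = E / I   # Ohm's Law: R = E/I
P = I * E   # power dissipated: P = IE

print(R, P)   # 5.0 ohms, 20.0 watts
# A real design would specify a resistor rated for at least 20 W;
# in practice some safety margin above that would be chosen (the
# size of the margin is a design decision, not given in the text).
```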
For instance, in our previous circuit example with the 3 Ω lamp, we calculated current through the circuit by dividing voltage by resistance (I=E/R). With an 18 volt battery, our circuit current was 6 amps. Doubling the battery voltage to 36 volts resulted in a doubled current of 12 amps. All of this makes sense, of course, so long as the lamp continues to provide exactly the same amount of friction (resistance) to the flow of electrons through it: 3 Ω. However, reality is not always this simple. One of the phenomena explored in a later chapter is that of conductor resistance changing with temperature. In an incandescent lamp (the kind employing the principle of electric current heating a thin filament of wire to the point that it glows white-hot), the resistance of the filament wire will increase dramatically as it warms from room temperature to operating temperature. If we were to increase the supply voltage in a real lamp circuit, the resulting increase in current would cause the filament to increase temperature, which would in turn increase its resistance, thus preventing further increases in current without further increases in battery voltage. Consequently, voltage and current do not follow the simple equation “I=E/R” (with R assumed to be equal to 3 Ω) because an incandescent lamp’s filament resistance does not remain stable for different currents. The phenomenon of resistance changing with variations in temperature is one shared by almost all metals, of which most wires are made. For most applications, these changes in resistance are small enough to be ignored. In the application of metal lamp filaments, the change happens to be quite large. This is just one example of “nonlinearity” in electric circuits. It is by no means the only example. A “linear” function in mathematics is one that tracks a straight line when plotted on a graph. 
The simplified version of the lamp circuit with a constant filament resistance of 3 Ω generates a plot like this: The straight-line plot of current over voltage indicates that resistance is a stable, unchanging value for a wide range of circuit voltages and currents. In an “ideal” situation, this is the case. Resistors, which are manufactured to provide a definite, stable value of resistance, behave very much like the plot of values seen above. A mathematician would call their behavior “linear.” A more realistic analysis of a lamp circuit, however, over several different values of battery voltage would generate a plot of this shape: The plot is no longer a straight line. It rises sharply on the left, as voltage increases from zero to a low level. As it progresses to the right we see the line flattening out, the circuit requiring greater and greater increases in voltage to achieve equal increases in current. If we try to apply Ohm’s Law to find the resistance of this lamp circuit with the voltage and current values plotted above, we arrive at several different values. We could say that the resistance here is nonlinear, increasing with increasing current and voltage. The nonlinearity is caused by the effects of high temperature on the metal wire of the lamp filament. Another example of nonlinear current conduction is through gases such as air. At standard temperatures and pressures, air is an effective insulator. However, if the voltage between two conductors separated by an air gap is increased greatly enough, the air molecules between the gap will become “ionized,” having their electrons stripped off by the force of the high voltage between the wires. Once ionized, air (and other gases) become good conductors of electricity, allowing electron flow where none could exist prior to ionization. 
If we were to plot current over voltage on a graph as we did with the lamp circuit, the effect of ionization would be clearly seen as nonlinear: The graph shown is approximate for a small air gap (less than one inch). A larger air gap would yield a higher ionization potential, but the shape of the I/E curve would be very similar: practically no current until the ionization potential was reached, then substantial conduction after that. Incidentally, this is the reason lightning bolts exist as momentary surges rather than continuous flows of electrons. The voltage built up between the earth and clouds (or between different sets of clouds) must increase to the point where it overcomes the ionization potential of the air gap before the air ionizes enough to support a substantial flow of electrons. Once it does, the current will continue to conduct through the ionized air until the static charge between the two points depletes. Once the charge depletes enough so that the voltage falls below another threshold point, the air de-ionizes and returns to its normal state of extremely high resistance. Many solid insulating materials exhibit similar resistance properties: extremely high resistance to electron flow below some critical threshold voltage, then a much lower resistance at voltages beyond that threshold. Once a solid insulating material has been compromised by high-voltage breakdown, as it is called, it often does not return to its former insulating state, unlike most gases. It may insulate once again at low voltages, but its breakdown threshold voltage will have been decreased to some lower level, which may allow breakdown to occur more easily in the future. This is a common mode of failure in high-voltage wiring: insulation damage due to breakdown. Such failures may be detected through the use of special resistance meters employing high voltage (1000 volts or more). 
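The air-gap behavior described above can be caricatured as a piecewise conduction model: essentially no current below the ionization potential, then ordinary Ohm's Law conduction once the air has ionized. The threshold and resistance values below are invented for illustration, not measured figures:

```python
def air_gap_current(E, ionization_potential=20_000.0, r_ionized=100.0):
    """Hypothetical piecewise model of a small air gap:
    effectively an insulator below the ionization potential,
    a modest resistance once the air has ionized.
    Both parameter values are illustrative assumptions."""
    if E < ionization_potential:
        return 0.0                # practically no current: air insulates
    return E / r_ionized          # ionized air conducts: I = E/R applies

print(air_gap_current(5_000))     # 0.0 -- below threshold, no conduction
print(air_gap_current(25_000))    # 250.0 -- substantial conduction
```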
There are circuit components specifically engineered to provide nonlinear resistance curves, one of them being the varistor. Commonly manufactured from compounds such as zinc oxide or silicon carbide, these devices maintain high resistance across their terminals until a certain “firing” or “breakdown” voltage (equivalent to the “ionization potential” of an air gap) is reached, at which point their resistance decreases dramatically. Unlike the breakdown of an insulator, varistor breakdown is repeatable: that is, it is designed to withstand repeated breakdowns without failure. A picture of a varistor is shown here: There are also special gas-filled tubes designed to do much the same thing, exploiting the very same principle at work in the ionization of air by a lightning bolt. Other electrical components exhibit even stranger current/voltage curves than this. Some devices actually experience a decrease in current as the applied voltage increases. Because the slope of the current/voltage for this phenomenon is negative (angling down instead of up as it progresses from left to right), it is known as negative resistance. Most notably, high-vacuum electron tubes known as tetrodes and semiconductor diodes known as Esaki or tunnel diodes exhibit negative resistance for certain ranges of applied voltage. Ohm’s Law is not very useful for analyzing the behavior of components like these where resistance varies with voltage and current. Some have even suggested that “Ohm’s Law” should be demoted from the status of a “Law” because it is not universal. It might be more accurate to call the equation (R=E/I) a definition of resistance, befitting of a certain class of materials under a narrow range of conditions. For the benefit of the student, however, we will assume that resistances specified in example circuits are stable over a wide range of conditions unless otherwise specified. 
I just wanted to expose you to a little bit of the complexity of the real world, lest I give you the false impression that the whole of electrical phenomena could be summarized in a few simple equations. Review • The resistance of most conductive materials is stable over a wide range of conditions, but this is not true of all materials. • Any function that can be plotted on a graph as a straight line is called a linear function. For circuits with stable resistances, the plot of current over voltage is linear (I=E/R). • In circuits where resistance varies with changes in either voltage or current, the plot of current over voltage will be nonlinear (not a straight line). • A varistor is a component that changes resistance with the amount of voltage impressed across it. With little voltage across it, its resistance is high. Then, at a certain “breakdown” or “firing” voltage, its resistance decreases dramatically. • Negative resistance is where the current through a component actually decreases as the applied voltage across it is increased. Some electron tubes and semiconductor diodes (most notably, the tetrode tube and the Esaki, or tunnel diode, respectively) exhibit negative resistance over a certain range of voltages.
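The lamp-filament nonlinearity described in this section can be illustrated with a toy model in which filament resistance grows with current, so the operating current must be found iteratively. The model form and its coefficients are invented for illustration only, not taken from any real lamp:

```python
def lamp_current(E, r_cold=1.0, k=0.5, iters=100):
    """Toy nonlinear lamp: resistance rises with current,
    R = r_cold + k*I, so I is solved by fixed-point iteration.
    All coefficients are hypothetical, chosen for illustration."""
    I = E / r_cold                    # initial guess using cold resistance
    for _ in range(iters):
        I = E / (r_cold + k * I)      # update with current-dependent R
    return I

for E in (1, 2, 4, 8, 16):
    print(E, round(lamp_current(E), 2))
# Current grows more slowly than voltage: doubling E less than
# doubles I, which is the "flattening" of the I/E plot described
# earlier in this section.
```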
So far, we’ve been analyzing single-battery, single-resistor circuits with no regard for the connecting wires between the components, so long as a complete circuit is formed. Does the wire length or circuit “shape” matter to our calculations? Let’s look at a couple of circuit configurations and find out: When we draw wires connecting points in a circuit, we usually assume those wires have negligible resistance. As such, they contribute no appreciable effect to the overall resistance of the circuit, and so the only resistance we have to contend with is the resistance in the components. In the above circuits, the only resistance comes from the 5 Ω resistors, so that is all we will consider in our calculations. In real life, metal wires actually do have resistance (and so do power sources!), but those resistances are generally so much smaller than the resistance present in the other circuit components that they can be safely ignored. Exceptions to this rule exist in power system wiring, where even very small amounts of conductor resistance can create significant voltage drops given normal (high) levels of current. If connecting wire resistance is very little or none, we can regard the connected points in a circuit as being electrically common. That is, points 1 and 2 in the above circuits may be physically joined close together or far apart, and it doesn’t matter for any voltage or resistance measurements relative to those points. The same goes for points 3 and 4. It is as if the ends of the resistor were attached directly across the terminals of the battery, so far as our Ohm’s Law calculations and voltage measurements are concerned. This is useful to know, because it means you can re-draw a circuit diagram or re-wire a circuit, shortening or lengthening the wires as desired without appreciably impacting the circuit’s function. All that matters is that the components attach to each other in the same sequence. 
It also means that voltage measurements between sets of “electrically common” points will be the same. That is, the voltage between points 1 and 4 (directly across the battery) will be the same as the voltage between points 2 and 3 (directly across the resistor). Take a close look at the following circuit, and try to determine which points are common to each other: Here, we only have 2 components excluding the wires: the battery and the resistor. Though the connecting wires take a convoluted path in forming a complete circuit, there are several electrically common points in the electrons’ path. Points 1, 2, and 3 are all common to each other, because they’re directly connected together by wire. The same goes for points 4, 5, and 6. The voltage between points 1 and 6 is 10 volts, coming straight from the battery. However, since points 5 and 4 are common to 6, and points 2 and 3 common to 1, that same 10 volts also exists between these other pairs of points: Since electrically common points are connected together by (zero resistance) wire, there is no significant voltage drop between them regardless of the amount of current conducted from one to the next through that connecting wire. Thus, if we were to read voltages between common points, we should show (practically) zero: This makes sense mathematically, too. With a 10 volt battery and a 5 Ω resistor, the circuit current will be 2 amps. With wire resistance being zero, the voltage drop across any continuous stretch of wire can be determined through Ohm’s Law as such: It should be obvious that the calculated voltage drop across any uninterrupted length of wire in a circuit where wire is assumed to have zero resistance will always be zero, no matter what the magnitude of current, since zero multiplied by anything equals zero. Because common points in a circuit will exhibit the same relative voltage and resistance measurements, wires connecting common points are often labeled with the same designation. 
This is not to say that the terminal connection points are labeled the same, just the connecting wires. Take this circuit as an example: Points 1, 2, and 3 are all common to each other, so the wire connecting point 1 to 2 is labeled the same (wire 2) as the wire connecting point 2 to 3 (wire 2). In a real circuit, the wire stretching from point 1 to 2 may not even be the same color or size as the wire connecting point 2 to 3, but they should bear the exact same label. The same goes for the wires connecting points 6, 5, and 4. Knowing that electrically common points have zero voltage drop between them is a valuable troubleshooting principle. If I measure for voltage between points in a circuit that are supposed to be common to each other, I should read zero. If, however, I read substantial voltage between those two points, then I know with certainty that they cannot be directly connected together. If those points are supposed to be electrically common but they register otherwise, then I know that there is an “open failure” between those points. One final note: for most practical purposes, wire conductors can be assumed to possess zero resistance from end to end. In reality, however, there will always be some small amount of resistance encountered along the length of a wire, unless it’s a superconducting wire. Knowing this, we need to bear in mind that the principles learned here about electrically common points are all valid to a large degree, but not to an absolute degree. That is, the rule that electrically common points are guaranteed to have zero voltage between them is more accurately stated as such: electrically common points will have very little voltage dropped between them. That small, virtually unavoidable trace of resistance found in any piece of connecting wire is bound to create a small voltage across the length of it as current is conducted through.
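A quick numeric sketch of that point, using the 10 volt, 5 Ω example circuit from above (the 0.01 Ω figure for a realistic stretch of wire is an assumption chosen for illustration, not a value from the text):

```python
# Voltage dropped along a wire, ideal vs. realistic, via Ohm's Law (E = I x R).

battery_voltage = 10.0    # volts
resistor = 5.0            # ohms
current = battery_voltage / resistor   # 2 amps around the whole loop

ideal_wire = 0.0          # ohms -- the usual assumption for connecting wire
real_wire = 0.01          # ohms -- hypothetical short length of copper

print(current * ideal_wire)   # 0.0 V: truly zero drop between "common" points
print(current * real_wire)    # 0.02 V: very little, but not absolutely zero
```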
So long as you understand that these rules are based upon ideal conditions, you won’t be perplexed when you come across some condition appearing to be an exception to the rule.

Review

• Connecting wires in a circuit are assumed to have zero resistance unless otherwise stated.
• Wires in a circuit can be shortened or lengthened without impacting the circuit’s function—all that matters is that the components are attached to one another in the same sequence.
• Points directly connected together in a circuit by zero resistance (wire) are considered to be electrically common.
• Electrically common points, with zero resistance between them, will have zero voltage dropped between them, regardless of the magnitude of current (ideally).
• The voltage or resistance readings referenced between sets of electrically common points will be the same.
• These rules apply to ideal conditions, where connecting wires are assumed to possess absolutely zero resistance. In real life this will probably not be the case, but wire resistances should be low enough so that the general principles stated here still hold.

2.08: Polarity of voltage drops

We can trace the direction that electrons will flow in the same circuit by starting at the negative (-) terminal and following through to the positive (+) terminal of the battery, the only source of voltage in the circuit. From this we can see that the electrons are moving counter-clockwise, from point 6 to 5 to 4 to 3 to 2 to 1 and back to 6 again. As the current encounters the 5 Ω resistance, voltage is dropped across the resistor’s ends. The polarity of this voltage drop is negative (-) at point 4 with respect to positive (+) at point 3.
We can mark the polarity of the resistor’s voltage drop with these negative and positive symbols, in accordance with the direction of current (whichever end of the resistor the current is entering is negative with respect to the end of the resistor it is exiting): We could make our table of voltages a little more complete by marking the polarity of the voltage for each pair of points in this circuit: While it might seem a little silly to document polarity of voltage drop in this circuit, it is an important concept to master. It will be critically important in the analysis of more complex circuits involving multiple resistors and/or batteries. It should be understood that polarity has nothing to do with Ohm’s Law: there will never be negative voltages, currents, or resistance entered into any Ohm’s Law equations! There are other mathematical principles of electricity that do take polarity into account through the use of signs (+ or -), but not Ohm’s Law.

Review

• The polarity of the voltage drop across any resistive component is determined by the direction of electron flow through it: negative entering, and positive exiting.
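The point-pair voltage table just described can be mimicked in a few lines of Python (a sketch, not part of the original text; the point numbers and the 10 volt figure follow the example circuit). The sign of each reading is the polarity:

```python
# Assign every electrically common group of points one potential, then read
# any "voltage between points" as a difference -- the sign of the result is
# the polarity of the reading.

potential = {1: 10.0, 2: 10.0, 3: 10.0,   # points common to the battery's + side
             4: 0.0,  5: 0.0,  6: 0.0}    # points common to the battery's - side

def voltage_between(a, b):
    """Voltage at point a with respect to point b."""
    return potential[a] - potential[b]

print(voltage_between(3, 4))   # +10.0: point 3 is positive with respect to point 4
print(voltage_between(4, 3))   # -10.0: the same drop, opposite reference
print(voltage_between(1, 2))   # 0.0: electrically common points
```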
These same programs can be fantastic aids to the beginning student of electronics, allowing the exploration of ideas quickly and easily with no assembly of real circuits required. Of course, there is no substitute for actually building and testing real circuits, but computer simulations certainly assist in the learning process by allowing the student to experiment with changes and see the effects they have on circuits. Throughout this book, I’ll be incorporating computer printouts from circuit simulation frequently in order to illustrate important concepts. By observing the results of a computer simulation, a student can gain an intuitive grasp of circuit behavior without the intimidation of abstract mathematical analysis. To simulate circuits on computer, I make use of a particular program called SPICE, which works by describing a circuit to the computer by means of a listing of text. In essence, this listing is a kind of computer program in itself, and must adhere to the syntactical rules of the SPICE language. The computer is then used to process, or “run,” the SPICE program, which interprets the text listing describing the circuit and outputs the results of its detailed mathematical analysis, also in text form. Many details of using SPICE are described in volume 5 (“Reference”) of this book series for those wanting more information. Here, I’ll just introduce the basic concepts and then apply SPICE to the analysis of these simple circuits we’ve been reading about. First, we need to have SPICE installed on our computer. As a free program, it is commonly available on the internet for download, and in formats appropriate for many different operating systems. In this book, I use one of the earlier versions of SPICE: version 2G6, for its simplicity of use. Next, we need a circuit for SPICE to analyze. Let’s try one of the circuits illustrated earlier in the chapter. 
Here is its schematic diagram: This simple circuit consists of a battery and a resistor connected directly together. We know the voltage of the battery (10 volts) and the resistance of the resistor (5 Ω), but nothing else about the circuit. If we describe this circuit to SPICE, it should be able to tell us (at the very least), how much current we have in the circuit by using Ohm’s Law (I=E/R). SPICE cannot directly understand a schematic diagram or any other form of graphical description. SPICE is a text-based computer program, and demands that a circuit be described in terms of its constituent components and connection points. Each unique connection point in a circuit is described for SPICE by a “node” number. Points that are electrically common to each other in the circuit to be simulated are designated as such by sharing the same number. It might be helpful to think of these numbers as “wire” numbers rather than “node” numbers, following the definition given in the previous section. This is how the computer knows what’s connected to what: by the sharing of common wire, or node, numbers. In our example circuit, we only have two “nodes,” the top wire and the bottom wire. SPICE demands there be a node 0 somewhere in the circuit, so we’ll label our wires 0 and 1: In the above illustration, I’ve shown multiple “1” and “0” labels around each respective wire to emphasize the concept of common points sharing common node numbers, but still this is a graphic image, not a text description. SPICE needs to have the component values and node numbers given to it in text form before any analysis may proceed. Creating a text file in a computer involves the use of a program called a text editor. Similar to a word processor, a text editor allows you to type text and record what you’ve typed in the form of a file stored on the computer’s hard disk. 
Text editors lack the formatting ability of word processors (no italic, bold, or underlined characters), and this is a good thing, since programs such as SPICE wouldn’t know what to do with this extra information. If we want to create a plain-text file, with absolutely nothing recorded except the keyboard characters we select, a text editor is the tool to use. If using a Microsoft operating system such as DOS or Windows, a couple of text editors are readily available with the system. In DOS, there is the old Edit text editing program, which may be invoked by typing edit at the command prompt. In Windows (3.x/95/98/NT/Me/2k/XP), the Notepad text editor is your stock choice. Many other text editing programs are available, and some are even free. I happen to use a free text editor called Vim, and run it under both Windows 95 and Linux operating systems. It matters little which editor you use, so don’t worry if the screenshots in this section don’t look like yours; the important information here is what you type, not which editor you happen to use. To describe this simple, two-component circuit to SPICE, I will begin by invoking my text editor program and typing in a “title” line for the circuit: We can describe the battery to the computer by typing in a line of text starting with the letter “v” (for “Voltage source”), identifying which wire each terminal of the battery connects to (the node numbers), and the battery’s voltage, like this: This line of text tells SPICE that we have a voltage source connected between nodes 1 and 0, direct current (DC), 10 volts. That’s all the computer needs to know regarding the battery. Now we turn to the resistor: SPICE requires that resistors be described with a letter “r,” the numbers of the two nodes (connection points), and the resistance in ohms. Since this is a computer simulation, there is no need to specify a power rating for the resistor. 
That’s one nice thing about “virtual” components: they can’t be harmed by excessive voltages or currents! Now, SPICE will know there is a resistor connected between nodes 1 and 0 with a value of 5 Ω. This very brief line of text tells the computer we have a resistor (”r”) connected between the same two nodes as the battery (1 and 0), with a resistance value of 5 Ω. If we add an .end statement to this collection of SPICE commands to indicate the end of the circuit description, we will have all the information SPICE needs, collected in one file and ready for processing. This circuit description, comprised of lines of text in a computer file, is technically known as a netlist, or deck: Once we have finished typing all the necessary SPICE commands, we need to “save” them to a file on the computer’s hard disk so that SPICE has something to reference when invoked. Since this is my first SPICE netlist, I’ll save it under the filename “circuit1.cir” (the actual name being arbitrary). You may elect to name your first SPICE netlist something completely different, just as long as you don’t violate any filename rules for your operating system, such as using no more than 8+3 characters (eight characters in the name, and three characters in the extension: 12345678.123) in DOS. To invoke SPICE (tell it to process the contents of the circuit1.cir netlist file), we have to exit from the text editor and access a command prompt (the “DOS prompt” for Microsoft users) where we can enter text commands for the computer’s operating system to obey. This “primitive” way of invoking a program may seem archaic to computer users accustomed to a “point-and-click” graphical environment, but it is a very powerful and flexible way of doing things.
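Assembled from the pieces described above, the finished netlist (deck) might look something like this — a reconstruction from the description, with an arbitrary title line and component names:

```text
simple circuit
v1 1 0 dc 10
r1 1 0 5
.end
```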
Remember, what you’re doing here by using SPICE is a simple form of computer programming, and the more comfortable you become in giving the computer text-form commands to follow—as opposed to simply clicking on icon images using a mouse—the more mastery you will have over your computer. Once at a command prompt, type in this command, followed by an [Enter] keystroke (this example uses the filename circuit1.cir; if you have chosen a different filename for your netlist file, substitute it): Here is how this looks on my computer (running the Linux operating system), just before I press the [Enter] key: As soon as you press the [Enter] key to issue this command, text from SPICE’s output should scroll by on the computer screen. Here is a screenshot showing what SPICE outputs on my computer (I’ve lengthened the “terminal” window to show you the full text. With a normal-size terminal, the text easily exceeds one page length): SPICE begins with a reiteration of the netlist, complete with title line and .end statement. About halfway through the simulation it displays the voltage at all nodes with reference to node 0. In this example, we only have one node other than node 0, so it displays the voltage there: 10.0000 volts. Then it displays the current through each voltage source. Since we only have one voltage source in the entire circuit, it only displays the current through that one. In this case, the source current is 2 amps. Due to a quirk in the way SPICE analyzes current, the value of 2 amps is output as a negative (-) 2 amps. The last line of text in the computer’s analysis report is “total power dissipation,” which in this case is given as “2.00E+01” watts: 2.00 x 10^1, or 20 watts. SPICE outputs most figures in scientific notation rather than normal (fixed-point) notation. While this may seem to be more confusing at first, it is actually less confusing when very large or very small numbers are involved.
The details of scientific notation will be covered in the next chapter of this book. One of the benefits of using a “primitive” text-based program such as SPICE is that the text files dealt with are extremely small compared to other file formats, especially graphical formats used in other circuit simulation software. Also, the fact that SPICE’s output is plain text means you can direct SPICE’s output to another text file where it may be further manipulated. To do this, we re-issue a command to the computer’s operating system to invoke SPICE, this time redirecting the output to a file I’ll call “output.txt”: SPICE will run “silently” this time, without the stream of text output to the computer screen as before. A new file, output.txt, will be created, which you may open and change using a text editor or word processor. For this illustration, I’ll use the same text editor (Vim) to open this file: Now, I may freely edit this file, deleting any extraneous text (such as the “banners” showing date and time), leaving only the text that I feel to be pertinent to my circuit’s analysis: Once suitably edited and re-saved under the same filename (output.txt in this example), the text may be pasted into any kind of document, “plain text” being a universal file format for almost all computer systems. I can even include it directly in the text of this book—rather than as a “screenshot” graphic image—like this: Incidentally, this is the preferred format for text output from SPICE simulations in this book series: as real text, not as graphic screenshot images. To alter a component value in the simulation, we need to open up the netlist file (circuit1.cir) and make the required modifications in the text description of the circuit, then save those changes to the same filename, and re-invoke SPICE at the command prompt. This process of editing and processing a text file is one familiar to every computer programmer.
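SPICE’s E-notation is the same scientific notation most programming languages accept directly. A short Python check using the figures quoted above (the strings here are typed by hand to match those figures, not captured SPICE output):

```python
# "2.00E+01" means 2.00 x 10^1 = 20. Python's float() parses E-notation natively.

power = float("2.00E+01")       # total power dissipation: 20 watts
current = float("-2.000E+00")   # source current: -2 amps (SPICE's sign quirk)

print(power)          # 20.0
print(abs(current))   # 2.0 -- only the magnitude matters here
```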
One of the reasons I like to teach SPICE is that it prepares the learner to think and work like a computer programmer, which is good because computer programming is a significant area of advanced electronics work. Earlier we explored the consequences of changing one of the three variables in an electric circuit (voltage, current, or resistance) using Ohm’s Law to mathematically predict what would happen. Now let’s try the same thing using SPICE to do the math for us. If we were to triple the voltage in our last example circuit from 10 to 30 volts and keep the circuit resistance unchanged, we would expect the current to triple as well. Let’s try this, re-naming our netlist file so as to not over-write the first file. This way, we will have both versions of the circuit simulation stored on the hard drive of our computer for future use. The following text listing is the output of SPICE for this modified netlist, formatted as plain text rather than as a graphic image of my computer screen: Just as we expected, the current tripled with the voltage increase. Current used to be 2 amps, but now it has increased to 6 amps (-6.000 x 10^0). Note also how the total power dissipation in the circuit has increased. It was 20 watts before, but now is 180 watts (1.8 x 10^2). Recalling that power is related to the square of the voltage (Joule’s Law: P=E^2/R), this makes sense. If we triple the circuit voltage, the power should increase by a factor of nine (3^2 = 9). Nine times 20 is indeed 180, so SPICE’s output does indeed correlate with what we know about power in electric circuits. If we want to see how this simple circuit would respond over a wide range of battery voltages, we can invoke some of the more advanced options within SPICE. Here, I’ll use the “.dc” analysis option to vary the battery voltage from 0 to 100 volts in 5 volt increments, printing out the circuit voltage and current at every step. The lines in the SPICE netlist beginning with a star symbol (”*”) are comments.
That is, they don’t tell the computer to do anything relating to circuit analysis, but merely serve as notes for any human being reading the netlist text. The .print command in this SPICE netlist instructs SPICE to print columns of numbers corresponding to each step in the analysis: If I re-edit the netlist file, changing the .print command into a .plot command, SPICE will output a crude graph made up of text characters: In both output formats, the left-hand column of numbers represents the battery voltage at each interval, as it increases from 0 volts to 100 volts, 5 volts at a time. The numbers in the right-hand column indicate the circuit current for each of those voltages. Look closely at those numbers and you’ll see the proportional relationship between each pair: Ohm’s Law (I=E/R) holds true in each and every case, each current value being 1/5 the respective voltage value, because the circuit resistance is exactly 5 Ω. Again, the negative numbers for current in this SPICE analysis are more of a quirk than anything else. Just pay attention to the absolute value of each number unless otherwise specified. There are even some computer programs able to interpret and convert the non-graphical data output by SPICE into a graphical plot. One of these programs is called Nutmeg, and its output looks something like this: Note how Nutmeg plots the resistor voltage v(1) (voltage between node 1 and the implied reference point of node 0) as a line with a positive slope (from lower-left to upper-right). Whether or not you ever become proficient at using SPICE is not relevant to its application in this book. All that matters is that you develop an understanding for what the numbers mean in a SPICE-generated report. In the examples to come, I’ll do my best to annotate the numerical results of SPICE to eliminate any confusion, and unlock the power of this amazing tool to help you understand the behavior of electric circuits.
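The arithmetic behind that .dc sweep is easy to verify by hand. A minimal Python sketch of the same sequence of calculations (0 to 100 volts in 5 volt steps through the 5 Ω resistance of the example circuit):

```python
# Replicating the .dc sweep's arithmetic with plain Ohm's Law: I = E / R at each step.

resistance = 5.0   # ohms

table = []
for volts in range(0, 105, 5):       # 0, 5, 10, ... 100 volts
    amps = volts / resistance        # each current value is 1/5 its voltage value
    table.append((volts, amps))
    print(f"{volts:5d} V  ->  {amps:6.2f} A")

# Spot checks against the earlier runs: 10 V gave 2 A; the tripled 30 V source
# gave 6 A, and P = E^2/R = 30**2 / 5 = 180 W (nine times the original 20 W).
```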
• 3.1: The Importance of Electrical Safety
• 3.2: Physiological Effects of Electricity Most of us have experienced some form of electric “shock,” where electricity causes our body to experience pain or trauma. If we are fortunate, the extent of that experience is limited to tingles or jolts of pain from static electricity buildup discharging through our bodies. When we are working around electric circuits capable of delivering high power to loads, electric shock becomes a much more serious issue, and pain is the least significant result of shock.
• 3.3: Shock Current Path As we’ve already learned, electricity requires a complete path (circuit) to continuously flow. This is why the shock received from static electricity is only a momentary jolt: the flow of electrons is necessarily brief when static charges are equalized between two objects. Shocks of self-limited duration like this are rarely hazardous.
• 3.4: Ohm’s Law (again) A common phrase heard in reference to electrical safety goes something like this: “It’s not voltage that kills, it’s current!” While there is an element of truth to this, there’s more to understand about shock hazard than this simple adage. If voltage presented no danger, no one would ever print and display signs saying: DANGER—HIGH VOLTAGE!
• 3.5: Safe Practices If at all possible, shut off the power to a circuit before performing any work on it. You must secure all sources of harmful energy before a system may be considered safe to work on. In industry, securing a circuit, device, or system in this condition is commonly known as placing it in a Zero Energy State. The focus of this lesson is, of course, electrical safety. However, many of these principles apply to non-electrical systems as well.
• 3.6: Emergency Response Despite lock-out/tag-out procedures and multiple repetitions of electrical safety rules in industry, accidents still do occur. The vast majority of the time, these accidents are the result of not following proper safety procedures. But however they may occur, they still do happen, and anyone working around electrical systems should be aware of what needs to be done for a victim of electrical shock.
• 3.7: Common Sources of Hazard Of course there is danger of electrical shock when directly performing manual work on an electrical power system. However, electric shock hazards exist in many other places, thanks to the widespread use of electric power in our lives. As we saw earlier, skin and body resistance has a lot to do with the relative hazard of electric circuits. The higher the body’s resistance, the less likely harmful current will result from any given amount of voltage. Conversely, the lower the body’s resistance, the more likely harmful current will result from any given amount of voltage.
• 3.8: Safe Circuit Design As we saw earlier, a power system with no secure connection to earth ground is unpredictable from a safety perspective: there’s no way to guarantee how much or how little voltage will exist between any point in the circuit and earth ground. By grounding one side of the power system’s voltage source, at least one point in the circuit can be assured to be electrically common with the earth and therefore present no shock hazard. In a simple two-wire electrical power system, the conductor connected
• 3.9: Safe Meter Usage Using an electrical meter safely and efficiently is perhaps the most valuable skill an electronics technician can master, both for the sake of their own personal safety and for proficiency at their trade. It can be daunting at first to use a meter, knowing that you are connecting it to live circuits which may harbor life-threatening levels of voltage and current. This concern is not unfounded, and it is always best to proceed cautiously when using meters. Carelessness more than any other factor
• 3.10: Electric Shock Data The table of electric currents and their various bodily effects was obtained from online (Internet) sources: the safety page of Massachusetts Institute of Technology (website: [*]), and a safety handbook published by Cooper Bussmann, Inc (website: [*]). In the Bussmann handbook, the table is appropriately entitled Deleterious Effects of Electric Shock, and credited to a Mr. Charles F. Dalziel. Further research revealed Dalziel to be both a scientific pioneer and an authority on the effects of electricity.

03: Electrical Safety

With this lesson, I hope to avoid a common mistake found in electronics textbooks of either ignoring or not covering with sufficient detail the subject of electrical safety. I assume that whoever reads this book has at least a passing interest in actually working with electricity, and as such the topic of safety is of paramount importance. Those authors, editors, and publishers who fail to incorporate this subject into their introductory texts are depriving the reader of life-saving information. As an instructor of industrial electronics, I spend a full week with my students reviewing the theoretical and practical aspects of electrical safety. The same textbooks I found lacking in technical clarity I also found lacking in coverage of electrical safety, hence the creation of this chapter. Its placement after the first two chapters is intentional: in order for the concepts of electrical safety to make the most sense, some foundational knowledge of electricity is necessary. Another benefit of including a detailed lesson on electrical safety is the practical context it sets for basic concepts of voltage, current, resistance, and circuit design. The more relevant a technical topic can be made, the more likely a student will be to pay attention and comprehend. And what could be more relevant than application to your own personal safety?
Also, with electrical power being such an everyday presence in modern life, almost anyone can relate to the illustrations given in such a lesson. Have you ever wondered why birds don’t get shocked while resting on power lines? Read on and find out!
As electric current is conducted through a material, any opposition to that flow of electrons (resistance) results in a dissipation of energy, usually in the form of heat. This is the most basic and easy-to-understand effect of electricity on living tissue: current makes it heat up. If the amount of heat generated is sufficient, the tissue may be burnt. The effect is physiologically the same as damage caused by an open flame or other high-temperature source of heat, except that electricity has the ability to burn tissue well beneath the skin of a victim, even burning internal organs. Another effect of electric current on the body, perhaps the most significant in terms of hazard, regards the nervous system. By “nervous system” I mean the network of special cells in the body called “nerve cells” or “neurons” which process and conduct the multitude of signals responsible for regulation of many body functions. The brain, spinal cord, and sensory/motor organs in the body function together to allow it to sense, move, respond, think, and remember. Nerve cells communicate to each other by acting as “transducers:” creating electrical signals (very small voltages and currents) in response to the input of certain chemical compounds called neurotransmitters, and releasing neurotransmitters when stimulated by electrical signals. If electric current of sufficient magnitude is conducted through a living creature (human or otherwise), its effect will be to override the tiny electrical impulses normally generated by the neurons, overloading the nervous system and preventing both reflex and volitional signals from being able to actuate muscles. Muscles triggered by an external (shock) current will involuntarily contract, and there’s nothing the victim can do about it. This problem is especially dangerous if the victim contacts an energized conductor with his or her hands. 
The forearm muscles responsible for bending fingers tend to be better developed than those muscles responsible for extending fingers, and so if both sets of muscles try to contract because of an electric current conducted through the person’s arm, the “bending” muscles will win, clenching the fingers into a fist. If the conductor delivering current to the victim faces the palm of his or her hand, this clenching action will force the hand to grasp the wire firmly, thus worsening the situation by securing excellent contact with the wire. The victim will be completely unable to let go of the wire. Medically, this condition of involuntary muscle contraction is called tetanus. Electricians familiar with this effect of electric shock often refer to an immobilized victim of electric shock as being “froze on the circuit.” Shock-induced tetanus can only be interrupted by stopping the current through the victim. Even when the current is stopped, the victim may not regain voluntary control over their muscles for a while, as the neurotransmitter chemistry has been thrown into disarray. This principle has been applied in “stun gun” devices such as Tasers, which work on the principle of momentarily shocking a victim with a high-voltage pulse delivered between two electrodes. A well-placed shock has the effect of temporarily (a few minutes) immobilizing the victim. Electric current is able to affect more than just skeletal muscles in a shock victim, however. The diaphragm muscle controlling the lungs, and the heart—which is a muscle in itself—can also be “frozen” in a state of tetanus by electric current. Even currents too low to induce tetanus are often able to scramble nerve cell signals enough that the heart cannot beat properly, sending the heart into a condition known as fibrillation. A fibrillating heart flutters rather than beats, and is ineffective at pumping blood to vital organs in the body.
In any case, death from asphyxiation and/or cardiac arrest will surely result from a strong enough electric current through the body. Ironically, medical personnel use a strong jolt of electric current applied across the chest of a victim to “jump start” a fibrillating heart into a normal beating pattern. That last detail leads us into another hazard of electric shock, this one peculiar to public power systems. Though our initial study of electric circuits will focus almost exclusively on DC (Direct Current, or electricity that moves in a continuous direction in a circuit), modern power systems utilize alternating current, or AC. The technical reasons for this preference of AC over DC in power systems are irrelevant to this discussion, but the special hazards of each kind of electrical power are very important to the topic of safety. How AC affects the body depends largely on frequency. Low-frequency (50- to 60-Hz) AC is used in US (60 Hz) and European (50 Hz) households; it can be more dangerous than high-frequency AC and is 3 to 5 times more dangerous than DC of the same voltage and amperage. Low-frequency AC produces extended muscle contraction (tetany), which may freeze the hand to the current’s source, prolonging exposure. DC is most likely to cause a single convulsive contraction, which often forces the victim away from the current’s source. [MMOM] AC’s alternating nature has a greater tendency to throw the heart’s pacemaker neurons into a condition of fibrillation, whereas DC tends to just make the heart stand still. Once the shock current is halted, a “frozen” heart has a better chance of regaining a normal beat pattern than a fibrillating heart. This is why “defibrillating” equipment used by emergency medics works: the jolt of current supplied by the defibrillator unit is DC, which halts fibrillation and gives the heart a chance to recover. 
In either case, electric currents high enough to cause involuntary muscle action are dangerous and are to be avoided at all costs. In the next section, we’ll take a look at how such currents typically enter and exit the body, and examine precautions against such occurrences.

Review
• Electric current is capable of producing deep and severe burns in the body due to power dissipation across the body’s electrical resistance.
• Tetanus is the condition where muscles involuntarily contract due to the passage of external electric current through the body. When involuntary contraction of muscles controlling the fingers causes a victim to be unable to let go of an energized conductor, the victim is said to be “froze on the circuit.”
• Diaphragm (lung) and heart muscles are similarly affected by electric current. Even currents too small to induce tetanus can be strong enough to interfere with the heart’s pacemaker neurons, causing the heart to flutter instead of strongly beat.
• Low-frequency alternating current (AC) is more likely to cause muscle tetanus than direct current (DC), making AC more likely to “freeze” a victim in a shock scenario. AC is also more likely to cause a victim’s heart to fibrillate, which is a more dangerous condition for the victim after the shocking current has been halted.
Without two contact points on the body for current to enter and exit, respectively, there is no hazard of shock. This is why birds can safely rest on high-voltage power lines without getting shocked: they make contact with the circuit at only one point. In order for electrons to flow through a conductor, there must be a voltage present to motivate them. Voltage, as you should recall, is always relative between two points. There is no such thing as voltage “on” or “at” a single point in the circuit, and so the bird contacting a single point in the above circuit has no voltage applied across its body to establish a current through it. Yes, even though they rest on two feet, both feet are touching the same wire, making them electrically common. Electrically speaking, both of the bird’s feet touch the same point, hence there is no voltage between them to motivate current through the bird’s body. This might lead one to believe that it’s impossible to be shocked by electricity by only touching a single wire. Like the birds, if we’re sure to touch only one wire at a time, we’ll be safe, right? Unfortunately, this is not correct. Unlike birds, people are usually standing on the ground when they contact a “live” wire. Many times, one side of a power system will be intentionally connected to earth ground, and so the person touching a single wire is actually making contact between two points in the circuit (the wire and earth ground): The ground symbol is that set of three horizontal bars of decreasing width located at the lower-left of the circuit shown, and also at the foot of the person being shocked. In real life the power system ground consists of some kind of metallic conductor buried deep in the ground for making maximum contact with the earth. That conductor is electrically connected to an appropriate connection point on the circuit with thick wire. The victim’s ground connection is through their feet, which are touching the earth.
A few questions usually arise at this point in the mind of the student:
• If the presence of a ground point in the circuit provides an easy point of contact for someone to get shocked, why have it in the circuit at all? Wouldn’t a ground-less circuit be safer?
• The person getting shocked probably isn’t bare-footed. If rubber and fabric are insulating materials, then why aren’t their shoes protecting them by preventing a circuit from forming?
• How good of a conductor can dirt be? If you can get shocked by current through the earth, why not use the earth as a conductor in our power circuits?
In answer to the first question, the presence of an intentional “grounding” point in an electric circuit is intended to ensure that one side of it is safe to come in contact with. Note that if our victim in the above diagram were to touch the bottom side of the resistor, nothing would happen even though their feet would still be contacting ground: Because the bottom side of the circuit is firmly connected to ground through the grounding point on the lower-left of the circuit, the lower conductor of the circuit is made electrically common with earth ground. Since there can be no voltage between electrically common points, there will be no voltage applied across the person contacting the lower wire, and they will not receive a shock. For the same reason, the wire connecting the circuit to the grounding rod/plates is usually left bare (no insulation), so that any metal object it brushes up against will similarly be electrically common with the earth. Circuit grounding ensures that at least one point in the circuit will be safe to touch. But what about leaving a circuit completely ungrounded? Wouldn’t that make any person touching just a single wire as safe as the bird sitting on just one? Ideally, yes. Practically, no.
Observe what happens with no ground at all: Despite the fact that the person’s feet are still contacting ground, any single point in the circuit should be safe to touch. Since there is no complete path (circuit) formed through the person’s body from the bottom side of the voltage source to the top, there is no way for a current to be established through the person. However, this could all change with an accidental ground, such as a tree branch touching a power line and providing connection to earth ground: Such an accidental connection between a power system conductor and the earth (ground) is called a ground fault. Ground faults may be caused by many things, including dirt buildup on power line insulators (creating a dirty-water path for current from the conductor to the pole, and to the ground, when it rains), ground water infiltration in buried power line conductors, and birds landing on power lines, bridging the line to the pole with their wings. Given the many causes of ground faults, they tend to be unpredictable. In the case of trees, no one can guarantee which wire their branches might touch. If a tree were to brush up against the top wire in the circuit, it would make the top wire safe to touch and the bottom one dangerous—just the opposite of the previous scenario where the tree contacts the bottom wire: With a tree branch contacting the top wire, that wire becomes the grounded conductor in the circuit, electrically common with earth ground. Therefore, there is no voltage between that wire and ground, but full (high) voltage between the bottom wire and ground. As mentioned previously, tree branches are only one potential source of ground faults in a power system.
Consider an ungrounded power system with no trees in contact, but this time with two people touching single wires: With each person standing on the ground, contacting different points in the circuit, a path for shock current is made through one person, through the earth, and through the other person. Even though each person thinks they’re safe in only touching a single point in the circuit, their combined actions create a deadly scenario. In effect, one person acts as the ground fault which makes it unsafe for the other person. This is exactly why ungrounded power systems are dangerous: the voltage between any point in the circuit and ground (earth) is unpredictable, because a ground fault could appear at any point in the circuit at any time. The only character guaranteed to be safe in these scenarios is the bird, who has no connection to earth ground at all! By firmly connecting a designated point in the circuit to earth ground (“grounding” the circuit), at least safety can be assured at that one point. This is more assurance of safety than having no ground connection at all. In answer to the second question, rubber-soled shoes do indeed provide some electrical insulation to help protect someone from conducting shock current through their feet. However, most common shoe designs are not intended to be electrically “safe,” their soles being too thin and not of the right substance. Also, any moisture, dirt, or conductive salts from body sweat on the surface of or permeated through the soles of shoes will compromise what little insulating value the shoe had to begin with. There are shoes specifically made for dangerous electrical work, as well as thick rubber mats made to stand on while working on live circuits, but these special pieces of gear must be in absolutely clean, dry condition in order to be effective. Suffice it to say, normal footwear is not enough to guarantee protection against electric shock from a power system. 
Research conducted on contact resistance between parts of the human body and points of contact (such as the ground) shows a wide range of figures (see end of chapter for information on the source of this data):
• Hand or foot contact, insulated with rubber: 20 MΩ typical.
• Foot contact through leather shoe sole (dry): 100 kΩ to 500 kΩ
• Foot contact through leather shoe sole (wet): 5 kΩ to 20 kΩ
As you can see, not only is rubber a far better insulating material than leather, but the presence of water in a porous substance such as leather greatly reduces electrical resistance. In answer to the third question, dirt is not a very good conductor (at least not when it’s dry!). It is too poor of a conductor to support continuous current for powering a load. However, as we will see in the next section, it takes very little current to injure or kill a human being, so even the poor conductivity of dirt is enough to provide a path for deadly current when there is sufficient voltage available, as there usually is in power systems. Some ground surfaces are better insulators than others. Asphalt, for instance, being oil-based, has a much greater resistance than most forms of dirt or rock. Concrete, on the other hand, tends to have fairly low resistance due to its intrinsic water and electrolyte (conductive chemical) content.

Review
• Electric shock can only occur when contact is made between two points of a circuit; when voltage is applied across a victim’s body.
• Power circuits usually have a designated point that is “grounded:” firmly connected to metal rods or plates buried in the dirt to ensure that one side of the circuit is always at ground potential (zero voltage between that point and earth ground).
• A ground fault is an accidental connection between a circuit conductor and the earth (ground).
• Special, insulated shoes and mats are made to protect persons from shock via ground conduction, but even these pieces of gear must be in clean, dry condition to be effective. Normal footwear is not good enough to provide protection from shock by insulating its wearer from the earth.
• Though dirt is a poor conductor, it can conduct enough current to injure or kill a human being.
The principle that “current kills” is essentially correct. It is electric current that burns tissue, freezes muscles, and fibrillates hearts. However, electric current doesn’t just occur on its own: there must be voltage available to motivate electrons to flow through a victim. A person’s body also presents resistance to current, which must be taken into account. Taking Ohm’s Law for voltage, current, and resistance, and expressing it in terms of current for a given voltage and resistance, we have this equation:
I = E/R
The amount of current through a body is equal to the amount of voltage applied between two points on that body, divided by the electrical resistance offered by the body between those two points. Obviously, the more voltage available to cause electrons to flow, the easier they will flow through any given amount of resistance. Hence, the danger of high voltage: high voltage means potential for large amounts of current through your body, which will injure or kill you. Conversely, the more resistance a body offers to current, the slower electrons will flow for any given amount of voltage. Just how much voltage is dangerous depends on how much total resistance is in the circuit to oppose the flow of electrons. Body resistance is not a fixed quantity. It varies from person to person and from time to time. There’s even a body fat measurement technique based on a measurement of electrical resistance between a person’s toes and fingers. Differing percentages of body fat provide different resistances: just one variable affecting electrical resistance in the human body. In order for the technique to work accurately, the person must regulate their fluid intake for several hours prior to the test, indicating that body hydration is another factor impacting the body’s electrical resistance. Body resistance also varies depending on how contact is made with the skin: is it from hand-to-hand, hand-to-foot, foot-to-foot, hand-to-elbow, etc.?
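The relationship just stated can be sketched as a short calculation. This is an illustrative sketch only: the 120-volt source and the helper function name are my own assumptions, while the two resistance figures anticipate the hand-to-hand measurements discussed shortly.

```python
def body_current_ma(voltage_volts, resistance_ohms):
    """Ohm's Law solved for current (I = E / R), returned in
    milliamps for comparison against physiological thresholds."""
    return voltage_volts / resistance_ohms * 1000

# Same assumed 120 V source, two different body resistances:
print(body_current_ma(120, 1_000_000))  # clean, dry skin, 1 MOhm
print(body_current_ma(120, 17_000))     # sweaty skin, 17 kOhm
```

Note how the same voltage drives roughly sixty times more current when the resistance drops, which is the essence of the danger described above.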
Sweat, being rich in salts and minerals, is an excellent conductor of electricity for being a liquid. So is blood, with its similarly high content of conductive chemicals. Thus, contact with a wire made by a sweaty hand or open wound will offer much less resistance to current than contact made by clean, dry skin. Measuring electrical resistance with a sensitive meter, I measure approximately 1 million ohms of resistance (1 MΩ) between my two hands, holding on to the meter’s metal probes between my fingers. The meter indicates less resistance when I squeeze the probes tightly and more resistance when I hold them loosely. Sitting here at my computer, typing these words, my hands are clean and dry. If I were working in some hot, dirty, industrial environment, the resistance between my hands would likely be much less, presenting less opposition to deadly current, and a greater threat of electrical shock. But how much current is harmful? The answer to that question also depends on several factors. Individual body chemistry has a significant impact on how electric current affects an individual. Some people are highly sensitive to current, experiencing involuntary muscle contraction with shocks from static electricity. Others can draw large sparks from discharging static electricity and hardly feel it, much less experience a muscle spasm. Despite these differences, approximate guidelines have been developed through tests which indicate very little current being necessary to manifest harmful effects (again, see the end of the chapter for information on the source of this data). All current figures given in milliamps (a milliamp is equal to 1/1000 of an amp):
[Table: the effects of electric current on the body]
“Hz” stands for the unit of Hertz, the measure of how rapidly alternating current alternates, a measure otherwise known as frequency.
So, the column of figures labeled “60 Hz AC” refers to current that alternates at a frequency of 60 cycles (1 cycle = period of time where electrons flow one direction, then the other direction) per second. The last column, labeled “10 kHz AC,” refers to alternating current that completes ten thousand (10,000) back-and-forth cycles each and every second. Keep in mind that these figures are only approximate, as individuals with different body chemistry may react differently. It has been suggested that an across-the-chest current of only 17 milliamps AC is enough to induce fibrillation in a human subject under certain conditions. Most of our data regarding induced fibrillation comes from animal testing. Obviously, it is not practical to perform tests of induced ventricular fibrillation on human subjects, so the available data is sketchy. Oh, and in case you’re wondering, I have no idea why women tend to be more susceptible to electric currents than men! Suppose I were to place my two hands across the terminals of an AC voltage source at 60 Hz (60 cycles, or alternations back-and-forth, per second). How much voltage would be necessary in this clean, dry state of skin condition to produce a current of 20 milliamps (enough to cause me to become unable to let go of the voltage source)? We can use Ohm’s Law (E=IR) to determine this:
E = IR
E = (20 mA)(1 MΩ)
E = 20,000 volts, or 20 kV
Bear in mind that this is a “best case” scenario (clean, dry skin) from the standpoint of electrical safety, and that this figure for voltage represents the amount necessary to induce tetanus. Far less would be required to cause a painful shock!
Also keep in mind that the physiological effects of any particular amount of current can vary significantly from person to person, and that these calculations are rough estimates only. With water sprinkled on my fingers to simulate sweat, I was able to measure a hand-to-hand resistance of only 17,000 ohms (17 kΩ). Bear in mind this is only with one finger of each hand contacting a thin metal wire. Recalculating the voltage required to cause a current of 20 milliamps, we obtain this figure:
E = IR
E = (20 mA)(17 kΩ)
E = 340 volts
In this realistic condition, it would only take 340 volts of potential from one of my hands to the other to cause 20 milliamps of current. However, it is still possible to receive a deadly shock from less voltage than this. With a much lower body resistance, augmented by contact with a ring (a band of gold wrapped around the circumference of one’s finger makes an excellent contact point for electrical shock) or full contact with a large metal object such as a pipe or metal handle of a tool, the resistance figure could drop as low as 1,000 ohms (1 kΩ), allowing an even lower voltage to present a potential hazard:
E = IR
E = (20 mA)(1 kΩ)
E = 20 volts
Notice that in this condition, 20 volts is enough to produce a current of 20 milliamps through a person: enough to induce tetanus. Remember, it has been suggested a current of only 17 milliamps may induce ventricular (heart) fibrillation. With a hand-to-hand resistance of 1000 Ω, it would only take 17 volts to create this dangerous condition:
E = IR
E = (17 mA)(1 kΩ)
E = 17 volts
Seventeen volts is not very much as far as electrical systems are concerned. Granted, this is a “worst-case” scenario with 60 Hz AC voltage and excellent bodily conductivity, but it goes to show how little voltage may present a serious threat under certain conditions.
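The Ohm's Law calculations above can be replicated in a few lines of code. The current and resistance values are the same figures used in the text; only the helper function name is my own:

```python
def required_voltage(current_amps, resistance_ohms):
    # Ohm's Law: E = I * R
    return current_amps * resistance_ohms

# Clean, dry skin (1 MOhm) at a 20 mA tetanus-level current:
print(required_voltage(0.020, 1_000_000))  # about 20,000 volts (20 kV)

# Sweaty hands (17 kOhm):
print(required_voltage(0.020, 17_000))     # about 340 volts

# Ring or large metal-object contact (1 kOhm):
print(required_voltage(0.020, 1_000))      # about 20 volts

# Suggested 17 mA fibrillation threshold at 1 kOhm:
print(required_voltage(0.017, 1_000))      # about 17 volts
```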
The conditions necessary to produce 1,000 Ω of body resistance don’t have to be as extreme as what was presented, either (sweaty skin with contact made on a gold ring). Body resistance may decrease with the application of voltage (especially if tetanus causes the victim to maintain a tighter grip on a conductor) so that with constant voltage a shock may increase in severity after initial contact. What begins as a mild shock—just enough to “freeze” a victim so they can’t let go—may escalate into something severe enough to kill them as their body resistance decreases and current correspondingly increases. Research has provided an approximate set of figures for electrical resistance of human contact points under different conditions (see the end of the chapter for information on the source of this data):
• Wire touched by finger: 40,000 Ω to 1,000,000 Ω dry, 4,000 Ω to 15,000 Ω wet.
• Wire held by hand: 15,000 Ω to 50,000 Ω dry, 3,000 Ω to 5,000 Ω wet.
• Metal pliers held by hand: 5,000 Ω to 10,000 Ω dry, 1,000 Ω to 3,000 Ω wet.
• Contact with palm of hand: 3,000 Ω to 8,000 Ω dry, 1,000 Ω to 2,000 Ω wet.
• 1.5 inch metal pipe grasped by one hand: 1,000 Ω to 3,000 Ω dry, 500 Ω to 1,500 Ω wet.
• 1.5 inch metal pipe grasped by two hands: 500 Ω to 1,500 Ω dry, 250 Ω to 750 Ω wet.
• Hand immersed in conductive liquid: 200 Ω to 500 Ω.
• Foot immersed in conductive liquid: 100 Ω to 300 Ω.
Note the resistance values of the two conditions involving a 1.5 inch metal pipe. The resistance measured with two hands grasping the pipe is exactly one-half the resistance of one hand grasping the pipe. With two hands, the bodily contact area is twice as great as with one hand. This is an important lesson to learn: electrical resistance between any contacting objects diminishes with increased contact area, all other factors being equal. With two hands holding the pipe, electrons have two, parallel routes through which to flow from the pipe to the body (or vice-versa).
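The halving effect can be checked with the standard formula for two resistances in parallel (a topic covered in a later chapter). The function name is mine; the 1,000 Ω value is the low end of the one-hand range listed above:

```python
def parallel(r1, r2):
    # Two parallel current paths combine as 1/R_total = 1/R1 + 1/R2
    return 1 / (1 / r1 + 1 / r2)

# One dry hand grasping the pipe: about 1,000 ohms.
# Two identical hands provide two parallel paths into the body,
# halving the total resistance, just as the measured data shows:
print(parallel(1000, 1000))
```

For equal resistances the parallel combination is always exactly half of either one; for unequal resistances it is always less than the smaller of the two.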
As we will see in a later chapter, parallel circuit pathways always result in less overall resistance than any single pathway considered alone. In industry, 30 volts is generally considered to be a conservative threshold value for dangerous voltage. The cautious person should regard any voltage above 30 volts as threatening, not relying on normal body resistance for protection against shock. That being said, it is still an excellent idea to keep one’s hands clean and dry, and remove all metal jewelry when working around electricity. Even around lower voltages, metal jewelry can present a hazard by conducting enough current to burn the skin if brought into contact between two points in a circuit. Metal rings, especially, have been the cause of more than a few burnt fingers by bridging between points in a low-voltage, high-current circuit. Also, voltages lower than 30 can be dangerous if they are enough to induce an unpleasant sensation, which may cause you to jerk and accidentally come into contact with a higher voltage or some other hazard. I recall once working on an automobile on a hot summer day. I was wearing shorts, my bare leg contacting the chrome bumper of the vehicle as I tightened battery connections. When I touched my metal wrench to the positive (ungrounded) side of the 12-volt battery, I could feel a tingling sensation at the point where my leg was touching the bumper. The combination of firm contact with metal and my sweaty skin made it possible to feel a shock with only 12 volts of electrical potential. Thankfully, nothing bad happened, but had the engine been running and the shock felt at my hand instead of my leg, I might have reflexively jerked my arm into the path of the rotating fan, or dropped the metal wrench across the battery terminals (producing large amounts of current through the wrench with lots of accompanying sparks). 
This illustrates another important lesson regarding electrical safety: electric current itself may be an indirect cause of injury by causing you to jump or spasm parts of your body into harm’s way. The path current takes through the human body makes a difference as to how harmful it is. Current will affect whatever muscles are in its path, and since the heart and lung (diaphragm) muscles are probably the most critical to one’s survival, shock paths traversing the chest are the most dangerous. This makes the hand-to-hand shock current path a very likely mode of injury and fatality. To guard against such an occurrence, it is advisable to only use one hand to work on live circuits of hazardous voltage, keeping the other hand tucked into a pocket so as to not accidentally touch anything. Of course, it is always safer to work on a circuit when it is unpowered, but this is not always practical or possible. For one-handed work, the right hand is generally preferred over the left for two reasons: most people are right-handed (thus granting additional coordination when working), and the heart is usually situated to the left of center in the chest cavity. For those who are left-handed, this advice may not be the best. If such a person is sufficiently uncoordinated with their right hand, they may be placing themselves in greater danger by using the hand they’re least comfortable with, even if shock current through that hand might present more of a hazard to their heart. The relative hazard between shock through one hand or the other is probably less than the hazard of working with less than optimal coordination, so the choice of which hand to work with is best left to the individual. The best protection against shock from a live circuit is resistance, and resistance can be added to the body through the use of insulated tools, gloves, boots, and other gear. Current in a circuit is a function of available voltage divided by the total resistance in the path of the flow.
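The protective effect of insulated gear comes from resistances in series adding together. As a rough numeric sketch: the 20 MΩ rubber-contact figure comes from the contact-resistance data earlier in this chapter, while the body resistance and source voltage are assumed example values, not measurements from the text.

```python
r_glove = 20_000_000   # ohms, rubber-insulated hand contact (chapter data)
r_body  = 1_000_000    # ohms, assumed dry hand-to-foot body resistance
r_boot  = 20_000_000   # ohms, rubber-insulated foot contact (chapter data)

# Series resistances add, giving one total opposition to current:
r_total = r_glove + r_body + r_boot

# Assume a 500 V source across the whole glove-body-boot path:
current_ma = 500 / r_total * 1000

print(r_total)     # 41000000 ohms total
print(current_ma)  # roughly 0.012 mA, far below harmful levels
```

Even a generous allowance for a high-voltage source leaves the current orders of magnitude below the milliamp levels associated with perception, let alone tetanus.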
As we will investigate in greater detail later in this book, resistances have an additive effect when they’re stacked up so that there’s only one path for electrons to flow: Now we’ll see an equivalent circuit for a person wearing insulated gloves and boots: Because electric current must pass through the boot and the body and the glove to complete its circuit back to the battery, the combined total (sum) of these resistances opposes the flow of electrons to a greater degree than any of the resistances considered individually. Safety is one of the reasons electrical wires are usually covered with plastic or rubber insulation: to vastly increase the amount of resistance between the conductor and whoever or whatever might contact it. Unfortunately, it would be prohibitively expensive to enclose power line conductors in sufficient insulation to provide safety in case of accidental contact, so safety is maintained by keeping those lines far enough out of reach so that no one can accidentally touch them.

Review
• Harm to the body is a function of the amount of shock current. Higher voltage allows for the production of higher, more dangerous currents. Resistance opposes current, making high resistance a good protective measure against shock.
• Any voltage above 30 is generally considered to be capable of delivering dangerous shock currents.
• Metal jewelry is definitely bad to wear when working around electric circuits. Rings, watchbands, necklaces, bracelets, and other such adornments provide excellent electrical contact with your body, and can conduct current themselves enough to produce skin burns, even with low voltages.
• Low voltages can still be dangerous even if they’re too low to directly cause shock injury. They may be enough to startle the victim, causing them to jerk back and contact something more dangerous in the near vicinity.
• When necessary to work on a “live” circuit, it is best to perform the work with one hand so as to prevent a deadly hand-to-hand (through the chest) shock current path.
Securing something in a Zero Energy State means ridding it of any sort of potential or stored energy, including but not limited to:
• Dangerous voltage
• Spring pressure
• Hydraulic (liquid) pressure
• Pneumatic (air) pressure
• Suspended weight
• Chemical energy (flammable or otherwise reactive substances)
• Nuclear energy (radioactive or fissile substances)
Voltage by its very nature is a manifestation of potential energy. In the first chapter I even used elevated liquid as an analogy for the potential energy of voltage, having the capacity (potential) to produce current (flow), but not necessarily realizing that potential until a suitable path for flow has been established, and resistance to flow is overcome. A pair of wires with high voltage between them do not look or sound dangerous even though they harbor enough potential energy between them to push deadly amounts of current through your body. Even though that voltage isn’t presently doing anything, it has the potential to, and that potential must be neutralized before it is safe to physically contact those wires. All properly designed circuits have “disconnect” switch mechanisms for securing voltage from a circuit. Sometimes these “disconnects” serve a dual purpose of automatically opening under excessive current conditions, in which case we call them “circuit breakers.” Other times, the disconnecting switches are strictly manually-operated devices with no automatic function. In either case, they are there for your protection and must be used properly. Please note that the disconnect device should be separate from the regular switch used to turn the device on and off. It is a safety switch, to be used only for securing the system in a Zero Energy State: With the disconnect switch in the “open” position as shown (no continuity), the circuit is broken and no current will exist.
There will be zero voltage across the load, and the full voltage of the source will be dropped across the open contacts of the disconnect switch. Note how there is no need for a disconnect switch in the lower conductor of the circuit. Because that side of the circuit is firmly connected to the earth (ground), it is electrically common with the earth and is best left that way. For maximum safety of personnel working on the load of this circuit, a temporary ground connection could be established on the top side of the load, to ensure that no voltage could ever be dropped across the load: With the temporary ground connection in place, both sides of the load wiring are connected to ground, securing a Zero Energy State at the load. Since a ground connection made on both sides of the load is electrically equivalent to short-circuiting across the load with a wire, that is another way of accomplishing the same goal of maximum safety: Either way, both sides of the load will be electrically common to the earth, allowing for no voltage (potential energy) between either side of the load and the ground people stand on. This technique of temporarily grounding conductors in a de-energized power system is very common in maintenance work performed on high voltage power distribution systems. A further benefit of this precaution is protection against the possibility of the disconnect switch being closed (turned “on” so that circuit continuity is established) while people are still contacting the load. The temporary wire connected across the load would create a short-circuit when the disconnect switch was closed, immediately tripping any overcurrent protection devices (circuit breakers or fuses) in the circuit, which would shut the power off again. Damage may very well be sustained by the disconnect switch if this were to happen, but the workers at the load are kept safe. 
It would be good to mention at this point that overcurrent devices are not intended to provide protection against electric shock. Rather, they exist solely to protect conductors from overheating due to excessive currents. The temporary shorting wires just described would indeed cause any overcurrent devices in the circuit to “trip” if the disconnect switch were to be closed, but realize that electric shock protection is not the intended function of those devices. Their primary function would merely be leveraged for the purpose of worker protection with the shorting wire in place. Since it is obviously important to be able to secure any disconnecting devices in the open (off) position and make sure they stay that way while work is being done on the circuit, there is a need for a structured safety system to be put into place. Such a system is commonly used in industry and it is called Lock-out/Tag-out. A lock-out/tag-out procedure works like this: all individuals working on a secured circuit have their own personal padlock which they set on the control lever of a disconnect device prior to working on the system. Additionally, they must fill out and sign a tag which they hang from their lock describing the nature and duration of the work they intend to perform on the system. If there are multiple sources of energy to be “locked out” (multiple disconnects, both electrical and mechanical energy sources to be secured, etc.), the worker must use as many of his or her locks as necessary to secure power from the system before work begins. This way, the system is maintained in a Zero Energy State until every last lock is removed from all the disconnect and shutoff devices, and that means every last worker gives consent by removing their own personal locks. If the decision is made to re-energize the system and one person’s lock(s) still remain in place after everyone present removes theirs, the tag(s) will show who that person is and what it is they’re doing. 
Even with a good lock-out/tag-out safety program in place, there is still a need for diligence and common-sense precaution. This is especially true in industrial settings where a multitude of people may be working on a device or system at once. Some of those people might not know about proper lock-out/tag-out procedure, or might know about it but are too complacent to follow it. Don’t assume that everyone has followed the safety rules! After an electrical system has been locked out and tagged with your own personal lock, you must then double-check to see if the voltage really has been secured in a zero state. One way to check is to see if the machine (or whatever it is that’s being worked on) will start up if the Start switch or button is actuated. If it starts, then you know you haven’t successfully secured the electrical power from it. Additionally, you should always check for the presence of dangerous voltage with a measuring device before actually touching any conductors in the circuit. To be safest, you should follow this procedure of checking, using, and then checking your meter: • Check to see that your meter indicates properly on a known source of voltage. • Use your meter to test the locked-out circuit for any dangerous voltage. • Check your meter once more on a known source of voltage to see that it still indicates as it should. While this may seem excessive or even paranoid, it is a proven technique for preventing electrical shock. I once had a meter fail to indicate voltage when it should have while checking a circuit to see if it was “dead.” Had I not used other means to check for the presence of voltage, I might not be alive today to write this. There’s always the chance that your voltage meter will be defective just when you need it to check for a dangerous condition. Following these steps will help ensure that you’re never misled into a deadly situation by a broken meter. 
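The check-use-check routine above can be sketched as a short procedure. Everything here is an assumption made for illustration: `measure` stands in for a reading taken by hand with a real meter, and the 1-volt threshold is arbitrary. The point of the sketch is the logic: if the meter fails either sanity check against a known source, the circuit test in between proves nothing.

```python
# Sketch of the three-step check-use-check meter procedure (illustrative).
def verify_zero_energy(measure, known_live_point, circuit_points, threshold=1.0):
    # 1. Check that the meter indicates properly on a known source of voltage.
    if measure(known_live_point) < threshold:
        raise RuntimeError("meter failed pre-check: do not trust it")
    # 2. Use the meter to test the locked-out circuit for dangerous voltage.
    live = [p for p in circuit_points if measure(p) >= threshold]
    # 3. Check the meter once more; if it died mid-test, step 2 proved nothing.
    if measure(known_live_point) < threshold:
        raise RuntimeError("meter failed post-check: repeat with a good meter")
    return live  # an empty list means no dangerous voltage was found
```

Note that the post-check guards against exactly the failure described in the anecdote: a meter that stops indicating voltage partway through the job.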
Finally, the electrical worker will arrive at a point in the safety check procedure where it is deemed safe to actually touch the conductor(s). Bear in mind that after all of the precautionary steps have been taken, it is still possible (although very unlikely) that a dangerous voltage may be present. One final precautionary measure to take at this point is to make momentary contact with the conductor(s) with the back of the hand before grasping it or a metal tool in contact with it. Why? If, for some reason, there is still voltage present between that conductor and earth ground, finger motion from the shock reaction (clenching into a fist) will break contact with the conductor. Please note that this is absolutely the last step that any electrical worker should ever take before beginning work on a power system, and should never be used as an alternative method of checking for dangerous voltage. If you ever have reason to doubt the trustworthiness of your meter, use another meter to obtain a “second opinion.” Review • Zero Energy State: When a circuit, device, or system has been secured so that no potential energy exists to harm someone working on it. • Disconnect switch devices must be present in a properly designed electrical system to allow for convenient readiness of a Zero Energy State. • Temporary grounding or shorting wires may be connected to a load being serviced for extra protection to personnel working on that load. • Lock-out/Tag-out works like this: when working on a system in a Zero Energy State, the worker places a personal padlock on every energy disconnect device relevant to his or her task on that system. Also, a tag is hung on every one of those locks describing the nature and duration of the work to be done, and who is doing it. • Always verify that a circuit has been secured in a Zero Energy State with test equipment after “locking it out.” Be sure to test your meter before and after checking the circuit to verify that it is working properly. 
• When the time comes to actually make contact with the conductor(s) of a supposedly dead power system, do so first with the back of one hand, so that if a shock should occur, the muscle reaction will pull the fingers away from the conductor.
If you see someone lying unconscious or “froze on the circuit,” the very first thing to do is shut off the power by opening the appropriate disconnect switch or circuit breaker. If someone touches another person being shocked, there may be enough voltage dropped across the body of the victim to shock the would-be rescuer, thereby “freezing” two people instead of one. Don’t be a hero. Electrons don’t respect heroism. Make sure the situation is safe for you to step into, or else you will be the next victim, and nobody will benefit from your efforts. One problem with this rule is that the source of power may not be known, or easily found in time to save the victim of shock. If a shock victim’s breathing and heartbeat are paralyzed by electric current, their survival time is very limited. If the shock current is of sufficient magnitude, their flesh and internal organs may be quickly roasted by the power the current dissipates as it runs through their body. If the power disconnect switch cannot be located quickly enough, it may be possible to dislodge the victim from the circuit they’re frozen onto by prying them or hitting them away with a dry wooden board or piece of nonmetallic conduit, common items to be found on industrial construction sites. Another item that could be used to safely drag a “frozen” victim away from contact with power is an extension cord. By looping a cord around their torso and using it as a rope to pull them away from the circuit, their grip on the conductor(s) may be broken. Bear in mind that the victim will be holding on to the conductor with all their strength, so pulling them away probably won’t be easy! Once the victim has been safely disconnected from the source of electric power, the immediate medical concerns for the victim should be respiration and circulation (breathing and pulse). 
If the rescuer is trained in CPR, they should follow the appropriate steps of checking for breathing and pulse, then applying CPR as necessary to keep the victim’s body from deoxygenating. The cardinal rule of CPR is to keep going until you have been relieved by qualified personnel. If the victim is conscious, it is best to have them lie still until qualified emergency response personnel arrive on the scene. There is the possibility of the victim going into a state of physiological shock—a condition of insufficient blood circulation different from electrical shock—and so they should be kept as warm and comfortable as possible. An electrical shock insufficient to cause immediate interruption of the heartbeat may be strong enough to cause heart irregularities or a heart attack up to several hours later, so the victim should pay close attention to their own condition after the incident, ideally under supervision. Review • A person being shocked needs to be disconnected from the source of electrical power. Locate the disconnecting switch/breaker and turn it off. Alternatively, if the disconnecting device cannot be located, the victim can be pried or pulled from the circuit by an insulated object such as a dry wood board, piece of nonmetallic conduit, or rubber electrical cord. • Victims need immediate medical response: check for breathing and pulse, then apply CPR as necessary to maintain oxygenation. • If a victim is still conscious after having been shocked, they need to be closely monitored and cared for until trained emergency response personnel arrive. There is danger of physiological shock, so keep the victim warm and comfortable. • Shock victims may suffer heart trouble up to several hours after being shocked. The danger of electric shock does not end after the immediate medical attention.
The easiest way to decrease skin resistance is to get it wet. Therefore, touching electrical devices with wet hands, wet feet, or especially in a sweaty condition (salt water is a much better conductor of electricity than fresh water) is dangerous. In the household, the bathroom is one of the more likely places where wet people may contact electrical appliances, and so shock hazard is a definite threat there. Good bathroom design will locate power receptacles away from bathtubs, showers, and sinks to discourage the use of appliances nearby. Telephones that plug into a wall socket are also sources of hazardous voltage (the open circuit voltage is 48 volts DC, and the ringing signal is 150 volts AC—remember that any voltage over 30 volts is considered potentially dangerous!). Appliances such as telephones and radios should never, ever be used while sitting in a bathtub. Even battery-powered devices should be avoided. Some battery-operated devices employ voltage-increasing circuitry capable of generating lethal potentials. Swimming pools are another source of trouble, since people often operate radios and other powered appliances nearby. The National Electrical Code requires that special shock-detecting receptacles called Ground-Fault Current Interrupters (GFI or GFCI) be installed in wet and outdoor areas to help prevent shock incidents. More on these devices in a later section of this chapter. These special devices have no doubt saved many lives, but they can be no substitute for common sense and diligent precaution. As with firearms, the best “safety” is an informed and conscientious operator. Extension cords, so commonly used at home and in industry, are also sources of potential hazard. All cords should be regularly inspected for abrasion or cracking of insulation, and repaired immediately. 
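Ohm’s law makes the wet-skin hazard above concrete. The resistance figures in this sketch are round-number assumptions chosen only for illustration (skin resistance varies enormously from person to person and with contact area); they are not physiological data.

```python
# Illustrative only: skin resistance values below are assumptions, not data.
def body_current_ma(volts, body_resistance_ohms):
    """Ohm's law, I = E / R, returned in milliamps."""
    return volts / body_resistance_ohms * 1000

dry = body_current_ma(120, 100_000)  # ~1.2 mA through dry skin: a tingle
wet = body_current_ma(120, 1_000)    # ~120 mA through wet skin: can be lethal
```

Same 120-volt source, roughly a hundredfold more current once the skin is wet; that ratio, not the particular numbers, is the lesson.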
One sure method of removing a damaged cord from service is to unplug it from the receptacle, then cut off that plug (the “male” plug) with a pair of side-cutting pliers to ensure that no one can use it until it is fixed. This is important on jobsites, where many people share the same equipment, and not all people there may be aware of the hazards. Any power tool showing evidence of electrical problems should be immediately serviced as well. I’ve heard several horror stories of people who continue to work with hand tools that periodically shock them. Remember, electricity can kill, and the death it brings can be gruesome. Like extension cords, a bad power tool can be removed from service by unplugging it and cutting off the plug at the end of the cord. Downed power lines are an obvious source of electric shock hazard and should be avoided at all costs. The voltages present between power lines or between a power line and earth ground are typically very high (2400 volts being one of the lowest voltages used in residential distribution systems). If a power line is broken and the metal conductor falls to the ground, the immediate result will usually be a tremendous amount of arcing (sparks produced), often enough to dislodge chunks of concrete or asphalt from the road surface, and reports rivaling those of a rifle or shotgun. To come into direct contact with a downed power line is almost sure to cause death, but other hazards exist which are not so obvious. When a line touches the ground, current travels between that downed conductor and the nearest grounding point in the system, thus establishing a circuit: The earth, being a conductor (if only a poor one), will conduct current between the downed line and the nearest system ground point, which will be some kind of conductor buried in the ground for good contact. 
Being that the earth is a much poorer conductor of electricity than the metal cables strung along the power poles, there will be substantial voltage dropped between the point of cable contact with the ground and the grounding conductor, and little voltage dropped along the length of the cabling (the following figures are very approximate): If the distance between the two ground contact points (the downed cable and the system ground) is small, there will be substantial voltage dropped along short distances between the two points. Therefore, a person standing on the ground between those two points will be in danger of receiving an electric shock by intercepting a voltage between their two feet! Again, these voltage figures are very approximate, but they serve to illustrate a potential hazard: that a person can become a victim of electric shock from a downed power line without even coming into contact with that line! One practical precaution a person could take if they see a power line falling towards the ground is to only contact the ground at one point, either by running away (when you run, only one foot contacts the ground at any given time), or if there’s nowhere to run, by standing on one foot. Obviously, if there’s somewhere safer to run, running is the best option. By eliminating two points of contact with the ground, there will be no chance of applying deadly voltage across the body through both legs. Review • Wet conditions increase risk of electric shock by lowering skin resistance. • Immediately replace worn or damaged extension cords and power tools. You can prevent innocent use of a bad cord or tool by cutting the male plug off the cord (while it’s unplugged from the receptacle, of course). • Power lines are very dangerous and should be avoided at all costs. 
If you see a line about to hit the ground, stand on one foot or run (only one foot contacting the ground) to prevent shock from voltage dropped across the ground between the line and the system ground point.
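The “voltage between the feet” hazard described in this section lends itself to a quick numerical sketch. The text’s own figures are “very approximate,” and the resistance values below are likewise assumptions invented purely for illustration: the earth path between the downed line and the system ground is treated as a simple series voltage divider, with the person’s two feet spanning one small slice of it.

```python
# Very rough series-divider model; all resistance values are assumptions.
def voltage_between_feet(line_volts, total_earth_ohms, span_ohms):
    """Fraction of the line voltage dropped across the patch of earth
    spanned by a person's two feet."""
    return line_volts * span_ohms / total_earth_ohms

two_feet = voltage_between_feet(2400, 1000, 50)  # 120.0 V from leg to leg
one_foot = voltage_between_feet(2400, 1000, 0)   # 0.0 V: only one contact point
```

The second call shows why standing on one foot (or running, with only one foot down at a time) works: with a single contact point, the spanned resistance, and hence the intercepted voltage, drops to zero.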
As far as the voltage source and load are concerned, grounding makes no difference at all. It exists purely for the sake of personnel safety, by guaranteeing that at least one point in the circuit will be safe to touch (zero voltage to ground). The “Hot” side of the circuit, named for its potential for shock hazard, will be dangerous to touch unless voltage is secured by proper disconnection from the source (ideally, using a systematic lock-out/tag-out procedure). This imbalance of hazard between the two conductors in a simple power circuit is important to understand. The following series of illustrations are based on common household wiring systems (using DC voltage sources rather than AC for simplicity). If we take a look at a simple, household electrical appliance such as a toaster with a conductive metal case, we can see that there should be no shock hazard when it is operating properly. The wires conducting power to the toaster’s heating element are insulated from touching the metal case (and each other) by rubber or plastic. However, if one of the wires inside the toaster were to accidentally come in contact with the metal case, the case will be made electrically common to the wire, and touching the case will be just as hazardous as touching the wire bare. Whether or not this presents a shock hazard depends on which wire accidentally touches: If the “hot” wire contacts the case, it places the user of the toaster in danger. On the other hand, if the neutral wire contacts the case, there is no danger of shock: To help ensure that the former failure is less likely than the latter, engineers try to design appliances in such a way as to minimize hot conductor contact with the case. Ideally, of course, you don’t want either wire accidentally coming in contact with the conductive case of the appliance, but there are usually ways to design the layout of the parts to make accidental contact less likely for one wire than for the other. 
However, this preventative measure is effective only if power plug polarity can be guaranteed. If the plug can be reversed, then the conductor more likely to contact the case might very well be the “hot” one: Appliances designed this way usually come with “polarized” plugs, one prong of the plug being slightly narrower than the other. Power receptacles are also designed like this, one slot being narrower than the other. Consequently, the plug cannot be inserted “backwards,” and conductor identity inside the appliance can be guaranteed. Remember that this has no effect whatsoever on the basic function of the appliance: it’s strictly for the sake of user safety. Some engineers address the safety issue simply by making the outside case of the appliance nonconductive. Such appliances are called double-insulated, since the insulating case serves as a second layer of insulation above and beyond that of the conductors themselves. If a wire inside the appliance accidentally comes in contact with the case, there is no danger presented to the user of the appliance. Other engineers tackle the problem of safety by maintaining a conductive case, but using a third conductor to firmly connect that case to ground: The third prong on the power cord provides a direct electrical connection from the appliance case to earth ground, making the two points electrically common with each other. If they’re electrically common, then there cannot be any voltage dropped between them. At least, that’s how it is supposed to work. If the hot conductor accidentally touches the metal appliance case, it will create a direct short-circuit back to the voltage source through the ground wire, tripping any overcurrent protection devices. The user of the appliance will remain safe. This is why it’s so important never to cut the third prong off a power plug when trying to fit it into a two-prong receptacle. If this is done, there will be no grounding of the appliance case to keep the user(s) safe. 
The appliance will still function properly, but if there is an internal fault bringing the hot wire in contact with the case, the results can be deadly. If a two-prong receptacle must be used, a two- to three-prong receptacle adapter can be installed with a grounding wire attached to the receptacle’s grounded cover screw. This will maintain the safety of the grounded appliance while plugged in to this type of receptacle. Electrically safe engineering doesn’t necessarily end at the load, however. A final safeguard against electrical shock can be arranged on the power supply side of the circuit rather than the appliance itself. This safeguard is called ground-fault detection, and it works like this: In a properly functioning appliance (shown above), the current measured through the hot conductor should be exactly equal to the current through the neutral conductor, because there’s only one path for electrons to flow in the circuit. With no fault inside the appliance, there is no connection between circuit conductors and the person touching the case, and therefore no shock. If, however, the hot wire accidentally contacts the metal case, there will be current through the person touching the case. The presence of a shock current will be manifested as a difference of current between the two power conductors at the receptacle: This difference in current between the “hot” and “neutral” conductors will only exist if there is current through the ground connection, meaning that there is a fault in the system. Therefore, such a current difference can be used as a way to detect a fault condition. If a device is set up to measure this difference of current between the two power conductors, a detection of current imbalance can be used to trigger the opening of a disconnect switch, thus cutting power off and preventing serious shock: Such devices are called Ground Fault Current Interrupters, or GFCIs for short. 
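The GFCI decision rule described above reduces to a one-line comparison. In this sketch the 5 mA default is the commonly cited trip level for North American GFCI receptacles, used here only as an illustrative figure; the function name and interface are my own.

```python
# Sketch of the GFCI principle: trip on any meaningful hot/neutral imbalance.
def gfci_should_trip(i_hot_ma, i_neutral_ma, threshold_ma=5.0):
    """Any imbalance means current is entering or leaving the load by some
    path other than the two power conductors - possibly through a person."""
    return abs(i_hot_ma - i_neutral_ma) > threshold_ma

gfci_should_trip(10_000.0, 10_000.0)  # False: currents match, no fault
gfci_should_trip(10_000.0, 9_990.0)   # True: 10 mA is flowing somewhere else
```

Notice that the absolute load current does not matter: a 10-ampere appliance operating normally trips nothing, while a few milliamps of imbalance does.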
Outside North America, the GFCI is variously known as a safety switch, a residual current device (RCD), an RCBO or RCD/MCB if combined with a miniature circuit breaker, or an earth leakage circuit breaker (ELCB). They are compact enough to be built into a power receptacle. These receptacles are easily identified by their distinctive “Test” and “Reset” buttons. The big advantage with using this approach to ensure safety is that it works regardless of the appliance’s design. Of course, using a double-insulated or grounded appliance in addition to a GFCI receptacle would be better yet, but it’s comforting to know that something can be done to improve safety above and beyond the design and condition of the appliance. The arc fault circuit interrupter (AFCI), a circuit breaker designed to prevent fires, is designed to open on intermittent resistive short circuits. For example, a normal 15 A breaker is designed to open the circuit quickly if loaded well beyond its 15 A rating, and more slowly if loaded only a little beyond that rating. While this protects against direct shorts and several seconds of overload, respectively, it does not protect against arcs similar to those of arc welding. An arc is a highly variable load, repetitively peaking at over 70 A and open-circuiting at the alternating-current zero-crossings. Though the average current is not enough to trip a standard breaker, it is enough to start a fire. This arc could be created by a metallic short circuit which burns the metal open, leaving a resistive, sputtering plasma of ionized gases. The AFCI contains electronic circuitry to sense this intermittent resistive short circuit. It protects against both hot-to-neutral and hot-to-ground arcs. The AFCI does not protect against personal shock hazards like a GFCI does. Thus, GFCIs still need to be installed in kitchen, bathroom, and outdoor circuits. Since the AFCI often trips upon starting large motors, and more generally on brushed motors, its installation is limited to bedroom circuits by the U.S. 
National Electrical Code. Use of the AFCI should reduce the number of electrical fires. However, nuisance trips when running motor-driven appliances on AFCI circuits remain a problem. Review • Power systems often have one side of the voltage supply connected to earth ground to ensure safety at that point. • The “grounded” conductor in a power system is called the neutral conductor, while the ungrounded conductor is called the hot. • Grounding in power systems exists for the sake of personnel safety, not the operation of the load(s). • Electrical safety of an appliance or other load can be improved by good engineering: polarized plugs, double insulation, and three-prong “grounding” plugs are all ways that safety can be maximized on the load side. • Ground Fault Current Interrupters (GFCIs) work by sensing a difference in current between the two conductors supplying power to the load. There should be no difference in current at all. Any difference means that current must be entering or exiting the load by some means other than the two main conductors, which is not good. A significant current difference will automatically open a disconnecting switch mechanism, cutting power off completely.
The most common piece of electrical test equipment is a meter called the multimeter. Multimeters are so named because they have the ability to measure multiple variables: voltage, current, resistance, and often many others, some of which cannot be explained here due to their complexity. In the hands of a trained technician, the multimeter is both an efficient work tool and a safety device. In the hands of someone ignorant and/or careless, however, the multimeter may become a source of danger when connected to a “live” circuit. There are many different brands of multimeters, with multiple models made by each manufacturer sporting different sets of features. The multimeter shown here in the following illustrations is a “generic” design, not specific to any manufacturer, but general enough to teach the basic principles of use: You will notice that the display of this meter is of the “digital” type: showing numerical values using four digits in a manner similar to a digital clock. The rotary selector switch (now set in the Off position) has five different measurement positions it can be set in: two “V” settings, two “A” settings, and one setting in the middle with a funny-looking “horseshoe” symbol on it representing “resistance.” The “horseshoe” symbol is the Greek letter “Omega” (Ω), which is the common symbol for the electrical unit of ohms. Of the two “V” settings and two “A” settings, you will notice that each pair is divided into unique markers with either a pair of horizontal lines (one solid, one dashed), or a dashed line with a squiggly curve over it. The parallel lines represent “DC” while the squiggly curve represents “AC.” The “V” of course stands for “voltage” while the “A” stands for “amperage” (current). The meter uses different techniques, internally, to measure DC than it uses to measure AC, and so it requires the user to select which type of voltage (V) or current (A) is to be measured. 
Although we haven’t discussed alternating current (AC) in any technical detail, this distinction in meter settings is an important one to bear in mind. There are three different sockets on the multimeter face into which we can plug our test leads. Test leads are nothing more than specially-prepared wires used to connect the meter to the circuit under test. The wires are coated in a color-coded (either black or red) flexible insulation to prevent the user’s hands from contacting the bare conductors, and the tips of the probes are sharp, stiff pieces of wire: The black test lead always plugs into the black socket on the multimeter: the one marked “COM” for “common.” The red test lead plugs into either the red socket marked for voltage and resistance, or the red socket marked for current, depending on which quantity you intend to measure with the multimeter. To see how this works, let’s look at a couple of examples showing the meter in use. First, we’ll set up the meter to measure DC voltage from a battery: Note that the two test leads are plugged into the appropriate sockets on the meter for voltage, and the selector switch has been set for DC “V”. Now, we’ll take a look at an example of using the multimeter to measure AC voltage from a household electrical power receptacle (wall socket): The only difference in the setup of the meter is the placement of the selector switch: it is now turned to AC “V”. Since we’re still measuring voltage, the test leads will remain plugged in the same sockets. In both of these examples, it is imperative that you not let the probe tips come in contact with one another while they are both in contact with their respective points on the circuit. If this happens, a short-circuit will be formed, creating a spark and perhaps even a ball of flame if the voltage source is capable of supplying enough current! 
The following image illustrates the potential for hazard: This is just one of the ways that a meter can become a source of hazard if used improperly. Voltage measurement is perhaps the most common function a multimeter is used for. It is certainly the primary measurement taken for safety purposes (part of the lock-out/tag-out procedure), and it should be well understood by the operator of the meter. Being that voltage is always relative between two points, the meter must be firmly connected to two points in a circuit before it will provide a reliable measurement. That usually means both probes must be grasped by the user’s hands and held against the proper contact points of a voltage source or circuit while measuring. Because a hand-to-hand shock current path is the most dangerous, holding the meter probes on two points in a high-voltage circuit in this manner is always a potential hazard. If the protective insulation on the probes is worn or cracked, it is possible for the user’s fingers to come into contact with the probe conductors during the time of test, causing a bad shock to occur. If it is possible to use only one hand to grasp the probes, that is a safer option. Sometimes it is possible to “latch” one probe tip onto the circuit test point so that it can be let go of and the other probe set in place, using only one hand. Special probe tip accessories such as spring clips can be attached to help facilitate this. Remember that meter test leads are part of the whole equipment package, and that they should be treated with the same care and respect that the meter itself is. If you need a special accessory for your test leads, such as a spring clip or other special probe tip, consult the product catalog of the meter manufacturer or other test equipment manufacturer. Do not try to be creative and make your own test probes, as you may end up placing yourself in danger the next time you use them on a live circuit. 
Also, it must be remembered that digital multimeters usually do a good job of discriminating between AC and DC measurements, as they are set for one or the other when checking for voltage or current. As we have seen earlier, both AC and DC voltages and currents can be deadly, so when using a multimeter as a safety check device you should always check for the presence of both AC and DC, even if you’re not expecting to find both! Also, when checking for the presence of hazardous voltage, you should be sure to check all pairs of points in question. For example, suppose that you opened up an electrical wiring cabinet to find three large conductors supplying AC power to a load. The circuit breaker feeding these wires (supposedly) has been shut off, locked, and tagged. You double-checked the absence of power by pressing the Start button for the load. Nothing happened, so now you move on to the third phase of your safety check: the meter test for voltage. First, you check your meter on a known source of voltage to see that it’s working properly. Any nearby power receptacle should provide a convenient source of AC voltage for a test. You do so and find that the meter indicates as it should. Next, you need to check for voltage among these three wires in the cabinet. But voltage is measured between two points, so where do you check? The answer is to check between all combinations of those three points. As you can see, the points are labeled “A”, “B”, and “C” in the illustration, so you would need to take your multimeter (set in the voltmeter mode) and check between points A & B, B & C, and A & C. If you find voltage between any of those pairs, the circuit is not in a Zero Energy State. But wait! Remember that a multimeter will not register DC voltage when it’s in the AC voltage mode and vice versa, so you need to check those three pairs of points in each mode for a total of six voltage checks in order to be complete! 
However, even with all that checking, we still haven’t covered all possibilities yet. Remember that hazardous voltage can appear between a single wire and ground (in this case, the metal frame of the cabinet would be a good ground reference point) in a power system. So, to be perfectly safe, we not only have to check between A & B, B & C, and A & C (in both AC and DC modes), but we also have to check between A & ground, B & ground, and C & ground (in both AC and DC modes)! This makes for a grand total of twelve voltage checks for this seemingly simple scenario of only three wires. Then, of course, after we’ve completed all these checks, we need to take our multimeter and re-test it against a known source of voltage such as a power receptacle to ensure that it’s still in good working order. Using a multimeter to check for resistance is a much simpler task. The test leads will be kept plugged in the same sockets as for the voltage checks, but the selector switch will need to be turned until it points to the “horseshoe” resistance symbol. Touching the probes across the device whose resistance is to be measured, the meter should properly display the resistance in ohms: One very important thing to remember about measuring resistance is that it must only be done on de-energized components! When the meter is in “resistance” mode, it uses a small internal battery to generate a tiny current through the component to be measured. By sensing how difficult it is to move this current through the component, the resistance of that component can be determined and displayed. If there is any additional source of voltage in the meter-lead-component-lead-meter loop to either aid or oppose the resistance-measuring current produced by the meter, faulty readings will result. In a worst-case situation, the meter may even be damaged by the external voltage. 
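The full set of pairwise voltage checks described earlier (three conductors plus the cabinet-frame ground, each pair tested in both AC and DC modes) is easy to enumerate mechanically, which confirms the count of twelve. This is just combinatorics, not a substitute for the meter work itself:

```python
from itertools import combinations

# The three point-pairs, plus each conductor against the cabinet-frame ground:
points = ["A", "B", "C"]
pairs = list(combinations(points, 2)) + [(p, "ground") for p in points]
# Each of the six pairs must be tested in both AC and DC meter modes:
checks = [(mode, a, b) for mode in ("AC", "DC") for a, b in pairs]
# len(checks) == 12, the "grand total of twelve voltage checks" in the text
```

Six point-pairs times two meter modes gives twelve checks, before the final re-test of the meter itself against a known live source.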
The “resistance” mode of a multimeter is very useful in determining wire continuity as well as making precise measurements of resistance. When there is a good, solid connection between the probe tips (simulated by touching them together), the meter shows almost zero Ω. If the test leads had no resistance in them, it would read exactly zero: If the leads are not in contact with each other, or touching opposite ends of a broken wire, the meter will indicate infinite resistance (usually by displaying dashed lines or the abbreviation “O.L.” which stands for “open loop”): By far the most hazardous and complex application of the multimeter is in the measurement of current. The reason for this is quite simple: in order for the meter to measure current, the current to be measured must be forced to go through the meter. This means that the meter must be made part of the current path of the circuit rather than just be connected off to the side somewhere as is the case when measuring voltage. In order to make the meter part of the current path of the circuit, the original circuit must be “broken” and the meter connected across the two points of the open break. To set the meter up for this, the selector switch must point to either AC or DC “A” and the red test lead must be plugged in the red socket marked “A”. The following illustration shows a meter all ready to measure current and a circuit to be tested: Now, the circuit is broken in preparation for the meter to be connected: The next step is to insert the meter in-line with the circuit by connecting the two probe tips to the broken ends of the circuit, the black probe to the negative (-) terminal of the 9-volt battery and the red probe to the loose wire end leading to the lamp: This example shows a very safe circuit to work with. 9 volts hardly constitutes a shock hazard, and so there is little to fear in breaking this circuit open (bare handed, no less!) and connecting the meter in-line with the flow of electrons. 
However, with higher power circuits, this could be a hazardous endeavor indeed. Even if the circuit voltage was low, the normal current could be high enough that an injurious spark would result the moment the last meter probe connection was established. Another potential hazard of using a multimeter in its current-measuring (“ammeter”) mode is failure to properly put it back into a voltage-measuring configuration before measuring voltage with it. The reasons for this are specific to ammeter design and operation. When measuring circuit current by placing the meter directly in the path of current, it is best to have the meter offer little or no resistance against the flow of electrons. Otherwise, any additional resistance offered by the meter would impede the electron flow and alter the circuit’s operation. Thus, the multimeter is designed to have practically zero ohms of resistance between the test probe tips when the red probe has been plugged into the red “A” (current-measuring) socket. In the voltage-measuring mode (red lead plugged into the red “V” socket), there are many mega-ohms of resistance between the test probe tips, because voltmeters are designed to have close to infinite resistance (so that they don’t draw any appreciable current from the circuit under test). When switching a multimeter from current- to voltage-measuring mode, it’s easy to spin the selector switch from the “A” to the “V” position and forget to correspondingly switch the position of the red test lead plug from “A” to “V”. The result—if the meter is then connected across a source of substantial voltage—will be a short-circuit through the meter! To help prevent this, most multimeters have a warning feature by which they beep if ever there’s a lead plugged in the “A” socket and the selector switch is set to “V”. As convenient as features like these are, though, they are still no substitute for clear thinking and caution when using a multimeter. 
All good-quality multimeters contain fuses inside that are engineered to “blow” in the event of excessive current through them, such as in the case illustrated in the last image. Like all overcurrent protection devices, these fuses are primarily designed to protect the equipment (in this case, the meter itself) from excessive damage, and only secondarily to protect the user from harm. A multimeter can be used to check its own current fuse by setting the selector switch to the resistance position and creating a connection between the two red sockets like this: A good fuse will indicate very little resistance while a blown fuse will always show “O.L.” (or whatever indication that model of multimeter uses to indicate no continuity). The actual number of ohms displayed for a good fuse is of little consequence, so long as it’s an arbitrarily low figure. So now that we’ve seen how to use a multimeter to measure voltage, resistance, and current, what more is there to know? Plenty! The value and capabilities of this versatile test instrument will become more evident as you gain skill and familiarity using it. There is no substitute for regular practice with complex instruments such as these, so feel free to experiment on safe, battery-powered circuits. Review • A meter capable of checking for voltage, current, and resistance is called a multimeter. • As voltage is always relative between two points, a voltage-measuring meter (“voltmeter”) must be connected to two points in a circuit in order to obtain a good reading. Be careful not to touch the bare probe tips together while measuring voltage, as this will create a short-circuit! • Remember to always check for both AC and DC voltage when using a multimeter to check for the presence of hazardous voltage on a circuit. Make sure you check for voltage between all pair-combinations of conductors, including between the individual conductors and ground! 
• When in the voltage-measuring (“voltmeter”) mode, multimeters have very high resistance between their leads. • Never try to read resistance or continuity with a multimeter on a circuit that is energized. At best, the resistance readings you obtain from the meter will be inaccurate, and at worst the meter may be damaged and you may be injured. • Current measuring meters (“ammeters”) are always connected in a circuit so the electrons have to flow through the meter. • When in the current-measuring (“ammeter”) mode, multimeters have practically no resistance between their leads. This is intended to allow electrons to flow through the meter with the least possible difficulty. If this were not the case, the meter would add extra resistance in the circuit, thereby affecting the current. 3.10: Electric Shock Data The table found in the Bussmann handbook differs slightly from the one available from MIT: for the DC threshold of perception (men), the MIT table gives 5.2 mA while the Bussmann table gives a slightly greater figure of 6.2 mA. Also, for the “unable to let go” 60 Hz AC threshold (men), the MIT table gives 20 mA while the Bussmann table gives a lesser figure of 16 mA. As I have yet to obtain a primary copy of Dalziel’s research, the figures cited here are conservative: I have listed the lowest values in my table where any data sources differ. These differences, of course, are academic. The point here is that relatively small magnitudes of electric current through the body can be harmful if not lethal. Data regarding the electrical resistance of body contact points was taken from a safety page (document 16.1) from the Lawrence Livermore National Laboratory (website [*]), citing Ralph H. Lee as the data source. Lee’s work was listed here in a document entitled “Human Electrical Sheet,” composed while he was an IEEE Fellow at E.I. 
duPont de Nemours & Co., and also in an article entitled “Electrical Safety in Industrial Plants” found in the June 1971 issue of IEEE Spectrum magazine. For the morbidly curious, Charles Dalziel’s experimentation conducted at the University of California (Berkeley) began with a state grant to investigate the bodily effects of sub-lethal electric current. His testing method was as follows: healthy male and female volunteer subjects were asked to hold a copper wire in one hand and place their other hand on a round, brass plate. A voltage was then applied between the wire and the plate, causing electrons to flow through the subject’s arms and chest. The current was stopped, then resumed at a higher level. The goal here was to see how much current the subject could tolerate and still keep their hand pressed against the brass plate. When this threshold was reached, laboratory assistants forcefully held the subject’s hand in contact with the plate and the current was again increased. The subject was asked to release the wire they were holding, to see at what current level involuntary muscle contraction (tetanus) prevented them from doing so. For each subject the experiment was conducted using DC and also AC at various frequencies. Over two dozen human volunteers were tested, and later studies on heart fibrillation were conducted using animal subjects.
In many disciplines of science and engineering, very large and very small numerical quantities must be managed. Some of these quantities are mind-boggling in their size, either extremely small or extremely large. Take for example the mass of a proton, one of the constituent particles of an atom’s nucleus: Proton mass = 0.00000000000000000000000167 grams Or, consider the number of electrons passing by a point in a circuit every second with a steady electric current of 1 amp: 1 amp = 6,250,000,000,000,000,000 electrons per second A lot of zeros, isn’t it? Obviously, it can get quite confusing to have to handle so many zero digits in numbers such as this, even with the help of calculators and computers. Take note of those two numbers and of the relative sparsity of non-zero digits in them. For the mass of the proton, all we have is a “167” preceded by a decimal point and 23 zeros. For the number of electrons per second in 1 amp, we have “625” followed by 16 zeros. We call the span of non-zero digits (from first to last), plus any zero digits not merely used for placeholding, the “significant digits” of any number. The significant digits in a real-world measurement are typically reflective of the accuracy of that measurement. For example, if we were to say that a car weighs 3,000 pounds, we probably don’t mean that the car in question weighs exactly 3,000 pounds, but that we’ve rounded its weight to a value more convenient to say and remember. That rounded figure of 3,000 has only one significant digit: the “3” in front—the zeros merely serve as placeholders. However, if we were to say that the car weighed 3,005 pounds, the fact that the weight is not rounded to the nearest thousand pounds tells us that the two zeros in the middle aren’t just placeholders, but that all four digits of the number “3,005” are significant to its representative accuracy. Thus, the number “3,005” is said to have four significant figures. 
In like manner, numbers with many zero digits are not necessarily representative of a real-world quantity all the way to the decimal point. When this is known to be the case, such a number can be written in a kind of mathematical “shorthand” to make it easier to deal with. This “shorthand” is called scientific notation. With scientific notation, a number is written by representing its significant digits as a quantity between 1 and 10 (or -1 and -10, for negative numbers), and the “placeholder” zeros are accounted for by a power-of-ten multiplier. For example: 1 amp = 6,250,000,000,000,000,000 electrons per second . . . can be expressed as . . . 1 amp = 6.25 x 10^18 electrons per second 10 to the 18th power (10^18) means 10 multiplied by itself 18 times, or a “1” followed by 18 zeros. Multiplied by 6.25, it looks like “625” followed by 16 zeros (take 6.25 and skip the decimal point 18 places to the right). The advantages of scientific notation are obvious: the number isn’t as unwieldy when written on paper, and the significant digits are plain to identify. But what about very small numbers, like the mass of the proton in grams? We can still use scientific notation, except with a negative power-of-ten instead of a positive one, to shift the decimal point to the left instead of to the right: Proton mass = 0.00000000000000000000000167 grams . . . can be expressed as . . . Proton mass = 1.67 x 10^-24 grams 10 to the -24th power (10^-24) means the inverse (1/x) of 10 multiplied by itself 24 times, or a “1” preceded by a decimal point and 23 zeros. Multiplied by 1.67, it looks like “167” preceded by a decimal point and 23 zeros. Just as in the case with the very large number, it is a lot easier for a human being to deal with this “shorthand” notation. As with the prior case, the significant digits in this quantity are clearly expressed. 
Because the significant digits are represented “on their own,” away from the power-of-ten multiplier, it is easy to show a level of precision even when the number looks round. Taking our 3,000 pound car example, we could express the rounded number of 3,000 in scientific notation as such: car weight = 3 x 10^3 pounds If the car actually weighed 3,005 pounds (accurate to the nearest pound) and we wanted to be able to express that full accuracy of measurement, the scientific notation figure could be written like this: car weight = 3.005 x 10^3 pounds However, what if the car actually did weigh 3,000 pounds, exactly (to the nearest pound)? If we were to write its weight in “normal” form (3,000 lbs), it wouldn’t necessarily be clear that this number was indeed accurate to the nearest pound and not just rounded to the nearest thousand pounds, or to the nearest hundred pounds, or to the nearest ten pounds. Scientific notation, on the other hand, allows us to show that all four digits are significant with no misunderstanding: car weight = 3.000 x 10^3 pounds Since there would be no point in adding extra zeros to the right of the decimal point (placeholding zeros being unnecessary with scientific notation), we know those zeros must be significant to the precision of the figure.
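Most programming languages use the same idea, writing the power-of-ten multiplier with the letter “e”. A quick Python sketch of the examples above:

```python
# Scientific ("E") notation in Python: the number after "e" is the power of ten.
electrons_per_second = 6.25e18   # 6,250,000,000,000,000,000
proton_mass_grams = 1.67e-24     # 0.00000000000000000000000167

# The ".3e" format spec keeps three digits after the decimal point, so
# 3.000e+03 explicitly shows all four significant figures of the car weight...
exact_weight = format(3000, ".3e")    # '3.000e+03'
# ...while ".0e" shows only the single significant digit of a rounded figure.
rounded_weight = format(3000, ".0e")  # '3e+03'

print(exact_weight, rounded_weight)
```

This mirrors the point made in the text: the notation itself carries the precision information that a plain “3,000” cannot.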
The benefits of scientific notation do not end with ease of writing and expression of accuracy. Such notation also lends itself well to mathematical problems of multiplication and division. Let’s say we wanted to know how many electrons would flow past a point in a circuit carrying 1 amp of electric current in 25 seconds. If we know the number of electrons per second in the circuit (which we do), then all we need to do is multiply that quantity by the number of seconds (25) to arrive at an answer of total electrons: (6,250,000,000,000,000,000 electrons per second) x (25 seconds) = 156,250,000,000,000,000,000 electrons passing by in 25 seconds Using scientific notation, we can write the problem like this: (6.25 x 10^18 electrons per second) x (25 seconds) If we take the “6.25” and multiply it by 25, we get 156.25. So, the answer could be written as: 156.25 x 10^18 electrons However, if we want to hold to standard convention for scientific notation, we must represent the significant digits as a number between 1 and 10. In this case, we’d say “1.5625” multiplied by some power-of-ten. To obtain 1.5625 from 156.25, we have to skip the decimal point two places to the left. To compensate for this without changing the value of the number, we have to raise our power by two notches (10 to the 20th power instead of 10 to the 18th): 1.5625 x 10^20 electrons What if we wanted to see how many electrons would pass by in 3,600 seconds (1 hour)? To make our job easier, we could put the time in scientific notation as well: (6.25 x 10^18 electrons per second) x (3.6 x 10^3 seconds) To multiply, we must take the two significant sets of digits (6.25 and 3.6) and multiply them together; and we need to take the two powers-of-ten and multiply them together. Taking 6.25 times 3.6, we get 22.5. Taking 10^18 times 10^3, we get 10^21 (exponents with common base numbers add). So, the answer is: 22.5 x 10^21 electrons . . . or more properly . . . 
2.25 x 10^22 electrons To illustrate how division works with scientific notation, we could figure that last problem “backwards” to find out how long it would take for that many electrons to pass by at a current of 1 amp: (2.25 x 10^22 electrons) / (6.25 x 10^18 electrons per second) Just as in multiplication, we can handle the significant digits and powers-of-ten in separate steps (remember that you subtract the exponents of divided powers-of-ten): (2.25 / 6.25) x (10^22 / 10^18) And the answer is: 0.36 x 10^4, or 3.6 x 10^3, seconds. You can see that we arrived at the same quantity of time (3600 seconds). Now, you may be wondering what the point of all this is when we have electronic calculators that can handle the math automatically. Well, back in the days of scientists and engineers using “slide rule” analog computers, these techniques were indispensable. The “hard” arithmetic (dealing with the significant digit figures) would be performed with the slide rule while the powers-of-ten could be figured without any help at all, being nothing more than simple addition and subtraction. Review • Significant digits are representative of the real-world accuracy of a number. • Scientific notation is a “shorthand” method to represent very large and very small numbers in easily-handled form. • When multiplying two numbers in scientific notation, you can multiply the two significant digit figures and arrive at a power-of-ten by adding exponents. • When dividing two numbers in scientific notation, you can divide the two significant digit figures and arrive at a power-of-ten by subtracting exponents.
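The multiply-the-mantissas, add-the-exponents procedure can be sketched in Python; the helper functions below are my own, not from any standard library:

```python
def normalize(mantissa, exponent):
    """Shift the mantissa into the conventional [1, 10) range,
    adjusting the exponent to compensate (rounding hides float noise)."""
    while abs(mantissa) >= 10:
        mantissa, exponent = mantissa / 10, exponent + 1
    while abs(mantissa) < 1:
        mantissa, exponent = mantissa * 10, exponent - 1
    return round(mantissa, 10), exponent

def sci_multiply(m1, e1, m2, e2):
    # Multiply the significant-digit figures, add the powers of ten.
    return normalize(m1 * m2, e1 + e2)

def sci_divide(m1, e1, m2, e2):
    # Divide the significant-digit figures, subtract the powers of ten.
    return normalize(m1 / m2, e1 - e2)

# (6.25 x 10^18) x (3.6 x 10^3) = 2.25 x 10^22 electrons
print(sci_multiply(6.25, 18, 3.6, 3))   # (2.25, 22)
# (2.25 x 10^22) / (6.25 x 10^18) = 3.6 x 10^3 seconds
print(sci_divide(2.25, 22, 6.25, 18))   # (3.6, 3)
```

The normalization step is exactly the “skip the decimal point and raise the power a notch” adjustment performed in the worked example.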
The metric system, besides being a collection of measurement units for all sorts of physical quantities, is structured around the concept of scientific notation. The primary difference is that the powers-of-ten are represented with alphabetical prefixes instead of by literal powers-of-ten. The following number line shows some of the more common prefixes and their respective powers-of-ten: Looking at this scale, we can see that 2.5 Gigabytes would mean 2.5 x 10^9 bytes, or 2.5 billion bytes. Likewise, 3.21 picoamps would mean 3.21 x 10^-12 amps, or 3.21 trillionths of an amp. Other metric prefixes exist to symbolize powers of ten for extremely small and extremely large multipliers. On the extremely small end of the spectrum, femto (f) = 10^-15, atto (a) = 10^-18, zepto (z) = 10^-21, and yocto (y) = 10^-24. On the extremely large end of the spectrum, Peta (P) = 10^15, Exa (E) = 10^18, Zetta (Z) = 10^21, and Yotta (Y) = 10^24. Because the major prefixes in the metric system refer to powers of 10 that are multiples of 3 (from “kilo” on up, and from “milli” on down), metric notation differs from regular scientific notation in that the mantissa can be anywhere between 1 and 999, depending on which prefix is chosen. For example, if a laboratory sample weighs 0.000267 grams, scientific notation and metric notation would express it differently: 2.67 x 10^-4 grams (scientific notation) 267 µgrams (metric notation) The same figure may also be expressed as 0.267 milligrams (0.267 mg), although it is usually more common to see the significant digits represented as a figure greater than 1. In recent years a new style of metric notation for electric quantities has emerged which seeks to avoid the use of the decimal point. Since decimal points (”.”) are easily misread and/or “lost” due to poor print quality, quantities such as 4.7 k may be mistaken for 47 k. The new notation replaces the decimal point with the metric prefix character, so that “4.7 k” is printed instead as “4k7”. 
Our last figure from the prior example, “0.267 m”, would be expressed in the new notation as “0m267”. Review • The metric system of notation uses alphabetical prefixes to represent certain powers-of-ten instead of the lengthier scientific notation. 4.04: Metric Prefix Conversions To express a quantity in a different metric prefix than what it was originally given, all we need to do is skip the decimal point to the right or to the left as needed. Notice that the metric prefix “number line” in the previous section was laid out from larger to smaller, left to right. This layout was purposely chosen to make it easier to remember which direction you need to skip the decimal point for any given conversion. Example problem: express 0.000023 amps in terms of microamps. 0.000023 amps (has no prefix, just plain unit of amps) From UNITS to micro on the number line is 6 places (powers of ten) to the right, so we need to skip the decimal point 6 places to the right: 0.000023 amps = 23 microamps (µA) Example problem: express 304,212 volts in terms of kilovolts. 304,212 volts (has no prefix, just plain unit of volts) From the (none) place to kilo place on the number line is 3 places (powers of ten) to the left, so we need to skip the decimal point 3 places to the left: 304,212. = 304.212 kilovolts (kV) Example problem: express 50.3 Mega-ohms in terms of milli-ohms. 50.3 M ohms (mega = 10^6) From mega to milli is 9 places (powers of ten) to the right (from 10 to the 6th power to 10 to the -3rd power), so we need to skip the decimal point 9 places to the right: 50.3 M ohms = 50,300,000,000 milli-ohms (mΩ) Review • Follow the metric prefix number line to know which direction you skip the decimal point for conversion purposes. • A number with no decimal point shown has an implicit decimal point to the immediate right of the furthest right digit (i.e. for the number 436 the decimal point is to the right of the 6, as such: 436.) 
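The number-line conversions above amount to multiplying by a power of ten. A small Python table makes the arithmetic explicit (the function and the ASCII “u” for micro are my own sketch):

```python
# Powers of ten for common metric prefixes; "" stands for the plain unit.
PREFIX_EXP = {"G": 9, "M": 6, "k": 3, "": 0, "m": -3, "u": -6, "n": -9, "p": -12}

def convert(value, from_prefix, to_prefix):
    """Re-express a value under a different metric prefix by shifting
    the decimal point the appropriate number of places."""
    shift = PREFIX_EXP[from_prefix] - PREFIX_EXP[to_prefix]
    return value * 10 ** shift

print(convert(0.000023, "", "u"))  # ~23 microamps
print(convert(304212, "", "k"))    # ~304.212 kilovolts
print(convert(50.3, "M", "m"))     # ~50,300,000,000 milli-ohms
```

The sign of the exponent difference tells you which way to skip the decimal point, which is the same rule the number line encodes visually.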
4.05: Hand Calculator Use Scientific Notation with a Hand Calculator To enter numbers in scientific notation into a hand calculator, there is usually a button marked “E” or “EE” used to enter the correct power of ten. For example, to enter the mass of a proton in grams (1.67 x 10^-24 grams) into a hand calculator, I would enter the following keystrokes: The [+/-] keystroke changes the sign of the power (24) into a -24. Some calculators allow the use of the subtraction key [-] to do this, but I prefer the “change sign” [+/-] key because it’s more consistent with the use of that key in other contexts. If I wanted to enter a negative number in scientific notation into a hand calculator, I would have to be careful how I used the [+/-] key, lest I change the sign of the power and not the significant digit value. Pay attention to this example: Number to be entered: -3.221 x 10^-15: The first [+/-] keystroke changes the entry from 3.221 to -3.221; the second [+/-] keystroke changes the power from 15 to -15. Metric and Scientific Notation Display Modes Displaying metric and scientific notation on a hand calculator is a different matter. It involves changing the display option from the normal “fixed” decimal point mode to the “scientific” or “engineering” mode. Your calculator manual will tell you how to set each display mode. These display modes tell the calculator how to represent any number on the numerical readout. The actual value of the number is not affected in any way by the choice of display modes—only how the number appears to the calculator user. Likewise, the procedure for entering numbers into the calculator does not change with different display modes either. Powers of ten are usually represented by a pair of digits in the upper-right hand corner of the display, and are visible only in the “scientific” and “engineering” modes. The difference between “scientific” and “engineering” display modes is the difference between scientific and metric notation. 
In “scientific” mode, the power-of-ten display is set so that the main number on the display is always a value between 1 and 10 (or -1 and -10 for negative numbers). In “engineering” mode, the powers-of-ten are set to display in multiples of 3, to represent the major metric prefixes. All the user has to do is memorize a few prefix/power combinations, and his or her calculator will be “speaking” metric! Review • Use the [EE] key to enter powers of ten. • Use “scientific” or “engineering” to display powers of ten, in scientific or metric notation, respectively.
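The same entry convention carries over to programming languages, where the letter “e” plays the role of the calculator’s [EE] key. A quick Python illustration:

```python
# The "e" in a float literal corresponds to the calculator's [EE] keystroke:
proton_mass = 1.67e-24       # 1.67 x 10^-24 grams
negative_entry = -3.221e-15  # sign on the digits AND a negative power of ten

# String input round-trips through the same notation:
print(float("1.67e-24") == proton_mass)  # True
print(f"{negative_entry:e}")             # -3.221000e-15
```

As with the calculator’s display modes, the `:e` format only changes how the value is shown, not the value itself.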
The SPICE circuit simulation computer program uses scientific notation to display its output information, and can interpret both scientific notation and metric prefixes in the circuit description files. If you are going to be able to successfully interpret the SPICE analyses throughout this book, you must be able to understand the notation used to express variables of voltage, current, etc. in the program. Let’s start with a very simple circuit composed of one voltage source (a battery) and one resistor: To simulate this circuit using SPICE, we first have to designate node numbers for all the distinct points in the circuit, then list the components along with their respective node numbers so the computer knows which component is connected to which, and how. For a circuit of this simplicity, the use of SPICE seems like overkill, but it serves the purpose of demonstrating practical use of scientific notation: Typing out a circuit description file, or netlist, for this circuit, we get this: The line “v1 1 0 dc 24” describes the battery, positioned between nodes 1 and 0, with a DC voltage of 24 volts. The line “r1 1 0 5” describes the 5 Ω resistor placed between nodes 1 and 0. Using a computer to run a SPICE analysis on this circuit description file, we get the following results: SPICE tells us that the voltage “at” node number 1 (actually, this means the voltage between nodes 1 and 0, node 0 being the default reference point for all voltage measurements) is equal to 24 volts. The current through battery “v1” is displayed as -4.800E+00 amps. This is SPICE’s method of denoting scientific notation. What it’s really saying is “-4.800 x 10^0 amps,” or simply -4.800 amps. The negative value for current here is due to a quirk in SPICE and does not indicate anything significant about the circuit itself. The “total power dissipation” is given to us as 1.15E+02 watts, which means “1.15 x 10^2 watts,” or 115 watts. 
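The netlist itself has not survived in this copy of the text. Based on the two component lines quoted above, it would look something like this (the title line and `.end` line follow standard SPICE netlist conventions; the exact analysis control lines in the original may have differed):

```
simple circuit
v1 1 0 dc 24
r1 1 0 5
.end
```

Each component line names the device, lists its two node numbers, and gives its value; that is all SPICE needs to reconstruct the circuit topology.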
Let’s modify our example circuit so that it has a 5 kΩ (5 kilo-ohm, or 5,000 ohm) resistor instead of a 5 Ω resistor and see what happens. Once again is our circuit description file, or “netlist:” The letter “k” following the number 5 on the resistor’s line tells SPICE that it is a figure of 5 kΩ, not 5 Ω. Let’s see what result we get when we run this through the computer: The battery voltage, of course, hasn’t changed since the first simulation: it’s still at 24 volts. The circuit current, on the other hand, is much less this time because we’ve made the resistor a larger value, making it more difficult for electrons to flow. SPICE tells us that the current this time is equal to -4.800E-03 amps, or -4.800 x 10^-3 amps. This is equivalent to taking the number -4.8 and skipping the decimal point three places to the left. Of course, if we recognize that 10^-3 is the same as the metric prefix “milli,” we could write the figure as -4.8 milliamps, or -4.8 mA. Looking at the “total power dissipation” given to us by SPICE on this second simulation, we see that it is 1.15E-01 watts, or 1.15 x 10^-1 watts. The power of -1 corresponds to the metric prefix “deci,” but generally we limit our use of metric prefixes in electronics to those associated with powers of ten that are multiples of three (ten to the power of . . . -12, -9, -6, -3, 3, 6, 9, 12, etc.). So, if we want to follow this convention, we must express this power dissipation figure as 0.115 watts or 115 milliwatts (115 mW) rather than 1.15 deciwatts (1.15 dW). Perhaps the easiest way to convert a figure from scientific notation to common metric prefixes is with a scientific calculator set to the “engineering” or “metric” display mode. Just set the calculator for that display mode, type any scientific notation figure into it using the proper keystrokes (see your owner’s manual), press the “equals” or “enter” key, and it should display the same figure in engineering/metric notation. 
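The scientific-to-engineering conversion a calculator performs can also be sketched in a few lines of Python (this helper is my own illustration, not part of SPICE or any calculator firmware):

```python
import math

def to_engineering(x):
    """Re-express a nonzero x as (mantissa, exponent) with the exponent
    a multiple of 3, matching the major metric prefixes."""
    exponent = math.floor(math.log10(abs(x)))
    eng_exponent = 3 * math.floor(exponent / 3)
    return x / 10 ** eng_exponent, eng_exponent

# SPICE's -4.800E-03 amps becomes roughly (-4.8, -3): -4.8 mA
print(to_engineering(-4.800e-03))
# 1.15E-01 watts becomes roughly (115, -3): 115 mW
print(to_engineering(1.15e-01))
```

Rounding the exponent down to a multiple of 3 is what turns the “deciwatt” figure into the conventional milliwatt form discussed above.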
Again, I’ll be using SPICE as a method of demonstrating circuit concepts throughout this book. Consequently, it is in your best interest to understand scientific notation so you can easily comprehend its output data format.
Series and Parallel Circuits There are two basic ways in which to connect more than two circuit components: series and parallel. First, an example of a series circuit: Here, we have three resistors (labeled R1, R2, and R3), connected in a long chain from one terminal of the battery to the other. (It should be noted that the subscript labels—those little numbers to the lower-right of the letter “R”—are unrelated to the resistor values in ohms. They serve only to distinguish one resistor from another.) The defining characteristic of a series circuit is that there is only one path for electrons to flow. In this circuit the electrons flow in a counter-clockwise direction, from point 4 to point 3 to point 2 to point 1 and back around to 4. Now, let’s look at the other type of circuit, a parallel configuration: Again, we have three resistors, but this time they form more than one continuous path for electrons to flow. There’s one path from 8 to 7 to 2 to 1 and back to 8 again. There’s another from 8 to 7 to 6 to 3 to 2 to 1 and back to 8 again. And then there’s a third path from 8 to 7 to 6 to 5 to 4 to 3 to 2 to 1 and back to 8 again. Each individual path (through R1, R2, and R3) is called a branch. The defining characteristic of a parallel circuit is that all components are connected between the same set of electrically common points. Looking at the schematic diagram, we see that points 1, 2, 3, and 4 are all electrically common. So are points 8, 7, 6, and 5. Note that all resistors as well as the battery are connected between these two sets of points. And, of course, the complexity doesn’t stop at simple series and parallel either! We can have circuits that are a combination of series and parallel, too: In this circuit, we have two loops for electrons to flow through: one from 6 to 5 to 2 to 1 and back to 6 again, and another from 6 to 5 to 4 to 3 to 2 to 1 and back to 6 again. Notice how both current paths go through R1 (from point 2 to point 1). 
In this configuration, we’d say that R2 and R3 are in parallel with each other, while R1 is in series with the parallel combination of R2 and R3. This is just a preview of things to come. Don’t worry! We’ll explore all these circuit configurations in detail, one at a time! Learn the Basic Ideas of Series and Parallel Connection The basic idea of a “series” connection is that components are connected end-to-end in a line to form a single path for electrons to flow: The basic idea of a “parallel” connection, on the other hand, is that all components are connected across each other’s leads. In a purely parallel circuit, there are never more than two sets of electrically common points, no matter how many components are connected. There are many paths for electrons to flow, but only one voltage across all components: Series and parallel resistor configurations have very different electrical properties. We’ll explore the properties of each configuration in the sections to come. Review • In a series circuit, all components are connected end-to-end, forming a single path for electrons to flow. • In a parallel circuit, all components are connected across each other, forming exactly two sets of electrically common points. • A “branch” in a parallel circuit is a path for electric current formed by one of the load components (such as a resistor).
Let’s start with a series circuit consisting of three resistors and a single battery: The first principle to understand about series circuits is that the amount of current is the same through any component in the circuit. This is because there is only one path for electrons to flow in a series circuit, and because free electrons flow through conductors like marbles in a tube, the rate of flow (marble speed) at any point in the circuit (tube) at any specific point in time must be equal.

Using Ohm’s Law in Series Circuits

From the way that the 9-volt battery is arranged, we can tell that the electrons in this circuit will flow in a counter-clockwise direction, from point 4 to 3 to 2 to 1 and back to 4. However, we have one source of voltage and three resistances. How do we use Ohm’s Law here? An important caveat to Ohm’s Law is that all quantities (voltage, current, resistance, and power) must relate to each other in terms of the same two points in a circuit. For instance, with a single-battery, single-resistor circuit, we could easily calculate any quantity because they all applied to the same two points in the circuit: Since points 1 and 2 are connected together with wire of negligible resistance, as are points 3 and 4, we can say that point 1 is electrically common to point 2, and that point 3 is electrically common to point 4. Since we know we have 9 volts of electromotive force between points 1 and 4 (directly across the battery), and since point 2 is common to point 1 and point 3 common to point 4, we must also have 9 volts between points 2 and 3 (directly across the resistor). Therefore, we can apply Ohm’s Law (I = E/R) to the current through the resistor, because we know the voltage (E) across the resistor and the resistance (R) of that resistor. All terms (E, I, R) apply to the same two points in the circuit, to that same resistor, so we can use the Ohm’s Law formula with no reservation.
Using Ohm’s Law in Circuits with Multiple Resistors

However, in circuits containing more than one resistor, we must be careful in how we apply Ohm’s Law. In the three-resistor example circuit below, we know that we have 9 volts between points 1 and 4, which is the amount of electromotive force trying to push electrons through the series combination of R1, R2, and R3. However, we cannot take the value of 9 volts and divide it by 3 kΩ, 10 kΩ, or 5 kΩ to try to find a current value, because we don’t know how much voltage is across any one of those resistors, individually. The figure of 9 volts is a total quantity for the whole circuit, whereas the figures of 3 kΩ, 10 kΩ, and 5 kΩ are individual quantities for individual resistors. If we were to plug a figure for total voltage into an Ohm’s Law equation with a figure for individual resistance, the result would not relate accurately to any quantity in the real circuit.

For R1, Ohm’s Law will relate the amount of voltage across R1 with the current through R1, given R1’s resistance, 3 kΩ: ER1 = IR1 × 3 kΩ, or equivalently IR1 = ER1 / 3 kΩ. But, since we don’t know the voltage across R1 (only the total voltage supplied by the battery across the three-resistor series combination) and we don’t know the current through R1, we can’t do any calculations with either formula. The same goes for R2 and R3: we can apply the Ohm’s Law equations if and only if all terms are representative of their respective quantities between the same two points in the circuit.

So what can we do? We know the voltage of the source (9 volts) applied across the series combination of R1, R2, and R3, and we know the resistance of each resistor, but since those quantities aren’t in the same context, we can’t use Ohm’s Law to determine the circuit current. If only we knew what the total resistance was for the circuit: then we could calculate total current with our figure for total voltage (I = E/R).
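A quick numerical illustration of this caveat, as a Python sketch using the resistor values from this example (the true circuit current of 0.5 mA is derived in the paragraphs that follow):

```python
# Mixing contexts: dividing TOTAL voltage by an INDIVIDUAL resistance
# yields a figure that matches no real quantity in the circuit.
E_total = 9.0                  # volts, across the whole series string
R1, R2, R3 = 3e3, 10e3, 5e3    # ohms

bogus = E_total / R1                 # 3 mA -- not the current anywhere
actual = E_total / (R1 + R2 + R3)    # 0.5 mA, the true series current
print(bogus, actual)
```

The 3 mA figure is neither the current through R1 nor through the whole circuit; it is the current that *would* flow if 9 volts appeared directly across R1 alone, which it does not.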
This brings us to the second principle of series circuits: the total resistance of any series circuit is equal to the sum of the individual resistances. This should make intuitive sense: the more resistors in series that the electrons must flow through, the more difficult it will be for those electrons to flow. In the example problem, we had a 3 kΩ, 10 kΩ, and 5 kΩ resistor in series, giving us a total resistance of 18 kΩ: RTotal = 3 kΩ + 10 kΩ + 5 kΩ = 18 kΩ. In essence, we’ve calculated the equivalent resistance of R1, R2, and R3 combined. Knowing this, we could re-draw the circuit with a single equivalent resistor representing the series combination of R1, R2, and R3.

Calculating Circuit Current

Now we have all the necessary information to calculate circuit current, because we have the voltage between points 1 and 4 (9 volts) and the resistance between points 1 and 4 (18 kΩ): I = E/R = 9 V / 18 kΩ = 500 µA. Knowing that current is equal through all components of a series circuit (and we just determined the current through the battery), we can go back to our original circuit schematic and note the current through each component. Now that we know the amount of current through each resistor, we can use Ohm’s Law to determine the voltage drop across each one (applying Ohm’s Law in its proper context). Notice the voltage drops across each resistor, and how the sum of the voltage drops (1.5 + 5 + 2.5) is equal to the battery (supply) voltage: 9 volts. This is the third principle of series circuits: that the supply voltage is equal to the sum of the individual voltage drops.

However, the method we just used to analyze this simple series circuit can be streamlined for better understanding. By using a table to list all voltages, currents, and resistances in the circuit, it becomes very easy to see which of those quantities can be properly related in any Ohm’s Law equation: The rule with such a table is to apply Ohm’s Law only to the values within each vertical column. For instance, ER1 only with IR1 and R1; ER2 only with IR2 and R2; etc.
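The three series-circuit principles can be rolled into a short script. This is only a sketch of the arithmetic above, using the 3 kΩ, 10 kΩ, and 5 kΩ example values:

```python
# Series circuit: 9 V battery driving 3 kOhm, 10 kOhm, 5 kOhm in series.
E_total = 9.0
resistors = [3e3, 10e3, 5e3]

R_total = sum(resistors)              # rule 2: resistances add (18 kOhm)
I = E_total / R_total                 # Ohm's law on total quantities (500 uA)
drops = [I * R for R in resistors]    # rule 1: same current through each

print(R_total, I)                     # 18000.0 0.0005
print(drops)                          # [1.5, 5.0, 2.5] volts
print(abs(sum(drops) - E_total) < 1e-9)   # rule 3: drops sum to 9 V -> True
```

Note that Ohm’s Law is only ever applied to matched quantities: total voltage with total resistance, or one resistor’s current with that same resistor’s resistance.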
You begin your analysis by filling in those elements of the table that are given to you from the beginning: As you can see from the arrangement of the data, we can’t apply the 9 volts of ET (total voltage) to any of the resistances (R1, R2, or R3) in any Ohm’s Law formula because they’re in different columns. The 9 volts of battery voltage is not applied directly across R1, R2, or R3. However, we can use our “rules” of series circuits to fill in blank spots on a horizontal row. In this case, we can use the series rule of resistances to determine a total resistance from the sum of individual resistances: Now, with a value for total resistance inserted into the rightmost (“Total”) column, we can apply Ohm’s Law of I = E/R to total voltage and total resistance to arrive at a total current of 500 µA: Then, knowing that the current is shared equally by all components of a series circuit (another “rule” of series circuits), we can fill in the currents for each resistor from the current figure just calculated: Finally, we can use Ohm’s Law to determine the voltage drop across each resistor, one column at a time.

Verifying Calculations with Computer Analysis

Just for fun, we can use a computer to analyze this very same circuit automatically. It will be a good way to verify our calculations and also become more familiar with computer analysis. First, we have to describe the circuit to the computer in a format recognizable by the software. The SPICE program we’ll be using requires that all electrically unique points in a circuit be numbered, and component placement is understood by which of those numbered points, or “nodes,” they share. For clarity, I numbered the four corners of our example circuit 1 through 4. SPICE, however, demands that there be a node zero somewhere in the circuit, so I’ll re-draw the circuit, changing the numbering scheme slightly: All I’ve done here is re-number the lower-left corner of the circuit 0 instead of 4.
Now, I can enter several lines of text into a computer file describing the circuit in terms SPICE will understand, complete with a couple of extra lines of code directing the program to display voltage and current data for our viewing pleasure. This computer file is known as the netlist in SPICE terminology. Now, all I have to do is run the SPICE program to process the netlist and output the results: This printout is telling us the battery voltage is 9 volts, and the voltage drops across R1, R2, and R3 are 1.5 volts, 5 volts, and 2.5 volts, respectively. Voltage drops across any component in SPICE are referenced by the node numbers the component lies between, so v(1,2) is referencing the voltage between nodes 1 and 2 in the circuit, which are the points between which R1 is located. The order of node numbers is important: when SPICE outputs a figure for v(1,2), it regards the polarity the same way as if we were holding a voltmeter with the red test lead on node 1 and the black test lead on node 2. We also have a display showing current (albeit with a negative value) at 0.5 milliamps, or 500 microamps. So our mathematical analysis has been vindicated by the computer. This figure appears as a negative number in the SPICE analysis due to a quirk in the way SPICE handles current calculations.

Review of Basic Series Circuit Characteristics

In summary, a series circuit is defined as having only one path for electrons to flow. From this definition, three rules of series circuits follow: all components share the same current; resistances add to equal a larger, total resistance; and voltage drops add to equal a larger, total voltage. All of these rules find root in the definition of a series circuit. If you understand that definition fully, then the rules are nothing more than footnotes to the definition.

Review
• Components in a series circuit share the same current: ITotal = I1 = I2 = . . . In
• Total resistance in a series circuit is equal to the sum of the individual resistances: RTotal = R1 + R2 + . . . Rn
• Total voltage in a series circuit is equal to the sum of the individual voltage drops: ETotal = E1 + E2 + . . . En
Let’s start with a parallel circuit consisting of three resistors and a single battery:

The Principle of Parallel Circuits

The first principle to understand about parallel circuits is that the voltage is equal across all components in the circuit. This is because there are only two sets of electrically common points in a parallel circuit, and voltage measured between sets of common points must always be the same at any given time. Therefore, in the above circuit, the voltage across R1 is equal to the voltage across R2, which is equal to the voltage across R3, which is equal to the voltage across the battery. This equality of voltages can be represented in another table for our starting values.

Ohm’s Law Applications for Simple Parallel Circuits

Just as in the case of series circuits, the same caveat for Ohm’s Law applies: values for voltage, current, and resistance must be in the same context in order for the calculations to work correctly. However, in the above example circuit, we can immediately apply Ohm’s Law to each resistor to find its current, because we know the voltage across each resistor (9 volts) and the resistance of each resistor: At this point we still don’t know what the total current or total resistance for this parallel circuit is, so we can’t apply Ohm’s Law to the rightmost (“Total”) column. However, if we think carefully about what is happening, it should become apparent that the total current must equal the sum of all individual resistor (“branch”) currents: As the total current exits the negative (-) battery terminal at point 8 and travels through the circuit, some of the flow splits off at point 7 to go up through R1, some more splits off at point 6 to go up through R2, and the remainder goes up through R3. Like a river branching into several smaller streams, the combined flow rates of all streams must equal the flow rate of the whole river.
The same thing is encountered where the currents through R1, R2, and R3 join to flow back to the positive terminal of the battery (+) toward point 1: the flow of electrons from point 2 to point 1 must equal the sum of the (branch) currents through R1, R2, and R3. This is the second principle of parallel circuits: the total circuit current is equal to the sum of the individual branch currents. Using this principle, we can fill in the IT spot on our table with the sum of IR1, IR2, and IR3: Finally, applying Ohm’s Law to the rightmost (“Total”) column, we can calculate the total circuit resistance.

The Equation for Parallel Circuits

Please note something very important here. The total circuit resistance is only 625 Ω: less than any one of the individual resistors. In the series circuit, where the total resistance was the sum of the individual resistances, the total was bound to be greater than any one of the resistors individually. Here in the parallel circuit, however, the opposite is true: we say that the individual resistances diminish rather than add to make the total. This principle completes our triad of “rules” for parallel circuits, just as series circuits were found to have three rules for voltage, current, and resistance. Mathematically, the relationship between total resistance and individual resistances in a parallel circuit looks like this: RTotal = 1 / (1/R1 + 1/R2 + 1/R3). The same basic form of equation works for any number of resistors connected together in parallel: just add as many 1/R terms in the denominator of the fraction as needed to accommodate all parallel resistors in the circuit. Just as with the series circuit, we can use computer analysis to double-check our calculations. First, of course, we have to describe our example circuit to the computer in terms it can understand.
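The same arithmetic in script form. The branch resistances below (10 kΩ, 2 kΩ, and 1 kΩ) are inferred from the branch currents quoted later in this section, so treat them as an assumption:

```python
# Parallel circuit: 9 V across three branches.
E = 9.0
branches = [10e3, 2e3, 1e3]               # ohms (inferred example values)

I_branch = [E / R for R in branches]      # Ohm's law per branch (same voltage)
I_total = sum(I_branch)                   # rule: branch currents add (14.4 mA)
R_total = E / I_total                     # Ohm's law on totals (625 Ohm)

print([round(i * 1e3, 3) for i in I_branch])   # [0.9, 4.5, 9.0] mA
print(round(R_total, 3))                       # 625.0
print(R_total < min(branches))                 # True: less than any branch
```

The final comparison makes the key point of this section concrete: the total resistance comes out less than even the smallest individual branch resistance.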
I’ll start by re-drawing the circuit:

How to Alter Parallel Circuit Numbering Schemes for SPICE

Once again we find that the original numbering scheme used to identify points in the circuit will have to be altered for the benefit of SPICE. In SPICE, all electrically common points must share identical node numbers. This is how SPICE knows what’s connected to what, and how. In a simple parallel circuit, all points are electrically common in one of two sets of points. For our example circuit, the wire connecting the tops of all the components will have one node number and the wire connecting the bottoms of the components will have the other. Staying true to the convention of including zero as a node number, I choose the numbers 0 and 1: An example like this makes the rationale of node numbers in SPICE fairly clear to understand. By having all components share common sets of numbers, the computer “knows” they’re all connected in parallel with each other. In order to display branch currents in SPICE, we need to insert zero-voltage sources in line (in series) with each resistor, and then reference our current measurements to those sources. For whatever reason, the creators of the SPICE program made it so that current could only be calculated through a voltage source. This is a somewhat annoying demand of the SPICE simulation program. With each of these “dummy” voltage sources added, some new node numbers must be created to connect them to their respective branch resistors.

How to Verify Computer Analysis Results

The dummy voltage sources are all set at 0 volts so as to have no impact on the operation of the circuit. The circuit description file, or netlist, looks like this: Running the computer analysis, we get these results (I’ve annotated the printout with descriptive labels): These values do indeed match those calculated through Ohm’s Law earlier: 0.9 mA for IR1, 4.5 mA for IR2, and 9 mA for IR3.
Being connected in parallel, of course, all resistors have the same voltage dropped across them (9 volts, same as the battery).

Three Rules of Parallel Circuits

In summary, a parallel circuit is defined as one where all components are connected between the same set of electrically common points. Another way of saying this is that all components are connected across each other’s terminals. From this definition, three rules of parallel circuits follow: all components share the same voltage; resistances diminish to equal a smaller, total resistance; and branch currents add to equal a larger, total current. Just as in the case of series circuits, all of these rules find root in the definition of a parallel circuit. If you understand that definition fully, then the rules are nothing more than footnotes to the definition.

Review
• Components in a parallel circuit share the same voltage: ETotal = E1 = E2 = . . . En
• Total resistance in a parallel circuit is less than any of the individual resistances: RTotal = 1 / (1/R1 + 1/R2 + . . . 1/Rn)
• Total current in a parallel circuit is equal to the sum of the individual branch currents: ITotal = I1 + I2 + . . . In
When students first see the parallel resistance equation, the natural question to ask is, “Where did that thing come from?” It is truly an odd piece of arithmetic, and its origin deserves a good explanation.

What is the Difference Between Resistance and Conductance?

Resistance, by definition, is the measure of friction a component presents to the flow of electrons through it. Resistance is symbolized by the capital letter “R” and is measured in the unit of “ohm.” However, we can also think of this electrical property in terms of its inverse: how easy it is for electrons to flow through a component, rather than how difficult. If resistance is the word we use to symbolize the measure of how difficult it is for electrons to flow, then a good word to express how easy it is for electrons to flow would be conductance. Mathematically, conductance is the reciprocal, or inverse, of resistance: G = 1/R. The greater the resistance, the less the conductance, and vice versa. This should make intuitive sense, resistance and conductance being opposite ways to denote the same essential electrical property. If two components’ resistances are compared and it is found that component “A” has one-half the resistance of component “B,” then we could alternatively express this relationship by saying that component “A” is twice as conductive as component “B.” If component “A” has but one-third the resistance of component “B,” then we could say it is three times more conductive than component “B,” and so on. Carrying this idea further, a symbol and unit were created to represent conductance. The symbol is the capital letter “G” and the unit is the mho, which is “ohm” spelled backwards (and you didn’t think electronics engineers had any sense of humor!). Despite its appropriateness, the unit of the mho was replaced in later years by the unit of siemens (abbreviated by the capital letter “S”).
This decision to change unit names is reminiscent of the change from the temperature unit of degrees Centigrade to degrees Celsius, or the change from the unit of frequency c.p.s. (cycles per second) to Hertz. If you’re looking for a pattern here, Siemens, Celsius, and Hertz are all surnames of famous scientists, the names of which, sadly, tell us less about the nature of the units than the units’ original designations. As a footnote, the unit of siemens is never expressed without the last letter “s.” In other words, there is no such thing as a unit of “siemen” as there is in the case of the “ohm” or the “mho.” The reason for this is the proper spelling of the respective scientists’ surnames. The unit for electrical resistance was named after someone named “Ohm,” whereas the unit for electrical conductance was named after someone named “Siemens,” therefore it would be improper to “singularize” the latter unit as its final “s” does not denote plurality. Back to our parallel circuit example, we should be able to see that multiple paths (branches) for current reduces total resistance for the whole circuit, as electrons are able to flow easier through the whole network of multiple branches than through any one of those branch resistances alone. In terms of resistance, additional branches result in a lesser total (current meets with less opposition). 
In terms of conductance, however, additional branches result in a greater total (electrons flow with greater conductance):

Total Parallel Resistance

Total parallel resistance is less than any one of the individual branch resistances because parallel resistors resist less together than they would separately.

Total Parallel Conductance

Total parallel conductance is greater than any of the individual branch conductances because parallel resistors conduct better together than they would separately. To be more precise, the total conductance in a parallel circuit is equal to the sum of the individual conductances: GTotal = G1 + G2 + G3 + . . . Gn. If we know that conductance is nothing more than the mathematical reciprocal (1/x) of resistance, we can translate each term of the above formula into resistance by substituting the reciprocal of each respective conductance: 1/RTotal = 1/R1 + 1/R2 + 1/R3 + . . . 1/Rn. Solving the above equation for total resistance (instead of the reciprocal of total resistance), we can invert (reciprocate) both sides of the equation: RTotal = 1 / (1/R1 + 1/R2 + 1/R3 + . . . 1/Rn). So, we arrive at our cryptic resistance formula at last! Conductance (G) is seldom used as a practical measurement, and so the above formula is a common one to see in the analysis of parallel circuits.

Review
• Conductance is the opposite of resistance: the measure of how easy it is for electrons to flow through something.
• Conductance is symbolized with the letter “G” and is measured in units of mhos or siemens.
• Mathematically, conductance equals the reciprocal of resistance: G = 1/R
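The derivation above can be checked numerically. A sketch using the same three branch resistances as the earlier parallel example (an assumption on my part), computing total resistance both by way of conductances and by the final reciprocal formula:

```python
# Conductance route vs. the reciprocal formula -- both give the same total.
resistors = [10e3, 2e3, 1e3]                  # ohms

G_total = sum(1.0 / R for R in resistors)     # conductances add (siemens)
R_via_G = 1.0 / G_total                       # invert back to resistance
R_via_formula = 1.0 / sum(1.0 / R for R in resistors)

print(round(R_via_G, 3))         # 625.0 Ohm
print(R_via_G == R_via_formula)  # True: it is the same computation
```

The equality at the end is the whole point of the derivation: the “cryptic” formula is nothing but the conductance sum, written entirely in terms of resistance.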
When calculating the power dissipation of resistive components, use any one of the three power equations to derive the answer from values of voltage, current, and/or resistance pertaining to each component: P = IE, P = I²R, or P = E²/R. This is easily managed by adding another row to our familiar table of voltages, currents, and resistances: Power for any particular table column can be found by the appropriate power equation (appropriate based on what figures are present for E, I, and R in that column). An interesting rule for total power versus individual power is that it is additive for any configuration of circuit: series, parallel, series/parallel, or otherwise. Power is a measure of rate of work, and since power dissipated must equal the total power applied by the source(s) (as per the Law of Conservation of Energy in physics), circuit configuration has no effect on the mathematics.

Review
• Power is additive in any configuration of resistive circuit: PTotal = P1 + P2 + . . . Pn

5.06: Correct Use of Ohm’s Law

One of the most common mistakes made by beginning electronics students in their application of Ohm’s Law is mixing the contexts of voltage, current, and resistance. In other words, a student might mistakenly use a value for I through one resistor and the value for E across a set of interconnected resistors, thinking that they’ll arrive at the resistance of that one resistor. Not so! Remember this important rule: The variables used in Ohm’s Law equations must be common to the same two points in the circuit under consideration. I cannot overemphasize this rule. This is especially important in series-parallel combination circuits where nearby components may have different values for both voltage drop and current.
When using Ohm’s Law to calculate a variable pertaining to a single component, be sure the voltage you’re referencing is solely across that single component and the current you’re referencing is solely through that single component and the resistance you’re referencing is solely for that single component. Likewise, when calculating a variable pertaining to a set of components in a circuit, be sure that the voltage, current, and resistance values are specific to that complete set of components only! A good way to remember this is to pay close attention to the two points terminating the component or set of components being analyzed, making sure that the voltage in question is across those two points, that the current in question is the electron flow from one of those points all the way to the other point, that the resistance in question is the equivalent of a single resistor between those two points, and that the power in question is the total power dissipated by all components between those two points. The “table” method presented for both series and parallel circuits in this chapter is a good way to keep the context of Ohm’s Law correct for any kind of circuit configuration. In a table like the one shown below, you are only allowed to apply an Ohm’s Law equation for the values of a single vertical column at a time: Deriving values horizontally across columns is allowable as per the principles of series and parallel circuits: Not only does the “table” method simplify the management of all relevant quantities, it also facilitates cross-checking of answers by making it easy to solve for the original unknown variables through other methods, or by working backwards to solve for the initially given values from your solutions. 
For example, if you have just solved for all unknown voltages, currents, and resistances in a circuit, you can check your work by adding a row at the bottom for power calculations on each resistor, seeing whether or not all the individual power values add up to the total power. If not, then you must have made a mistake somewhere! While this technique of “cross-checking” your work is nothing new, using the table to arrange all the data for the cross-check(s) results in a minimum of confusion.

Review
• Apply Ohm’s Law to vertical columns in the table.
• Apply rules of series/parallel to horizontal rows in the table.
• Check your calculations by working “backwards” to try to arrive at originally given values (from your first calculated answers), or by solving for a quantity using more than one method (from different given values).
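The table method and the power cross-check lend themselves to a short script. Below is a sketch (the column layout and names are my own), using the series example from earlier in this chapter: one dict per vertical column, Ohm’s Law applied only within a column, series rules applied across rows, and a power row summed as the final check.

```python
# "Table method" sketch: one dict per vertical column (R1, R2, R3, Total).
E_T = 9.0
cols = {"R1": {"R": 3e3}, "R2": {"R": 10e3}, "R3": {"R": 5e3}}

R_T = sum(c["R"] for c in cols.values())      # series rule: resistances add
I_T = E_T / R_T                               # Ohm's law in the Total column
for c in cols.values():
    c["I"] = I_T                              # series rule: same current
    c["E"] = c["I"] * c["R"]                  # Ohm's law, column by column
    c["P"] = c["E"] * c["I"]                  # power row for cross-checking

# Cross-check: individual powers must add up to the total power.
P_T = E_T * I_T
assert abs(sum(c["P"] for c in cols.values()) - P_T) < 1e-12
print({k: round(v["P"] * 1e3, 3) for k, v in cols.items()})  # power in mW
```

If the assertion fails, a mistake was made somewhere in the table, which is exactly the cross-check described above.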
The job of a technician frequently entails “troubleshooting” (locating and correcting a problem) in malfunctioning circuits. Good troubleshooting is a demanding and rewarding effort, requiring a thorough understanding of the basic concepts, the ability to formulate hypotheses (proposed explanations of an effect), the ability to judge the value of different hypotheses based on their probability (how likely one particular cause may be over another), and a sense of creativity in applying a solution to rectify the problem. While it is possible to distill these skills into a scientific methodology, most practiced troubleshooters would agree that troubleshooting involves a touch of art, and that it can take years of experience to fully develop this art. An essential skill to have is a ready and intuitive understanding of how component faults affect circuits in different configurations. We will explore some of the effects of component faults in both series and parallel circuits here, then to a greater degree at the end of the “Series-Parallel Combination Circuits” chapter.

Let’s start with a simple series circuit: With all components in this circuit functioning at their proper values, we can mathematically determine all currents and voltage drops: Now let us suppose that R2 fails shorted. Shorted means that the resistor now acts like a straight piece of wire, with little or no resistance. The circuit will behave as though a “jumper” wire were connected across R2 (in case you were wondering, “jumper wire” is a common term for a temporary wire connection in a circuit). What causes the shorted condition of R2 is no matter to us in this example; we only care about its effect upon the circuit: With R2 shorted, either by a jumper wire or by an internal resistor failure, the total circuit resistance will decrease.
Since the voltage output by the battery is a constant (at least in our ideal simulation here), a decrease in total circuit resistance means that total circuit current must increase: As the circuit current increases from 20 milliamps to 60 milliamps, the voltage drops across R1 and R3 (which haven’t changed resistances) increase as well, so that the two resistors are dropping the whole 9 volts. R2, being bypassed by the very low resistance of the jumper wire, is effectively eliminated from the circuit, the resistance from one lead to the other having been reduced to zero. Thus, the voltage drop across R2, even with the increased total current, is zero volts. On the other hand, if R2 were to fail “open”—resistance increasing to nearly infinite levels—it would also create wide-reaching effects in the rest of the circuit: With R2 at infinite resistance and total resistance being the sum of all individual resistances in a series circuit, the total current decreases to zero. With zero circuit current, there is no electron flow to produce voltage drops across R1 or R3. R2, on the other hand, will manifest the full supply voltage across its terminals. We can apply the same before/after analysis technique to parallel circuits as well. First, we determine what a “healthy” parallel circuit should behave like. Supposing that R2 opens in this parallel circuit, here’s what the effects will be: Notice that in this parallel circuit, an open branch only affects the current through that branch and the circuit’s total current. Total voltage—being shared equally across all components in a parallel circuit—will be the same for all resistors. Due to the fact that the voltage source’s tendency is to hold voltage constant, its voltage will not change, and being in parallel with all the resistors, it will hold all the resistors’ voltages the same as they were before: 9 volts.
Being that voltage is the only common parameter in a parallel circuit, and the other resistors haven’t changed resistance value, their respective branch currents remain unchanged. This is what happens in a household lamp circuit: all lamps get their operating voltage from power wiring arranged in a parallel fashion. Turning one lamp on and off (one branch in that parallel circuit closing and opening) doesn’t affect the operation of other lamps in the room, only the current in that one lamp (branch circuit) and the total current powering all the lamps in the room: In an ideal case (with perfect voltage sources and zero-resistance connecting wire), shorted resistors in a simple parallel circuit will also have no effect on what’s happening in other branches of the circuit. In real life, the effect is not quite the same, and we’ll see why in the following example: A shorted resistor (resistance of 0 Ω) would theoretically draw infinite current from any finite source of voltage (I=E/0). In this case, the zero resistance of R2 decreases the circuit total resistance to zero Ω as well, increasing total current to a value of infinity. As long as the voltage source holds steady at 9 volts, however, the other branch currents (IR1 and IR3) will remain unchanged. The critical assumption in this “perfect” scheme, however, is that the voltage supply will hold steady at its rated voltage while supplying an infinite amount of current to a short-circuit load. This is simply not realistic. Even if the short has a small amount of resistance (as opposed to absolutely zero resistance), no real voltage source could arbitrarily supply a huge overload current and maintain steady voltage at the same time. 
This is primarily due to the internal resistance intrinsic to all electrical power sources, stemming from the inescapable physical properties of the materials they’re constructed of: These internal resistances, small as they may be, turn our simple parallel circuit into a series-parallel combination circuit. Usually, the internal resistances of voltage sources are low enough that they can be safely ignored, but when high currents resulting from shorted components are encountered, their effects become very noticeable. In this case, a shorted R2 would result in almost all the voltage being dropped across the internal resistance of the battery, with almost no voltage left over for resistors R1, R2, and R3:

Suffice it to say, an intentional direct short-circuit across the terminals of any voltage source is a bad idea. Even if the resulting high current (heat, flashes, sparks) causes no harm to people nearby, the voltage source will likely sustain damage, unless it has been specifically designed to handle short-circuits, which most voltage sources are not.

Eventually in this book I will lead you through the analysis of circuits without the use of any numbers, that is, analyzing the effects of component failure in a circuit without knowing exactly how many volts the battery produces, how many ohms of resistance is in each resistor, etc. This section serves as an introductory step to that kind of analysis. Whereas the normal application of Ohm’s Law and the rules of series and parallel circuits is performed with numerical quantities (“quantitative”), this new kind of analysis without precise numerical figures is something I like to call qualitative analysis. In other words, we will be analyzing the qualities of the effects in a circuit rather than the precise quantities. The result, for you, will be a much deeper intuitive understanding of electric circuit operation.
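To see how internal resistance spoils the “ideal” picture, we can model the battery as an ideal 9 volt source in series with a small internal resistance. The 0.2 Ω internal resistance and the branch values below are assumed for illustration; the point is only that a shorted branch drags the terminal voltage down toward zero.

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

def terminal_voltage(e_source, r_internal, r_load):
    """The internal resistance and the external load form a voltage divider."""
    return e_source * r_load / (r_internal + r_load)

E, r_int = 9.0, 0.2                        # 0.2 ohm internal resistance (assumed)
healthy = parallel(90.0, 45.0, 180.0)      # about 25.7 ohms of external load
shorted = parallel(90.0, 0.001, 180.0)     # R2 shorted: about 1 milliohm

v_healthy = terminal_voltage(E, r_int, healthy)   # very nearly the full 9 volts
v_shorted = terminal_voltage(E, r_int, shorted)   # collapses to a few hundredths of a volt
```

Almost the entire 9 volts now drops across the battery’s own internal resistance, leaving next to nothing for R1, R2, and R3, just as described above.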
Review

• To determine what would happen in a circuit if a component fails, re-draw that circuit with the equivalent resistance of the failed component in place and re-calculate all values.
• The ability to intuitively determine what will happen to a circuit with any given component fault is a crucial skill for any electronics troubleshooter to develop. The best way to learn is to experiment with circuit calculations and real-life circuits, paying close attention to what changes with a fault, what remains the same, and why!
• A shorted component is one whose resistance has dramatically decreased.
• An open component is one whose resistance has dramatically increased. For the record, resistors tend to fail open more often than fail shorted, and they almost never fail unless physically or electrically overstressed (physically abused or overheated).
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_I_-_Direct_Current_(Kuphaldt)/05%3A_Series_And_Parallel_Circuits/5.07%3A_Component_Failure_Analysis.txt
How to Build a Simple Series Circuit

If all we wish to construct is a simple single-battery, single-resistor circuit, we may easily use alligator clip jumper wires like this: Jumper wires with “alligator” style spring clips at each end provide a safe and convenient method of electrically joining components together. If we wanted to build a simple series circuit with one battery and three resistors, the same “point-to-point” construction technique using jumper wires could be applied:

Using a Solderless Breadboard for More Complex Circuits

This technique, however, proves impractical for circuits much more complex than this, due to the awkwardness of the jumper wires and the physical fragility of their connections. A more common method of temporary construction for the hobbyist is the solderless breadboard, a device made of plastic with hundreds of spring-loaded connection sockets joining the inserted ends of components and/or 22-gauge solid wire pieces. A photograph of a real breadboard is shown here, followed by an illustration showing a simple series circuit constructed on one: Underneath each hole in the breadboard face is a metal spring clip, designed to grasp any inserted wire or component lead. These metal spring clips are joined underneath the breadboard face, making connections between inserted leads. The connection pattern joins every five holes along a vertical column (as shown with the long axis of the breadboard situated horizontally): Thus, when a wire or component lead is inserted into a hole on the breadboard, there are four more holes in that column providing potential connection points to other wires and/or component leads. The result is an extremely flexible platform for constructing temporary circuits. For example, the three-resistor circuit just shown could also be built on a breadboard like this: A parallel circuit is also easy to construct on a solderless breadboard: Breadboards have their limitations, though.
First and foremost, they are intended for temporary construction only. If you pick up a breadboard, turn it upside-down, and shake it, any components plugged into it are sure to loosen, and may fall out of their respective holes. Also, breadboards are limited to fairly low-current (less than 1 amp) circuits. Those spring clips have a small contact area, and thus cannot support high currents without excessive heating.

Soldering or Wire-Wrapping

For greater permanence, one might wish to choose soldering or wire-wrapping. These techniques involve fastening the components and wires to some structure providing a secure mechanical location (such as a phenolic or fiberglass board with holes drilled in it, much like a breadboard without the intrinsic spring-clip connections), and then attaching wires to the secured component leads. Soldering is a form of low-temperature welding, using a tin/lead or tin/silver alloy that melts to and electrically bonds copper objects. Wire ends soldered to component leads or to small, copper ring “pads” bonded on the surface of the circuit board serve to connect the components together. In wire wrapping, a small-gauge wire is tightly wrapped around component leads rather than soldered to leads or copper pads, the tension of the wrapped wire providing a sound mechanical and electrical junction to connect components together.

An example of a printed circuit board, or PCB, intended for hobbyist use is shown in this photograph: This board appears copper-side-up: the side where all the soldering is done. Each hole is ringed with a small layer of copper metal for bonding to the solder. All holes are independent of each other on this particular board, unlike the holes on a solderless breadboard which are connected together in groups of five. Printed circuit boards with the same 5-hole connection pattern as breadboards can be purchased and used for hobby circuit construction, though.
Printed Circuit Boards (PCBs)

Production printed circuit boards have traces of copper laid down on the phenolic or fiberglass substrate material to form pre-engineered connection pathways which function as wires in a circuit. An example of such a board is shown here, this unit actually being a “power supply” circuit designed to take 120 volt alternating current (AC) power from a household wall socket and transform it into low-voltage direct current (DC). A resistor appears on this board, the fifth component counting up from the bottom, located in the middle-right area of the board. A view of this board’s underside reveals the copper “traces” connecting components together, as well as the silver-colored deposits of solder bonding the component leads to those traces:

A soldered or wire-wrapped circuit is considered permanent: that is, it is unlikely to fall apart accidentally. However, these construction techniques are sometimes considered too permanent. If anyone wishes to replace a component or change the circuit in any substantial way, they must invest a fair amount of time undoing the connections. Also, both soldering and wire-wrapping require specialized tools which may not be immediately available.

Terminal Strips

An alternative construction technique used throughout the industrial world is that of the terminal strip. Terminal strips, alternatively called barrier strips or terminal blocks, consist of a length of nonconducting material with several small bars of metal embedded within. Each metal bar has at least one machine screw or other fastener under which a wire or component lead may be secured. Multiple wires fastened by one screw are made electrically common to each other, as are wires fastened to multiple screws on the same bar. The following photograph shows one style of terminal strip, with a few wires attached. Another, smaller terminal strip is shown in this next photograph.
This type, sometimes referred to as a “European” style, has recessed screws to help prevent accidental shorting between terminals by a screwdriver or other metal object: In the following illustration, a single-battery, three-resistor circuit is shown constructed on a terminal strip: If the terminal strip uses machine screws to hold the component and wire ends, nothing but a screwdriver is needed to secure new connections or break old connections. Some terminal strips use spring-loaded clips—similar to a breadboard’s except for increased ruggedness—engaged and disengaged using a screwdriver as a push tool (no twisting involved). The electrical connections established by a terminal strip are quite robust and are considered suitable for both permanent and temporary construction.

Translating a Schematic Diagram to a Circuit Layout

One of the essential skills for anyone interested in electricity and electronics is to be able to “translate” a schematic diagram to a real circuit layout where the components may not be oriented the same way. Schematic diagrams are usually drawn for maximum readability (excepting those few noteworthy examples sketched to create maximum confusion!), but practical circuit construction often demands a different component orientation. Building simple circuits on terminal strips is one way to develop the spatial-reasoning skill of “stretching” wires to make the same connection paths. Consider the case of a single-battery, three-resistor parallel circuit constructed on a terminal strip: Progressing from a nice, neat schematic diagram to the real circuit—especially when the resistors to be connected are physically arranged in a linear fashion on the terminal strip—is not obvious to many, so I’ll outline the process step-by-step.
First, start with the clean schematic diagram and all components secured to the terminal strip, with no connecting wires: Next, trace the wire connection from one side of the battery to the first component in the schematic, securing a connecting wire between the same two points on the real circuit. I find it helpful to over-draw the schematic’s wire with another line to indicate what connections I’ve made in real life: Continue this process, wire by wire, until all connections in the schematic diagram have been accounted for. It might be helpful to regard common wires in a SPICE-like fashion: make all connections to a common wire in the circuit as one step, making sure each and every component with a connection to that wire actually has a connection to that wire before proceeding to the next. For the next step, I’ll show how the top sides of the remaining two resistors are connected together, being common with the wire secured in the previous step: With the top sides of all resistors (as shown in the schematic) connected together, and to the battery’s positive (+) terminal, all we have to do now is connect the bottom sides together and to the other side of the battery: Typically in industry, all wires are labeled with number tags, and electrically common wires bear the same tag number, just as they do in a SPICE simulation. In this case, we could label the wires 1 and 2: Another industrial convention is to modify the schematic diagram slightly so as to indicate actual wire connection points on the terminal strip. This demands a labeling system for the strip itself: a “TB” number (terminal block number) for the strip, followed by another number representing each metal bar on the strip. This way, the schematic may be used as a “map” to locate points in a real circuit, regardless of how tangled and complex the connecting wiring may appear to the eyes. 
This may seem excessive for the simple, three-resistor circuit shown here, but such detail is absolutely necessary for construction and maintenance of large circuits, especially when those circuits may span a great physical distance, using more than one terminal strip located in more than one panel or box.

Review

• A solderless breadboard is a device used to quickly assemble temporary circuits by plugging wires and components into electrically common spring-clips arranged underneath rows of holes in a plastic board.
• Soldering is a low-temperature welding process utilizing a lead/tin or tin/silver alloy to bond wires and component leads together, usually with the components secured to a fiberglass board.
• Wire-wrapping is an alternative to soldering, involving small-gauge wire tightly wrapped around component leads rather than a welded joint to connect components together.
• A terminal strip, also known as a barrier strip or terminal block, is another device used to mount components and wires to build circuits. Screw terminals or heavy spring clips attached to metal bars provide connection points for the wire ends and component leads, these metal bars mounted separately to a piece of nonconducting material such as plastic, bakelite, or ceramic.
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_I_-_Direct_Current_(Kuphaldt)/05%3A_Series_And_Parallel_Circuits/5.08%3A_Building_Simple_Resistor_Circuits.txt
Let’s analyze a simple series circuit, determining the voltage drops across individual resistors:

Determine the Total Circuit Resistance

From the given values of individual resistances, we can determine a total circuit resistance, knowing that resistances add in series:

Use Ohm’s Law to Calculate Electron Flow

From here, we can use Ohm’s Law (I=E/R) to determine the total current, which we know will be the same as each resistor current, currents being equal in all parts of a series circuit: Now, knowing that the circuit current is 2 mA, we can use Ohm’s Law (E=IR) to calculate voltage across each resistor: It should be apparent that the voltage drop across each resistor is proportional to its resistance, given that the current is the same through all resistors. Notice how the voltage across R2 is double that of the voltage across R1, just as the resistance of R2 is double that of R1.

If we were to change the total voltage, we would find this proportionality of voltage drops remains constant: The voltage across R2 is still exactly twice that of R1’s drop, despite the fact that the source voltage has changed. The proportionality of voltage drops (ratio of one to another) is strictly a function of resistance values. With a little more observation, it becomes apparent that the voltage drop across each resistor is also a fixed proportion of the supply voltage. The voltage across R1, for example, was 10 volts when the battery supply was 45 volts. When the battery voltage was increased to 180 volts (4 times as much), the voltage drop across R1 also increased by a factor of 4 (from 10 to 40 volts). The ratio between R1’s voltage drop and total voltage, however, did not change: Likewise, none of the other voltage drop ratios changed with the increased supply voltage either:

Voltage Divider Formula

For this reason a series circuit is often called a voltage divider for its ability to proportion—or divide—the total voltage into fractional portions of constant ratio.
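The calculations above can be sketched in a few lines of Python. The resistor values (R1 = 5 kΩ, R2 = 10 kΩ, R3 = 7.5 kΩ) are assumed, chosen to be consistent with the 45 volt supply, the 2 mA series current, and the 10 volt drop across R1 mentioned above.

```python
def divider_drop(e_total, r_n, resistances):
    """Voltage divider formula: E_Rn = E_Total * (R_n / R_Total)."""
    return e_total * r_n / sum(resistances)

rs = [5000.0, 10000.0, 7500.0]   # R1, R2, R3 in ohms (assumed values)
E = 45.0

i = E / sum(rs)                               # series current: 2 mA everywhere
drops = [divider_drop(E, r, rs) for r in rs]  # 10 V, 20 V, 15 V

# Raise the supply to 180 V and every drop scales by 4, but the ratio of
# each drop to the total voltage stays fixed:
drops_hi = [divider_drop(180.0, r, rs) for r in rs]  # 40 V, 80 V, 60 V
```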
With a little bit of algebra, we can derive a formula for determining series resistor voltage drop given nothing more than total voltage, individual resistance, and total resistance: The ratio of individual resistance to total resistance is the same as the ratio of individual voltage drop to total supply voltage in a voltage divider circuit. This is known as the voltage divider formula, and it is a short-cut method for determining voltage drop in a series circuit without going through the current calculation(s) of Ohm’s Law. Using this formula, we can re-analyze the example circuit’s voltage drops in fewer steps:

Voltage dividers find wide application in electric meter circuits, where specific combinations of series resistors are used to “divide” a voltage into precise proportions as part of a voltage measurement device. One device frequently used as a voltage-dividing component is the potentiometer, which is a resistor with a movable element positioned by a manual knob or lever. The movable element, typically called a wiper, makes contact with a resistive strip of material (commonly called the slidewire if made of resistive metal wire) at any point selected by the manual control: The wiper contact is the left-facing arrow symbol drawn in the middle of the vertical resistor element. As it is moved up, it contacts the resistive strip closer to terminal 1 and further away from terminal 2, lowering resistance to terminal 1 and raising resistance to terminal 2. As it is moved down, the opposite effect results. The resistance as measured between terminals 1 and 2 is constant for any wiper position.

Rotary vs. Linear Potentiometers

Shown here are internal illustrations of two potentiometer types, rotary and linear: Some linear potentiometers are actuated by straight-line motion of a lever or slide button. Others, like the one depicted in the previous illustration, are actuated by a turn-screw for fine adjustment ability.
The latter units are sometimes referred to as trimpots, because they work well for applications requiring a variable resistance to be “trimmed” to some precise value. It should be noted that not all linear potentiometers have the same terminal assignments as shown in this illustration. With some, the wiper terminal is in the middle, between the two end terminals. The following photograph shows a real, rotary potentiometer with exposed wiper and slidewire for easy viewing. The shaft which moves the wiper has been turned almost fully clockwise so that the wiper is nearly touching the left terminal end of the slidewire: Here is the same potentiometer with the wiper shaft moved almost to the full-counterclockwise position, so that the wiper is near the other extreme end of travel: If a constant voltage is applied between the outer terminals (across the length of the slidewire), the wiper position will tap off a fraction of the applied voltage, measurable between the wiper contact and either of the other two terminals. The fractional value depends entirely on the physical position of the wiper:

The Importance of Potentiometer Application

Just like the fixed voltage divider, the potentiometer’s voltage division ratio is strictly a function of resistance and not of the magnitude of applied voltage. In other words, if the potentiometer knob or lever is moved to the 50 percent (exact center) position, the voltage dropped between wiper and either outside terminal would be exactly 1/2 of the applied voltage, no matter what that voltage happens to be, or what the end-to-end resistance of the potentiometer is. In other words, a potentiometer functions as a variable voltage divider where the voltage division ratio is set by wiper position. This application of the potentiometer is a very useful means of obtaining a variable voltage from a fixed-voltage source such as a battery.
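Since the potentiometer’s output depends only on the wiper’s fractional position, not on the end-to-end resistance or on the applied voltage, it can be modeled in one line. This is an idealized sketch (it assumes a perfectly linear taper):

```python
def pot_output(e_applied, position):
    """Voltage between the wiper and the 'bottom' terminal for a wiper at
    `position` (0.0 to 1.0 of full travel). The end-to-end resistance cancels
    out of the divider ratio, so it never appears here."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("wiper position must be between 0 and 1")
    return e_applied * position

# Half travel always yields half the applied voltage, whatever it is:
print(pot_output(6.0, 0.5))    # 3.0
print(pot_output(12.0, 0.5))   # 6.0
```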
If a circuit you’re building requires a certain amount of voltage that is less than the value of an available battery’s voltage, you may connect the outer terminals of a potentiometer across that battery and “dial up” whatever voltage you need between the potentiometer wiper and one of the outer terminals for use in your circuit: When used in this manner, the name potentiometer makes perfect sense: they meter (control) the potential (voltage) applied across them by creating a variable voltage-divider ratio. This use of the three-terminal potentiometer as a variable voltage divider is very popular in circuit design.

Shown here are several small potentiometers of the kind commonly used in consumer electronic equipment and by hobbyists and students in constructing circuits: The smaller units on the very left and very right are designed to plug into a solderless breadboard or be soldered into a printed circuit board. The middle units are designed to be mounted on a flat panel with wires soldered to each of the three terminals. Here are three more potentiometers, more specialized than the set just shown: The large “Helipot” unit is a laboratory potentiometer designed for quick and easy connection to a circuit. The unit in the lower-left corner of the photograph is the same type of potentiometer, just without a case or 10-turn counting dial. Both of these potentiometers are precision units, using multi-turn helical-track resistance strips and wiper mechanisms for making small adjustments. The unit on the lower-right is a panel-mount potentiometer, designed for rough service in industrial applications.

Review

• Series circuits proportion, or divide, the total supply voltage among individual voltage drops, the proportions being strictly dependent upon resistances: ERn = ETotal (Rn / RTotal)
• A potentiometer is a variable-resistance component with three connection points, frequently used as an adjustable voltage divider.
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_I_-_Direct_Current_(Kuphaldt)/06%3A_Divider_Circuits_and_Kirchhoff's_Laws/6.01%3A_Voltage_Divider_Circuits.txt
What is Kirchhoff’s Voltage Law (KVL)?

The principle known as Kirchhoff’s Voltage Law (discovered in 1847 by Gustav R. Kirchhoff, a German physicist) can be stated as such:

“The algebraic sum of all voltages in a loop must equal zero”

By algebraic, I mean accounting for signs (polarities) as well as magnitudes. By loop, I mean any path traced from one point in a circuit around to other points in that circuit, and finally back to the initial point.

Demonstrating Kirchhoff’s Voltage Law in a Series Circuit

Let’s take another look at our series circuit from the previous page as an example, this time numbering the points in the circuit for voltage reference: If we were to connect a voltmeter between points 2 and 1, red test lead to point 2 and black test lead to point 1, the meter would register +45 volts. Typically the “+” sign is not shown, but rather implied, for positive readings in digital meter displays. However, for this lesson the polarity of the voltage reading is very important and so I will show positive numbers explicitly: When a voltage is specified with a double subscript (the characters “2-1” in the notation “E2-1”), it means the voltage at the first point (2) as measured in reference to the second point (1). A voltage specified as “Ecd” would mean the voltage as indicated by a digital meter with the red test lead on point “c” and the black test lead on point “d”: the voltage at “c” in reference to “d”.
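The double-subscript convention is easy to mimic in code: if each point’s potential is known relative to some common reference, “E red-black” is just the difference of the two potentials. The node potentials below are assumed values consistent with the 45 volt example above (point 1 taken as the reference).

```python
# Assumed node potentials, relative to point 1, for the 45 V series circuit.
potentials = {1: 0.0, 2: 45.0, 3: 35.0, 4: 15.0}

def E(red, black):
    """Meter reading with the red test lead on `red` and black on `black`."""
    return potentials[red] - potentials[black]

print(E(2, 1))   # +45.0, as the meter registers between points 2 and 1
print(E(1, 2))   # -45.0: swapping the test leads simply flips the sign
```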
If we were to take that same voltmeter and measure the voltage drop across each resistor, stepping around the circuit in a clockwise direction with the red test lead of our meter on the point ahead and the black test lead on the point behind, we would obtain the following readings: We should already be familiar with the general principle for series circuits stating that individual voltage drops add up to the total applied voltage, but measuring voltage drops in this manner and paying attention to the polarity (mathematical sign) of the readings reveals another facet of this principle: that the voltages measured as such all add up to zero: In the above example, the loop was formed by following points in this order: 1-2-3-4-1. It doesn’t matter which point we start at or which direction we proceed in tracing the loop; the voltage sum will still equal zero. To demonstrate, we can tally up the voltages in loop 3-2-1-4-3 of the same circuit: This may make more sense if we re-draw our example series circuit so that all components are represented in a straight line: It’s still the same series circuit, just with the components arranged in a different form. Notice the polarities of the resistor voltage drops with respect to the battery: the battery’s voltage is negative on the left and positive on the right, whereas all the resistor voltage drops are oriented the other way: positive on the left and negative on the right. This is because the resistors are resisting the flow of electrons being pushed by the battery. In other words, the “push” exerted by the resistors against the flow of electrons must be in a direction opposite the source of electromotive force. 
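Stepping around a loop and summing meter readings can be automated. The node potentials below are assumed values consistent with the circuit’s 45 volt supply and its 10, 20, and 15 volt resistor drops; any closed loop sums to zero, from any starting point and in either direction.

```python
# Assumed node potentials for the series circuit (point 1 as reference).
potentials = {1: 0.0, 2: 45.0, 3: 35.0, 4: 15.0}

def loop_sum(points):
    """Sum of meter readings stepping point-to-point around a loop, red lead
    on the point ahead and black lead on the point behind."""
    total = 0.0
    for behind, ahead in zip(points, points[1:]):
        total += potentials[ahead] - potentials[behind]
    return total

# Any loop that returns to its starting point sums to zero:
print(loop_sum([1, 2, 3, 4, 1]))   # 0.0
print(loop_sum([3, 2, 1, 4, 3]))   # 0.0, same circuit traced the other way
```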
Here we see what a digital voltmeter would indicate across each component in this circuit, black lead on the left and red lead on the right, as laid out in horizontal fashion: If we were to take that same voltmeter and read voltage across combinations of components, starting with only R1 on the left and progressing across the whole string of components, we will see how the voltages add algebraically (to zero): The fact that series voltages add up should be no mystery, but we notice that the polarity of these voltages makes a lot of difference in how the figures add. While reading voltage across R1, R1—R2, and R1—R2—R3 (I’m using a “double-dash” symbol “—” to represent the series connection between resistors R1, R2, and R3), we see how the voltages measure successively larger (albeit negative) magnitudes, because the polarities of the individual voltage drops are in the same orientation (positive left, negative right). The sum of the voltage drops across R1, R2, and R3 equals 45 volts, which is the same as the battery’s output, except that the battery’s polarity is opposite that of the resistor voltage drops (negative left, positive right), so we end up with 0 volts measured across the whole string of components.

That we should end up with exactly 0 volts across the whole string should be no mystery, either. Looking at the circuit, we can see that the far left of the string (left side of R1: point number 2) is directly connected to the far right of the string (right side of battery: point number 2), as necessary to complete the circuit. Since these two points are directly connected, they are electrically common to each other. And, as such, the voltage between those two electrically common points must be zero.

Demonstrating Kirchhoff’s Voltage Law in a Parallel Circuit

Kirchhoff’s Voltage Law (sometimes denoted as KVL for short) will work for any circuit configuration at all, not just simple series.
Note how it works for this parallel circuit: Being a parallel circuit, the voltage across every resistor is the same as the supply voltage: 6 volts. Tallying up voltages around loop 2-3-4-5-6-7-2, we get: Note how I label the final (sum) voltage as E2-2. Since we began our loop-stepping sequence at point 2 and ended at point 2, the algebraic sum of those voltages will be the same as the voltage measured between the same point (E2-2), which of course must be zero.

The Validity of Kirchhoff’s Voltage Law, Regardless of Circuit Topology

The fact that this circuit is parallel instead of series has nothing to do with the validity of Kirchhoff’s Voltage Law. For that matter, the circuit could be a “black box”—its component configuration completely hidden from our view, with only a set of exposed terminals for us to measure voltage between—and KVL would still hold true: Try any order of steps from any terminal in the above diagram, stepping around back to the original terminal, and you’ll find that the algebraic sum of the voltages always equals zero. Furthermore, the “loop” we trace for KVL doesn’t even have to be a real current path in the closed-circuit sense of the word. All we have to do to comply with KVL is to begin and end at the same point in the circuit, tallying voltage drops and polarities as we go between the next and the last point. Consider this absurd example, tracing “loop” 2-3-6-3-2 in the same parallel resistor circuit:

KVL can be used to determine an unknown voltage in a complex circuit, where all other voltages around a particular “loop” are known. Take the following complex circuit (actually two series circuits joined by a single wire at the bottom) as an example: To make the problem simpler, I’ve omitted resistance values and simply given voltage drops across each resistor. The two series circuits share a common wire between them (wire 7-8-9-10), making voltage measurements between the two circuits possible.
If we wanted to determine the voltage between points 4 and 3, we could set up a KVL equation with the voltage between those points as the unknown: Stepping around the loop 3-4-9-8-3, we write the voltage drop figures as a digital voltmeter would register them, measuring with the red test lead on the point ahead and black test lead on the point behind as we progress around the loop. Therefore, the voltage from point 9 to point 4 is a positive (+) 12 volts because the “red lead” is on point 9 and the “black lead” is on point 4. The voltage from point 3 to point 8 is a positive (+) 20 volts because the “red lead” is on point 3 and the “black lead” is on point 8. The voltage from point 8 to point 9 is zero, of course, because those two points are electrically common. Our final answer for the voltage from point 4 to point 3 is a negative (-) 32 volts, telling us that point 3 is actually positive with respect to point 4, precisely what a digital voltmeter would indicate with the red lead on point 4 and the black lead on point 3: In other words, the initial placement of our “meter leads” in this KVL problem was “backwards.” Had we generated our KVL equation starting with E3-4 instead of E4-3, stepping around the same loop with the opposite meter lead orientation, the final answer would have been E3-4 = +32 volts: It is important to realize that neither approach is “wrong.” In both cases, we arrive at the correct assessment of voltage between the two points, 3 and 4: point 3 is positive with respect to point 4, and the voltage between them is 32 volts.

Review

• Kirchhoff’s Voltage Law (KVL): “The algebraic sum of all voltages in a loop must equal zero”
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_I_-_Direct_Current_(Kuphaldt)/06%3A_Divider_Circuits_and_Kirchhoff's_Laws/6.02%3A_Kirchhoffs_Voltage_Law_%28KVL%29.txt
A parallel circuit is often called a current divider for its ability to proportion—or divide—the total current into fractional parts. To understand what this means, let’s first analyze a simple parallel circuit, determining the branch currents through individual resistors: Knowing that voltages across all components in a parallel circuit are the same, we can fill in our voltage/current/resistance table with 6 volts across the top row: Using Ohm’s Law (I=E/R) we can calculate each branch current: Knowing that branch currents add up in parallel circuits to equal the total current, we can arrive at total current by summing 6 mA, 2 mA, and 3 mA: The final step, of course, is to figure total resistance. This can be done with Ohm’s Law (R=E/I) in the “total” column, or with the parallel resistance formula from individual resistances. Either way, we’ll get the same answer:

Once again, it should be apparent that the current through each resistor is related to its resistance, given that the voltage across all resistors is the same. Rather than being directly proportional, the relationship here is one of inverse proportion. For example, the current through R1 is twice as much as the current through R3, which has twice the resistance of R1. If we were to change the supply voltage of this circuit, we find that (surprise!) these proportional ratios do not change: The current through R1 is still exactly twice that of R3, despite the fact that the source voltage has changed. The proportionality between different branch currents is strictly a function of resistance. Also reminiscent of voltage dividers is the fact that branch currents are fixed proportions of the total current.
Despite the fourfold increase in supply voltage, the ratio between any branch current and the total current remains unchanged: Now we can see for ourselves the point we made at the beginning of this page: A parallel circuit is often called a current divider for its ability to proportion—or divide—the total current into fractional parts. The Current Divider Formula With a little bit of algebra, we can derive a formula for determining parallel resistor current given nothing more than total current, individual resistance, and total resistance: The ratio of total resistance to individual resistance is the same ratio as individual (branch) current to total current. This is known as the current divider formula and it is a short-cut method for determining branch currents in a parallel circuit when the total current is known. Current Divider Formula Example Using the original parallel circuit as an example, we can re-calculate the branch currents using this formula, if we start by knowing the total current and total resistance: If you take the time to compare the two divider formulae, you’ll see that they are remarkably similar. Notice, however, that the ratio in the voltage divider formula is Rn (individual resistance) divided by RTotal, and how the ratio in the current divider formula is RTotal divided by Rn: Current Divider Formula vs. Voltage Divider Formula It is quite easy to confuse these two equations, getting the resistance ratios backwards. One way to help remember the proper form is to keep in mind that both ratios in the voltage and current divider equations must be less than one. After all, these are divider equations, not multiplier equations! If the fraction is upside-down, it will provide a ratio greater than one, which is incorrect. Knowing that total resistance in a series (voltage divider) circuit is always greater than any of the individual resistances, we know that the fraction for that formula must be Rn over RTotal. 
Conversely, knowing that total resistance in a parallel (current divider) circuit is always less than any of the individual resistances, we know that the fraction for that formula must be RTotal over Rn. Current Divider Circuit Example Application: Electric Meter Circuit Current divider circuits find application in electric meter circuits, where a fraction of a measured current must be routed through a sensitive detection device. Using the current divider formula, the proper shunt resistor can be sized to proportion just the right amount of current for the device in any given instance: Current Divider Circuit Review: • Parallel circuits proportion, or “divide,” the total circuit current among individual branch currents, the proportions being strictly dependent upon resistances: In = ITotal (RTotal / Rn)
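A minimal sketch of the current divider formula, using the same assumed resistances as in the branch-current example (1 kΩ, 3 kΩ, 2 kΩ) and the 11 mA total current. Note that the ratio RTotal/Rn is always less than one, as the mnemonic above requires:

```python
# Current divider formula: In = I_total * (R_total / Rn).
# Resistances are assumed; R_total comes from the reciprocal parallel formula.
I_total = 11e-3                                 # amps (known total current)
R = {"R1": 1e3, "R2": 3e3, "R3": 2e3}           # ohms
R_total = 1 / sum(1 / r for r in R.values())    # ~545.45 ohms

branch = {name: I_total * (R_total / r) for name, r in R.items()}
# → R1: 6 mA, R2: 2 mA, R3: 3 mA — matching the Ohm's Law results
```

The same dictionary comprehension with Rn/RTotal instead would be the voltage divider ratio, which is exactly the confusion the text warns against.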
What Is Kirchhoff’s Current Law? Kirchhoff’s Current Law, often shortened to KCL, states that “The algebraic sum of all currents entering and exiting a node must equal zero.” This law is used to describe how a charge enters and leaves a wire junction point or node on a wire. Armed with this information, let’s now take a look at an example of the law in practice, why it’s important, and how it was derived. Parallel Circuit Review Let’s take a closer look at that last parallel example circuit: Solving for all values of voltage and current in this circuit: At this point, we know the value of each branch current and of the total current in the circuit. We know that the total current in a parallel circuit must equal the sum of the branch currents, but there’s more going on in this circuit than just that. Taking a look at the currents at each wire junction point (node) in the circuit, we should be able to see something else: Currents Entering and Exiting a Node At each node on the negative “rail” (wire 8-7-6-5) we have current splitting off the main flow to each successive branch resistor. At each node on the positive “rail” (wire 1-2-3-4) we have current merging together to form the main flow from each successive branch resistor. This fact should be fairly obvious if you think of the water pipe circuit analogy with every branch node acting as a “tee” fitting, the water flow splitting or merging with the main piping as it travels from the output of the water pump toward the return reservoir or sump. If we were to take a closer look at one particular “tee” node, such as node 3, we see that the current entering the node is equal in magnitude to the current exiting the node: From the right and from the bottom, we have two currents entering the wire connection labeled as node 3. To the left, we have a single current exiting the node equal in magnitude to the sum of the two currents entering. 
To refer to the plumbing analogy: so long as there are no leaks in the piping, what flow enters the fitting must also exit the fitting. This holds true for any node (“fitting”), no matter how many flows are entering or exiting. Mathematically, we can express this general relationship as such: Kirchhoff’s Current Law Mr. Kirchhoff decided to express the above equation in a slightly different form (though mathematically equivalent), calling it Kirchhoff’s Current Law (KCL): Summarized in a phrase, Kirchhoff’s Current Law reads as such: “The algebraic sum of all currents entering and exiting a node must equal zero.” That is, if we assign a mathematical sign (polarity) to each current, denoting whether they enter (+) or exit (-) a node, we can add them together to arrive at a total of zero, guaranteed. Taking our example node (number 3), we can determine the magnitude of the current exiting from the left by setting up a KCL equation with that current as the unknown value: The negative (-) sign on the value of 5 milliamps tells us that the current is exiting the node, as opposed to the 2 milliamp and 3 milliamp currents, which must both be positive (and therefore entering the node). Whether negative or positive denotes current entering or exiting is entirely arbitrary, so long as they are opposite signs for opposite directions and we stay consistent in our notation, KCL will work. Together, Kirchhoff’s Voltage and Current Laws are a formidable pair of tools useful in analyzing electric circuits. Their usefulness will become all the more apparent in a later chapter (“Network Analysis”), but suffice it to say that these Laws deserve to be memorized by the electronics student every bit as much as Ohm’s Law. REVIEW • Kirchhoff’s Current Law (KCL): “The algebraic sum of all currents entering and exiting a node must equal zero”
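The KCL equation for node 3 can be sketched directly, using the 2 mA and 3 mA figures from the text:

```python
# KCL at node 3: assign + to currents entering, - to currents exiting;
# the algebraic sum must equal zero.  Known figures from the text:
I_in_right  = 2e-3    # 2 mA entering from the right
I_in_bottom = 3e-3    # 3 mA entering from the bottom

# I_in_right + I_in_bottom + I_exit = 0  =>  solve for the unknown
I_exit = -(I_in_right + I_in_bottom)
print(f"{I_exit*1e3:.0f} mA")   # the minus sign means this current exits the node
```

Flipping the sign convention (entering negative, exiting positive) gives the same magnitude with opposite signs throughout, which is why the text stresses only consistency, not the choice itself.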
With simple series circuits, all components are connected end-to-end to form only one path for electrons to flow through the circuit: With simple parallel circuits, all components are connected between the same two sets of electrically common points, creating multiple paths for electrons to flow from one end of the battery to the other: With each of these two basic circuit configurations, we have specific sets of rules describing voltage, current, and resistance relationships.
• Series Circuits:
• Voltage drops add to equal total voltage.
• All components share the same (equal) current.
• Resistances add to equal total resistance.
• Parallel Circuits:
• All components share the same (equal) voltage.
• Branch currents add to equal total current.
• Resistances diminish to equal total resistance.
However, if circuit components are series-connected in some parts and parallel in others, we won’t be able to apply a single set of rules to every part of that circuit. Instead, we will have to identify which parts of that circuit are series and which parts are parallel, then selectively apply series and parallel rules as necessary to determine what is happening. Take the following circuit, for instance: This circuit is neither simple series nor simple parallel. Rather, it contains elements of both. The current exits the bottom of the battery, splits up to travel through R3 and R4, rejoins, then splits up again to travel through R1 and R2, then rejoins again to return to the top of the battery. There exists more than one path for current to travel (not series), yet there are more than two sets of electrically common points in the circuit (not parallel). Because the circuit is a combination of both series and parallel, we cannot apply the rules for voltage, current, and resistance “across the table” to begin analysis like we could when the circuits were one way or the other. 
For instance, if the above circuit were simple series, we could just add up R1 through R4 to arrive at a total resistance, solve for total current, and then solve for all voltage drops. Likewise, if the above circuit were simple parallel, we could just solve for branch currents, add up branch currents to figure the total current, and then calculate total resistance from total voltage and total current. However, this circuit’s solution will be more complex. The table will still help us manage the different values for series-parallel combination circuits, but we’ll have to be careful how and where we apply the different rules for series and parallel. Ohm’s Law, of course, still works just the same for determining values within a vertical column in the table. If we are able to identify which parts of the circuit are series and which parts are parallel, we can analyze it in stages, approaching each part one at a time, using the appropriate rules to determine the relationships of voltage, current, and resistance. The rest of this chapter will be devoted to showing you techniques for doing this. Review • The rules of series and parallel circuits must be applied selectively to circuits containing both types of interconnections.
Process of Series-Parallel Resistor Circuit Analysis The goal of series-parallel resistor circuit analysis is to be able to determine all voltage drops, currents, and power dissipations in a circuit. The general strategy to accomplish this goal is as follows:
• Step 1: Assess which resistors in a circuit are connected together in simple series or simple parallel.
• Step 2: Re-draw the circuit, replacing each of those series or parallel resistor combinations identified in step 1 with a single, equivalent-value resistor. If using a table to manage variables, make a new table column for each resistance equivalent.
• Step 3: Repeat steps 1 and 2 until the entire circuit is reduced to one equivalent resistor.
• Step 4: Calculate total current from total voltage and total resistance (I=E/R).
• Step 5: Taking total voltage and total current values, go back to the last step in the circuit reduction process and insert those values where applicable.
• Step 6: From known resistances and total voltage / total current values from step 5, use Ohm’s Law to calculate unknown values (voltage or current) (E=IR or I=E/R).
• Step 7: Repeat steps 5 and 6 until all values for voltage and current are known in the original circuit configuration. Essentially, you will proceed step-by-step from the simplified version of the circuit back into its original, complex form, plugging in values of voltage and current where appropriate until all values of voltage and current are known.
• Step 8: Calculate power dissipations from known voltage, current, and/or resistance values.
This may sound like an intimidating process, but it’s much more easily understood through example than through description. In the example circuit above, R1 and R2 are connected in a simple parallel arrangement, as are R3 and R4. 
Having been identified, these sections need to be converted into equivalent single resistors, and the circuit re-drawn: The double slash (//) symbols represent “parallel” to show that the equivalent resistor values were calculated using the reciprocal formula 1/(1/R1 + 1/R2). The 71.429 Ω resistor at the top of the circuit is the equivalent of R1 and R2 in parallel with each other. The 127.27 Ω resistor at the bottom is the equivalent of R3 and R4 in parallel with each other. Our table can be expanded to include these resistor equivalents in their own columns: It should be apparent now that the circuit has been reduced to a simple series configuration with only two (equivalent) resistances. The final step in reduction is to add these two resistances to come up with a total circuit resistance. When we add those two equivalent resistances, we get a resistance of 198.70 Ω. Now, we can re-draw the circuit as a single equivalent resistance and add the total resistance figure to the rightmost column of our table. Note that the “Total” column has been relabeled (R1//R2—R3//R4) to indicate how it relates electrically to the other columns of figures. The “—” symbol is used here to represent “series,” just as the “//” symbol is used to represent “parallel.” Now, total circuit current can be determined by applying Ohm’s Law (I=E/R) to the “Total” column in the table: Back to our equivalent circuit drawing, our total current value of 120.78 milliamps is shown as the only current here: Now we start to work backwards in our progression of circuit re-drawings to the original configuration. The next step is to go to the circuit where R1//R2 and R3//R4 are in series: Since R1//R2 and R3//R4 are in series with each other, the current through those two sets of equivalent resistances must be the same. 
Furthermore, the current through them must be the same as the total current, so we can fill in our table with the appropriate current values, simply copying the current figure from the Total column to the R1//R2 and R3//R4 columns: Now, knowing the current through the equivalent resistors R1//R2 and R3//R4, we can apply Ohm’s Law (E=IR) to the two right vertical columns to find voltage drops across them: Because we know R1//R2 and R3//R4 are parallel resistor equivalents, and we know that voltage drops in parallel circuits are the same, we can transfer the respective voltage drops to the appropriate columns on the table for those individual resistors. In other words, we take another step backwards in our drawing sequence to the original configuration, and complete the table accordingly: Finally, the original section of the table (columns R1 through R4) is complete with enough values to finish. Applying Ohm’s Law to the remaining vertical columns (I=E/R), we can determine the currents through R1, R2, R3, and R4 individually: Placing Voltage and Current Values into Diagrams Having found all voltage and current values for this circuit, we can show those values in the schematic diagram as such: As a final check of our work, we can see if the calculated current values add up as they should to the total. Since R1 and R2 are in parallel, their combined currents should add up to the total of 120.78 mA. Likewise, since R3 and R4 are in parallel, their combined currents should also add up to the total of 120.78 mA. You can check for yourself to verify that these figures do add up as expected. A computer simulation can also be used to verify the accuracy of these figures. The following SPICE analysis will show all resistor voltages and currents (note the current-sensing vi1, vi2, . . . “dummy” voltage sources in series with each resistor in the netlist, necessary for the SPICE computer program to track current through each path). 
These voltage sources will be set to have values of zero volts each so they will not affect the circuit in any way. I’ve annotated SPICE’s output figures to make them more readable, denoting which voltage and current figures belong to which resistors. As you can see, all the figures do agree with our calculated values. Review
• To analyze a series-parallel combination circuit, follow these steps:
• Reduce the original circuit to a single equivalent resistor, re-drawing the circuit in each step of reduction as simple series and simple parallel parts are reduced to single, equivalent resistors.
• Solve for total resistance.
• Solve for total current (I=E/R).
• Determine equivalent resistor voltage drops and branch currents one stage at a time, working backwards to the original circuit configuration again.
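The eight-step procedure can be condensed into a short Python sketch. The component values here (R1=100 Ω, R2=250 Ω, R3=350 Ω, R4=200 Ω, 24 V source) are assumptions, chosen to reproduce the equivalent resistances and total current quoted in this section:

```python
# Series-parallel reduction and back-substitution, as a sketch.
# Component values are assumed — they match the figures quoted in the text
# (71.429 ohms, 127.27 ohms, 198.70 ohms, 120.78 mA).
def parallel(*rs):
    """Equivalent resistance of resistors in parallel (reciprocal formula)."""
    return 1 / sum(1 / r for r in rs)

E = 24.0
R1, R2, R3, R4 = 100.0, 250.0, 350.0, 200.0

# Steps 1-3: reduce to a single equivalent resistance
R12 = parallel(R1, R2)          # ~71.429 ohms
R34 = parallel(R3, R4)          # ~127.27 ohms
R_total = R12 + R34             # ~198.70 ohms (series)

# Step 4: total current
I_total = E / R_total           # ~120.78 mA

# Steps 5-7: work backwards — series sections carry the total current,
# so Ohm's Law gives each parallel section's voltage, then each branch current
E12, E34 = I_total * R12, I_total * R34
I1, I2 = E12 / R1, E12 / R2     # must sum to I_total
I3, I4 = E34 / R3, E34 / R4     # must sum to I_total

# Step 8: power dissipations
P = {"R1": E12 * I1, "R2": E12 * I2, "R3": E34 * I3, "R4": E34 * I4}
```

The final check the text describes falls out for free: I1 + I2 and I3 + I4 each equal the total current.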
Typically, complex circuits are not arranged in nice, neat, clean schematic diagrams for us to follow. They are often drawn in such a way that makes it difficult to follow which components are in series and which are in parallel with each other. The purpose of this section is to show you a method useful for re-drawing circuit schematics in a neat and orderly fashion. Like the stage-reduction strategy for solving series-parallel combination circuits, it is a method easier demonstrated than described. Let’s start with the following (convoluted) circuit diagram. Perhaps this diagram was originally drawn this way by a technician or engineer. Perhaps it was sketched as someone traced the wires and connections of a real circuit. In any case, here it is in all its ugliness: With electric circuits and circuit diagrams, the length and routing of wire connecting components in a circuit matters little. (Actually, in some AC circuits it becomes critical, and very long wire lengths can contribute unwanted resistance to both AC and DC circuits, but in most cases wire length is irrelevant.) What this means for us is that we can lengthen, shrink, and/or bend connecting wires without affecting the operation of our circuit. The strategy I have found easiest to apply is to start by tracing the current from one terminal of the battery around to the other terminal, following the loop of components closest to the battery and ignoring all other wires and components for the time being. While tracing the path of the loop, mark each resistor with the appropriate polarity for voltage drop. In this case, I’ll begin my tracing of this circuit at the negative terminal of the battery and finish at the positive terminal, in the same general direction as the electrons would flow. 
When tracing this direction, I will mark each resistor with the polarity of negative on the entering side and positive on the exiting side, for that is how the actual polarity will be as electrons (negative in charge) enter and exit a resistor: Any components encountered along this short loop are drawn vertically in order: Now, proceed to trace any loops of components connected around components that were just traced. In this case, there’s a loop around R1 formed by R2, and another loop around R3 formed by R4: Tracing those loops, I draw R2 and R4 in parallel with R1 and R3 (respectively) on the vertical diagram. Noting the polarity of voltage drops across R3 and R1, I mark R4 and R2 likewise: Now we have a circuit that is very easily understood and analyzed. In this case, it is identical to the four-resistor series-parallel configuration we examined earlier in the chapter. Let’s look at another example, even uglier than the one before: The first loop I’ll trace is from the negative (-) side of the battery, through R6, through R1, and back to the positive (+) end of the battery: Re-drawing vertically and keeping track of voltage drop polarities along the way, our equivalent circuit starts out looking like this: Next, we can proceed to follow the next loop around one of the traced resistors (R6), in this case, the loop formed by R5 and R7. As before, we start at the negative end of R6 and proceed to the positive end of R6, marking voltage drop polarities across R7 and R5 as we go: Now we add the R5—R7 loop to the vertical drawing. Notice how the voltage drop polarities across R7 and R5 correspond with that of R6, and how this is the same as what we found tracing R7 and R5 in the original circuit: We repeat the process again, identifying and tracing another loop around an already-traced resistor. 
In this case, the R3—R4 loop around R5 looks like a good loop to trace next: Adding the R3—R4 loop to the vertical drawing, marking the correct polarities as well: With only one remaining resistor left to trace, the next step is obvious: trace the loop formed by R2 around R3: Adding R2 to the vertical drawing, and we’re finished! The result is a diagram that’s very easy to understand compared to the original: This simplified layout greatly eases the task of determining where to start and how to proceed in reducing the circuit down to a single equivalent (total) resistance. Now that the circuit has been re-drawn, all we have to do is start from the right-hand side and work our way left, reducing simple-series and simple-parallel resistor combinations one group at a time until we’re done. In this particular case, we would start with the simple parallel combination of R2 and R3, reducing it to a single resistance. Then, we would take that equivalent resistance (R2//R3) and the one in series with it (R4), reducing them to another equivalent resistance (R2//R3—R4). Next, we would proceed to calculate the parallel equivalent of that resistance (R2//R3—R4) with R5, then in series with R7, then in parallel with R6, then in series with R1 to give us a grand total resistance for the circuit as a whole. From there we could calculate total current from total voltage and total resistance (I=E/R), then “expand” the circuit back into its original form one stage at a time, distributing the appropriate values of voltage and current to the resistances as we go. Review
• Wires in diagrams and in real circuits can be lengthened, shortened, and/or moved without affecting circuit operation.
• To simplify a convoluted circuit schematic, follow these steps:
• Trace current from one side of the battery to the other, following any single path (“loop”) to the battery. 
Sometimes it works better to start with the loop containing the most components, but regardless of the path taken the result will be accurate. Mark polarity of voltage drops across each resistor as you trace the loop. Draw those components you encounter along this loop in a vertical schematic. • Mark traced components in the original diagram and trace remaining loops of components in the circuit. Use polarity marks across traced components as guides for what connects where. Document new components in loops on the vertical re-draw schematic as well. • Repeat last step as often as needed until all components in original diagram have been traced.
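The reduction order described above for the second example circuit can be written out as a sketch. The resistor values below are hypothetical placeholders; only the order of series and parallel reductions comes from the text:

```python
# Reduction order for the re-drawn circuit, right to left:
# ((((R2//R3 + R4) // R5) + R7) // R6) + R1
# Resistor values are hypothetical — chosen only to make the sketch runnable.
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

R1, R2, R3, R4, R5, R6, R7 = 250.0, 100.0, 300.0, 150.0, 200.0, 500.0, 120.0

step1 = parallel(R2, R3)        # R2//R3
step2 = step1 + R4              # R2//R3—R4
step3 = parallel(step2, R5)     # (R2//R3—R4)//R5
step4 = step3 + R7              # ...—R7
step5 = parallel(step4, R6)     # ...//R6
R_total = step5 + R1            # grand total, in series with R1
```

From R_total one would then compute total current (I=E/R) and expand back out stage by stage, exactly as in the previous section.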
“I consider that I understand an equation when I can predict the properties of its solutions, without actually solving it.” —P.A.M. Dirac, physicist There is a lot of truth to that quote from Dirac. With a little modification, I can extend his wisdom to electric circuits by saying, “I consider that I understand a circuit when I can predict the approximate effects of various changes made to it without actually performing any calculations.” At the end of the series and parallel circuits chapter, we briefly considered how circuits could be analyzed in a qualitative rather than quantitative manner. Building this skill is an important step towards becoming a proficient troubleshooter of electric circuits. Once you have a thorough understanding of how any particular failure will affect a circuit (i.e. you don’t have to perform any arithmetic to predict the results), it will be much easier to work the other way around: pinpointing the source of trouble by assessing how a circuit is behaving. Also shown at the end of the series and parallel circuits chapter was how the table method works just as well for aiding failure analysis as it does for the analysis of healthy circuits. We may take this technique one step further and adapt it for total qualitative analysis. By “qualitative” I mean working with symbols representing “increase,” “decrease,” and “same” instead of precise numerical figures. We can still use the principles of series and parallel circuits, and the concepts of Ohm’s Law; we’ll just use symbolic qualities instead of numerical quantities. By doing this, we can gain more of an intuitive “feel” for how circuits work rather than leaning on abstract equations, attaining Dirac’s definition of “understanding.” Enough talk. Let’s try this technique on a real circuit example and see how it works: This is the first “convoluted” circuit we straightened out for analysis in the last section. 
Since you already know how this particular circuit reduces to series and parallel sections, I’ll skip the process and go straight to the final form: R3 and R4 are in parallel with each other; so are R1 and R2. The parallel equivalents of R3//R4 and R1//R2 are in series with each other. Expressed in symbolic form, the total resistance for this circuit is as follows: RTotal = (R1//R2)—(R3//R4) First, we need to formulate a table with all the necessary rows and columns for this circuit: Next, we need a failure scenario. Let’s suppose that resistor R2 were to fail shorted. We will assume that all other components maintain their original values. Because we’ll be analyzing this circuit qualitatively rather than quantitatively, we won’t be inserting any real numbers into the table. For any quantity unchanged after the component failure, we’ll use the word “same” to represent “no change from before.” For any quantity that has changed as a result of the failure, we’ll use a down arrow for “decrease” and an up arrow for “increase.” As usual, we start by filling in the spaces of the table for individual resistances and total voltage, our “given” values: The only “given” value different from the normal state of the circuit is R2, which we said was failed shorted (abnormally low resistance). All other initial values are the same as they were before, as represented by the “same” entries. All we have to do now is work through the familiar Ohm’s Law and series-parallel principles to determine what will happen to all the other circuit values. First, we need to determine what happens to the resistances of parallel subsections R1//R2 and R3//R4. If neither R3 nor R4 has changed in resistance value, then neither will their parallel combination. However, since the resistance of R2 has decreased while R1 has stayed the same, their parallel combination must decrease in resistance as well: Now, we need to figure out what happens to the total resistance. 
This part is easy: when we’re dealing with only one component change in the circuit, the change in total resistance will be in the same direction as the change of the failed component. This is not to say that the magnitude of change between individual component and total circuit will be the same, merely the direction of change. In other words, if any single resistor decreases in value, then the total circuit resistance must also decrease, and vice versa. In this case, since R2 is the only failed component, and its resistance has decreased, the total resistance must decrease: Now we can apply Ohm’s Law (qualitatively) to the Total column in the table. Given the fact that total voltage has remained the same and total resistance has decreased, we can conclude that total current must increase (I=E/R). In case you’re not familiar with the qualitative assessment of an equation, it works like this. First, we write the equation as solved for the unknown quantity. In this case, we’re trying to solve for current, given voltage and resistance: Now that our equation is in the proper form, we assess what change (if any) will be experienced by “I,” given the change(s) to “E” and “R”: If the denominator of a fraction decreases in value while the numerator stays the same, then the overall value of the fraction must increase: Therefore, Ohm’s Law (I=E/R) tells us that the current (I) will increase. We’ll mark this conclusion in our table with an “up” arrow: With all resistance places filled in the table and all quantities determined in the Total column, we can proceed to determine the other voltages and currents. Knowing that the total resistance in this table was the result of R1//R2 and R3//R4 in series, we know that the value of total current will be the same as that in R1//R2 and R3//R4 (because series components share the same current). 
Therefore, if total current increased, then current through R1//R2 and R3//R4 must also have increased with the failure of R2: Fundamentally, what we’re doing here with a qualitative usage of Ohm’s Law and the rules of series and parallel circuits is no different from what we’ve done before with numerical figures. In fact, it’s a lot easier because you don’t have to worry about making an arithmetic or calculator keystroke error in a calculation. Instead, you’re just focusing on the principles behind the equations. From our table above, we can see that Ohm’s Law should be applicable to the R1//R2 and R3//R4 columns. For R3//R4, we figure what happens to the voltage, given an increase in current and no change in resistance. Intuitively, we can see that this must result in an increase in voltage across the parallel combination of R3//R4: But how do we apply the same Ohm’s Law formula (E=IR) to the R1//R2 column, where we have resistance decreasing and current increasing? It’s easy to determine if only one variable is changing, as it was with R3//R4, but with two variables moving around and no definite numbers to work with, Ohm’s Law isn’t going to be much help. However, there is another rule we can apply horizontally to determine what happens to the voltage across R1//R2: the rule for voltage in series circuits. If the voltages across R1//R2 and R3//R4 add up to equal the total (battery) voltage and we know that the R3//R4 voltage has increased while total voltage has stayed the same, then the voltage across R1//R2 must have decreased with the change of R2’s resistance value: Now we’re ready to proceed to some new columns in the table. Knowing that R3 and R4 comprise the parallel subsection R3//R4, and knowing that voltage is shared equally between parallel components, the increase in voltage seen across the parallel combination R3//R4 must also be seen across R3 and R4 individually: The same goes for R1 and R2. 
The voltage decrease seen across the parallel combination of R1 and R2 will be seen across R1 and R2 individually: Applying Ohm’s Law vertically to those columns with unchanged (“same”) resistance values, we can tell what the current will do through those components. Increased voltage across an unchanged resistance leads to increased current. Conversely, decreased voltage across an unchanged resistance leads to decreased current: Once again we find ourselves in a position where Ohm’s Law can’t help us: for R2, both voltage and resistance have decreased, but without knowing how much each one has changed, we can’t use the I=E/R formula to qualitatively determine the resulting change in current. However, we can still apply the rules of series and parallel circuits horizontally. We know that the current through the R1//R2 parallel combination has increased, and we also know that the current through R1 has decreased. One of the rules of parallel circuits is that total current is equal to the sum of the individual branch currents. In this case, the current through R1//R2 is equal to the current through R1 added to the current through R2. If current through R1//R2 has increased while current through R1 has decreased, current through R2 must have increased: And with that, our table of qualitative values stands completed. This particular exercise may look laborious due to all the detailed commentary, but the actual process can be performed very quickly with some practice. An important thing to realize here is that the general procedure is little different from quantitative analysis: start with the known values, then proceed to determining total resistance, then total current, then transfer figures of voltage and current as allowed by the rules of series and parallel circuits to the appropriate columns. 
A few general rules can be memorized to assist and/or to check your progress when proceeding with such an analysis: • For any single component failure (open or shorted), the total resistance will always change in the same direction (either increase or decrease) as the resistance change of the failed component. • When a component fails shorted, its resistance always decreases. Also, the current through it will increase, and the voltage across it may drop. I say “may” because in some cases it will remain the same (case in point: a simple parallel circuit with an ideal power source). • When a component fails open, its resistance always increases. The current through that component will decrease to zero, because it is an incomplete electrical path (no continuity). This may result in an increase of voltage across it. The same exception stated above applies here as well: in a simple parallel circuit with an ideal voltage source, the voltage across an open-failed component will remain unchanged.
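The table-filling procedure described above can be spot-checked numerically. The following Python sketch does exactly that for a circuit of the same form (R1//R2 in series with R3//R4 across a battery); the specific resistor and battery values here are hypothetical, chosen only to demonstrate the predicted directions of change when R2 fails partially shorted:

```python
# A numerical spot-check of the qualitative analysis: two parallel pairs
# (R1//R2 and R3//R4) in series across a battery. All component values
# are hypothetical, chosen only to illustrate the directions of change.

def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

def analyze(r1, r2, r3, r4, e_total):
    """Return (total current, voltage across R1//R2, voltage across R3//R4)."""
    r12 = parallel(r1, r2)
    r34 = parallel(r3, r4)
    i_total = e_total / (r12 + r34)
    return i_total, i_total * r12, i_total * r34

before = analyze(100, 300, 200, 200, 24)   # healthy circuit
after  = analyze(100,  50, 200, 200, 24)   # R2's resistance has decreased

assert after[0] > before[0]   # total current increased
assert after[1] < before[1]   # voltage across R1//R2 decreased
assert after[2] > before[2]   # voltage across R3//R4 increased
```

Whatever values are substituted, the directions of change agree with the qualitative table, which is the point of the exercise.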
Once again, when building battery/resistor circuits, the student or hobbyist is faced with several different modes of construction. Perhaps the most popular is the solderless breadboard: a platform for constructing temporary circuits by plugging components and wires into a grid of interconnected points. A breadboard appears to be nothing but a plastic frame with hundreds of small holes in it. Underneath each hole, though, is a spring clip which connects to other spring clips beneath other holes. The connection pattern between holes is simple and uniform: Suppose we wanted to construct the following series-parallel combination circuit on a breadboard: The recommended way to do so on a breadboard would be to arrange the resistors in approximately the same pattern as seen in the schematic, for ease of relation to the schematic. If 24 volts is required and we only have 6-volt batteries available, four may be connected in series to achieve the same effect: This is by no means the only way to connect these four resistors together to form the circuit shown in the schematic. Consider this alternative layout: If greater permanence is desired without resorting to soldering or wire-wrapping, one could choose to construct this circuit on a terminal strip (also called a barrier strip, or terminal block). In this method, components and wires are secured by mechanical tension underneath screws or heavy clips attached to small metal bars. The metal bars, in turn, are mounted on a nonconducting body to keep them electrically isolated from each other. Building a circuit with components secured to a terminal strip isn’t as easy as plugging components into a breadboard, principally because the components cannot be physically arranged to resemble the schematic layout. Instead, the builder must understand how to “bend” the schematic’s representation into the real-world layout of the strip. 
Consider one example of how the same four-resistor circuit could be built on a terminal strip: Another terminal strip layout, simpler to understand and relate to the schematic, involves anchoring parallel resistors (R1//R2 and R3//R4) to the same two terminal points on the strip like this: Building more complex circuits on a terminal strip involves the same spatial-reasoning skills, but of course requires greater care and planning. Take for instance this complex circuit, represented in schematic form: The terminal strip used in the prior example barely has enough terminals to mount all seven resistors required for this circuit! It will be a challenge to determine all the necessary wire connections between resistors, but with patience it can be done. First, begin by installing and labeling all resistors on the strip. The original schematic diagram will be shown next to the terminal strip circuit for reference: Next, begin connecting components together wire by wire as shown in the schematic. Over-draw connecting lines in the schematic to indicate completion in the real circuit. Watch this sequence of illustrations as each individual wire is identified in the schematic, then added to the real circuit: Although there are minor variations possible with this terminal strip circuit, the choice of connections shown in this example sequence is both electrically accurate (electrically identical to the schematic diagram) and carries the additional benefit of not burdening any one screw terminal on the strip with more than two wire ends, a good practice in any terminal strip circuit. An example of a “variant” wire connection might be the very last wire added (step 11), which I placed between the left terminal of R2 and the left terminal of R3. This last wire completed the parallel connection between R2 and R3 in the circuit. 
However, I could have placed this wire instead between the left terminal of R2 and the right terminal of R1, since the right terminal of R1 is already connected to the left terminal of R3 (having been placed there in step 9) and so is electrically common with that one point. Doing this, though, would have resulted in three wires secured to the right terminal of R1 instead of two, which is a faux pas in terminal strip etiquette. Would the circuit have worked this way? Certainly! It’s just that more than two wires secured at a single terminal makes for a “messy” connection: one that is aesthetically unpleasing and may place undue stress on the screw terminal. Another variation would be to reverse the terminal connections for resistor R7. As shown in the last diagram, the voltage polarity across R7 is negative on the left and positive on the right (- , +), whereas all the other resistor polarities are positive on the left and negative on the right (+ , -): While this poses no electrical problem, it might cause confusion for anyone measuring resistor voltage drops with a voltmeter, especially an analog voltmeter which will “peg” downscale when subjected to a voltage of the wrong polarity. For the sake of consistency, it might be wise to arrange all wire connections so that all resistor voltage drop polarities are the same, like this: Though electrons do not care about such consistency in component layout, people do. This illustrates an important aspect of any engineering endeavor: the human factor. Whenever a design may be modified for easier comprehension and/or easier maintenance—with no sacrifice of functional performance—it should be. Review • Circuits built on terminal strips can be difficult to lay out, but when built they are robust enough to be considered permanent, yet easy to modify. • It is bad practice to secure more than two wire ends and/or component leads under a single terminal screw or clip on a terminal strip. 
Try to arrange connecting wires so as to avoid this condition. • Whenever possible, build your circuits with clarity and ease of understanding in mind. Even though component and wiring layout is usually of little consequence in DC circuit function, it matters significantly for the sake of the person who has to modify or troubleshoot it later.
A meter is any device built to accurately detect and display an electrical quantity in a form readable by a human being. Usually this “readable form” is visual: motion of a pointer on a scale, a series of lights arranged to form a “bargraph,” or some sort of display composed of numerical figures. In the analysis and testing of circuits, there are meters designed to accurately measure the basic quantities of voltage, current, and resistance. There are many other types of meters as well, but this chapter primarily covers the design and operation of the basic three. Most modern meters are “digital” in design, meaning that their readable display is in the form of numerical digits. Older designs of meters are mechanical in nature, using some kind of pointer device to show quantity of measurement. In either case, the principles applied in adapting a display unit to the measurement of (relatively) large quantities of voltage, current, or resistance are the same. The display mechanism of a meter is often referred to as a movement, borrowing from its mechanical nature to move a pointer along a scale so that a measured value may be read. Though modern digital meters have no moving parts, the term “movement” may be applied to the same basic device performing the display function. The design of digital “movements” is beyond the scope of this chapter, but mechanical meter movement designs are very understandable. Most mechanical movements are based on the principle of electromagnetism: that electric current through a conductor produces a magnetic field perpendicular to the axis of electron flow. The greater the electric current, the stronger the magnetic field produced. If the magnetic field formed by the conductor is allowed to interact with another magnetic field, a physical force will be generated between the two sources of fields. 
If one of these sources is free to move with respect to the other, it will do so as current is conducted through the wire, the motion (usually against the resistance of a spring) being proportional to strength of current. The first meter movements built were known as galvanometers, and were usually designed with maximum sensitivity in mind. A very simple galvanometer may be made from a magnetized needle (such as the needle from a magnetic compass) suspended from a string, and positioned within a coil of wire. Current through the wire coil will produce a magnetic field which will deflect the needle from pointing in the direction of earth’s magnetic field. An antique string galvanometer is shown in the following photograph: Such instruments were useful in their time, but have little place in the modern world except as proof-of-concept and elementary experimental devices. They are highly susceptible to motion of any kind, and to any disturbances in the natural magnetic field of the earth. Now, the term “galvanometer” usually refers to any design of electromagnetic meter movement built for exceptional sensitivity, and not necessarily a crude device such as that shown in the photograph. Practical electromagnetic meter movements can be made now where a pivoting wire coil is suspended in a strong magnetic field, shielded from the majority of outside influences. Such an instrument design is generally known as a permanent-magnet, moving coil, or PMMC movement: In the picture above, the meter movement “needle” is shown pointing somewhere around 35 percent of full-scale, zero being fully to the left of the arc and full-scale being completely to the right of the arc. An increase in measured current will drive the needle to point further to the right and a decrease will cause the needle to drop back down toward its resting point on the left. The arc on the meter display is labeled with numbers to indicate the value of the quantity being measured, whatever that quantity is. 
In other words, if it takes 50 microamps of current to drive the needle fully to the right (making this a “50 µA full-scale movement”), the scale would have 0 µA written at the very left end and 50 µA at the very right, 25 µA being marked in the middle of the scale. In all likelihood, the scale would be divided into much smaller graduating marks, probably every 5 or 1 µA, to allow whoever is viewing the movement to infer a more precise reading from the needle’s position. The meter movement will have a pair of metal connection terminals on the back for current to enter and exit. Most meter movements are polarity-sensitive, one direction of current driving the needle to the right and the other driving it to the left. Some meter movements have a needle that is spring-centered in the middle of the scale sweep instead of to the left, thus enabling measurements of either polarity: Common polarity-sensitive movements include the D’Arsonval and Weston designs, both PMMC-type instruments. Current in one direction through the wire will produce a clockwise torque on the needle mechanism, while current the other direction will produce a counter-clockwise torque. Some meter movements are polarity-insensitive, relying on the attraction of an unmagnetized, movable iron vane toward a stationary, current-carrying wire to deflect the needle. Such meters are ideally suited for the measurement of alternating current (AC). A polarity-sensitive movement would just vibrate back and forth uselessly if connected to a source of AC. While most mechanical meter movements are based on electromagnetism (electron flow through a conductor creating a perpendicular magnetic field), a few are based on electrostatics: that is, the attractive or repulsive force generated by electric charges across space. This is the same phenomenon exhibited by certain materials (such as wax and wool) when rubbed together. 
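Inferring a reading from the needle position is a simple proportion, which can be sketched in a line of Python. The 50 µA full-scale rating is the example from the text; the 35 percent deflection figure is just an illustration:

```python
# Reading a linear meter scale: the indicated value is the needle's
# fraction of full sweep multiplied by the full-scale rating.
# (50 uA full-scale is the text's example; 35% deflection is hypothetical.)

def scale_reading(fraction_of_sweep, full_scale):
    return fraction_of_sweep * full_scale

current = scale_reading(0.35, 50e-6)   # about 17.5 microamps
assert abs(current - 17.5e-6) < 1e-12
```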
If a voltage is applied between two conductive surfaces across an air gap, there will be a physical force attracting the two surfaces together capable of moving some kind of indicating mechanism. That physical force is proportional to the square of the voltage applied between the plates, and inversely proportional to the square of the distance between the plates. The force is also irrespective of polarity, making this a polarity-insensitive type of meter movement: Unfortunately, the force generated by the electrostatic attraction is very small for common voltages. In fact, it is so small that such meter movement designs are impractical for use in general test instruments. Typically, electrostatic meter movements are used for measuring very high voltages (many thousands of volts). One great advantage of the electrostatic meter movement, however, is the fact that it has extremely high resistance, whereas electromagnetic movements (which depend on the flow of electrons through wire to generate a magnetic field) are much lower in resistance. As we will see in greater detail to come, greater resistance (resulting in less current drawn from the circuit under test) makes for a better voltmeter. A much more common application of electrostatic voltage measurement is seen in a device known as a Cathode Ray Tube, or CRT. These are special glass tubes, very similar to television viewscreen tubes. In the cathode ray tube, a beam of electrons traveling in a vacuum is deflected from its course by voltage between pairs of metal plates on either side of the beam. Because electrons are negatively charged, they tend to be repelled by the negative plate and attracted to the positive plate. 
A reversal of voltage polarity across the two plates will result in a deflection of the electron beam in the opposite direction, making this type of meter “movement” polarity-sensitive: The electrons, having much less mass than the metal plates, are moved by this electrostatic force very quickly and readily. Their deflected path can be traced as the electrons impinge on the glass end of the tube, where they strike a phosphor coating, emitting a glow of light seen outside of the tube. The greater the voltage between the deflection plates, the further the electron beam will be “bent” from its straight path, and the further the glowing spot will be seen from center on the end of the tube. A photograph of a CRT is shown here: In a real CRT, as shown in the above photograph, there are two pairs of deflection plates rather than just one. In order to be able to sweep the electron beam around the whole area of the screen rather than just in a straight line, the beam must be deflected in more than one dimension. Although these tubes are able to accurately register small voltages, they are bulky and require electrical power to operate (unlike electromagnetic meter movements, which are more compact and actuated by the power of the measured signal current going through them). They are also much more fragile than other types of electrical metering devices. Usually, cathode ray tubes are used in conjunction with precise external circuits to form a larger piece of test equipment known as an oscilloscope, which has the ability to display a graph of voltage over time, a tremendously useful tool for certain types of circuits where voltage and/or current levels are dynamically changing. Whatever the type of meter or size of meter movement, there will be a rated value of voltage or current necessary to give full-scale indication. 
In electromagnetic movements, this will be the “full-scale deflection current” necessary to rotate the needle so that it points to the exact end of the indicating scale. In electrostatic movements, the full-scale rating will be expressed as the value of voltage resulting in the maximum deflection of the needle actuated by the plates, or the value of voltage in a cathode-ray tube which deflects the electron beam to the edge of the indicating screen. In digital “movements,” it is the amount of voltage resulting in a “full-count” indication on the numerical display: when the digits cannot display a larger quantity. The task of the meter designer is to take a given meter movement and design the necessary external circuitry for full-scale indication at some specified amount of voltage or current. Most meter movements (electrostatic movements excepted) are quite sensitive, giving full-scale indication at only a small fraction of a volt or an amp. This is impractical for most tasks of voltage and current measurement. What the technician often requires is a meter capable of measuring high voltages and currents. By making the sensitive meter movement part of a voltage or current divider circuit, the movement’s useful measurement range may be extended to measure far greater levels than what could be indicated by the movement alone. Precision resistors are used to create the divider circuits necessary to divide voltage or current appropriately. One of the lessons you will learn in this chapter is how to design these divider circuits. Review • A “movement” is the display mechanism of a meter. • Electromagnetic movements work on the principle of a magnetic field being generated by electric current through a wire. Examples of electromagnetic meter movements include the D’Arsonval, Weston, and iron-vane designs. • Electrostatic movements work on the principle of physical force generated by an electric field between two plates. 
• Cathode Ray Tubes (CRT’s) use an electrostatic field to bend the path of an electron beam, providing indication of the beam’s position by light created when the beam strikes the end of the glass tube.
As was stated earlier, most meter movements are sensitive devices. Some D’Arsonval movements have full-scale deflection current ratings as little as 50 µA, with an (internal) wire resistance of less than 1000 Ω. This makes for a voltmeter with a full-scale rating of only 50 millivolts (50 µA × 1000 Ω)! In order to build voltmeters with practical (higher voltage) scales from such sensitive movements, we need to find some way to reduce the measured quantity of voltage down to a level the movement can handle. Let’s start our example problems with a D’Arsonval meter movement having a full-scale deflection rating of 1 mA and a coil resistance of 500 Ω: Using Ohm’s Law (E=IR), we can determine how much voltage will drive this meter movement directly to full scale:

E = IR
E = (1 mA)(500 Ω)
E = 0.5 volts

If all we wanted was a meter that could measure 1/2 of a volt, the bare meter movement we have here would suffice. But to measure greater levels of voltage, something more is needed. To get an effective voltmeter range in excess of 1/2 volt, we’ll need to design a circuit allowing only a precise proportion of measured voltage to drop across the meter movement. This will extend the meter movement’s range to higher voltages. Correspondingly, we will need to re-label the scale on the meter face to indicate its new measurement range with this proportioning circuit connected. But how do we create the necessary proportioning circuit? Well, if our intention is to allow this meter movement to measure a greater voltage than it does now, what we need is a voltage divider circuit to proportion the total measured voltage into a lesser fraction across the meter movement’s connection points. 
Knowing that voltage divider circuits are built from series resistances, we’ll connect a resistor in series with the meter movement (using the movement’s own internal resistance as the second resistance in the divider): The series resistor is called a “multiplier” resistor because it multiplies the working range of the meter movement as it proportionately divides the measured voltage across it. Determining the required multiplier resistance value is an easy task if you’re familiar with series circuit analysis. For example, let’s determine the necessary multiplier value to make this 1 mA, 500 Ω movement read exactly full-scale at an applied voltage of 10 volts. To do this, we first need to set up an E/I/R table for the two series components: Knowing that the movement will be at full-scale with 1 mA of current going through it, and that we want this to happen at an applied (total series circuit) voltage of 10 volts, we can fill in the table as such: There are a couple of ways to determine the resistance value of the multiplier. One way is to determine total circuit resistance using Ohm’s Law in the “total” column (R=E/I), then subtract the 500 Ω of the movement to arrive at the value for the multiplier: Another way to figure the same value of resistance would be to determine voltage drop across the movement at full-scale deflection (E=IR), then subtract that voltage drop from the total to arrive at the voltage across the multiplier resistor. Finally, Ohm’s Law could be used again to determine resistance (R=E/I) for the multiplier: Either way provides the same answer (9.5 kΩ), and one method could be used as verification for the other, to check accuracy of work. With exactly 10 volts applied between the meter test leads (from some battery or precision power supply), there will be exactly 1 mA of current through the meter movement, as restricted by the “multiplier” resistor and the movement’s own internal resistance. 
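The two calculation methods just described are easy to verify against each other. This Python sketch works the example both ways, using the movement ratings given in the text (1 mA full-scale, 500 Ω coil) and the 10 volt target range:

```python
# Sizing the "multiplier" resistor two ways, per the text's example:
# a 1 mA, 500 ohm movement extended to a 10 volt full-scale range.

I_FS = 1e-3      # full-scale deflection current (amps)
R_COIL = 500.0   # movement's internal coil resistance (ohms)
V_RANGE = 10.0   # desired full-scale voltage

# Method 1: total circuit resistance from Ohm's Law, minus the coil
r_total = V_RANGE / I_FS           # 10 kilohms total
r_mult_1 = r_total - R_COIL        # 9.5 kilohms

# Method 2: subtract the movement's voltage drop, then apply Ohm's Law
v_movement = I_FS * R_COIL                 # 0.5 volts across the coil
r_mult_2 = (V_RANGE - v_movement) / I_FS   # 9.5 kilohms

assert abs(r_mult_1 - 9500.0) < 1e-6
assert abs(r_mult_2 - 9500.0) < 1e-6
```

As the text notes, agreement between the two methods serves as a check on the arithmetic.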
Exactly 1/2 volt will be dropped across the resistance of the movement’s wire coil, and the needle will be pointing precisely at full-scale. Having re-labeled the scale to read from 0 to 10 V (instead of 0 to 1 mA), anyone viewing the scale will interpret its indication as ten volts. Please take note that the meter user does not have to be aware at all that the movement itself is actually measuring just a fraction of that ten volts from the external source. All that matters to the user is that the circuit as a whole functions to accurately display the total, applied voltage. This is how practical electrical meters are designed and used: a sensitive meter movement is built to operate with as little voltage and current as possible for maximum sensitivity, then it is “fooled” by some sort of divider circuit built of precision resistors so that it indicates full-scale when a much larger voltage or current is impressed on the circuit as a whole. We have examined the design of a simple voltmeter here. Ammeters follow the same general rule, except that parallel-connected “shunt” resistors are used to create a current divider circuit as opposed to the series-connected voltage divider “multiplier” resistors used for voltmeter designs. Generally, it is useful to have multiple ranges established for an electromechanical meter such as this, allowing it to read a broad range of voltages with a single movement mechanism. This is accomplished through the use of a multi-pole switch and several multiplier resistors, each one sized for a particular voltage range: The five-position switch makes contact with only one resistor at a time. In the bottom (full clockwise) position, it makes contact with no resistor at all, providing an “off” setting. Each resistor is sized to provide a particular full-scale range for the voltmeter, all based on the particular rating of the meter movement (1 mA, 500 Ω). The end result is a voltmeter with four different full-scale ranges of measurement. 
Of course, in order to make this work sensibly, the meter movement’s scale must be equipped with labels appropriate for each range. With such a meter design, each resistor value is determined by the same technique, using a known total voltage, movement full-scale deflection rating, and movement resistance. For a voltmeter with ranges of 1 volt, 10 volts, 100 volts, and 1000 volts, the multiplier resistances would be as follows: Note the multiplier resistor values used for these ranges, and how odd they are. It is highly unlikely that a 999.5 kΩ precision resistor will ever be found in a parts bin, so voltmeter designers often opt for a variation of the above design which uses more common resistor values: With each successively higher voltage range, more multiplier resistors are pressed into service by the selector switch, making their series resistances add for the necessary total. For example, with the range selector switch set to the 1000 volt position, we need a total multiplier resistance value of 999.5 kΩ. With this meter design, that’s exactly what we’ll get:

RTotal = R4 + R3 + R2 + R1
RTotal = 900 kΩ + 90 kΩ + 9 kΩ + 500 Ω
RTotal = 999.5 kΩ

The advantage, of course, is that the individual multiplier resistor values are more common (900k, 90k, 9k) than some of the odd values in the first design (999.5k, 99.5k, 9.5k). From the perspective of the meter user, however, there will be no discernible difference in function. Review • Extended voltmeter ranges are created for sensitive meter movements by adding series “multiplier” resistors to the movement circuit, providing a precise voltage division ratio.
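The multi-range calculations follow directly from the single-range method. This Python sketch computes both variants for the text's example movement (1 mA, 500 Ω): one multiplier per range, and the series "chain" arrangement with rounder values:

```python
# Sizing multipliers for a multi-range voltmeter, both ways: one resistor
# per switch position (odd values) and the series chain (rounder values).
# Movement ratings are the text's example: 1 mA full-scale, 500 ohm coil.

I_FS, R_COIL = 1e-3, 500.0
ranges = [1, 10, 100, 1000]   # volts full-scale

# One multiplier per switch position
individual = [v / I_FS - R_COIL for v in ranges]
# 500, 9.5k, 99.5k, 999.5k ohms

# Chained multipliers: each range adds only the extra series resistance
chain, prev = [], R_COIL
for v in ranges:
    total = v / I_FS            # total resistance this range requires
    chain.append(total - prev)  # additional resistor in the chain
    prev = total
# 500, 9k, 90k, 900k ohms -- summing to the same 999.5k on the top range

assert abs(sum(chain) - individual[-1]) < 1e-6
```

Either parts list yields identical lead-to-lead resistance on every range, which is why the meter user sees no functional difference.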
Every meter impacts the circuit it is measuring to some extent, just as any tire-pressure gauge changes the measured tire pressure slightly as some air is let out to operate the gauge. While some impact is inevitable, it can be minimized through good meter design.

Voltage Divider Circuit

Since voltmeters are always connected in parallel with the component or components under test, any current through the voltmeter will contribute to the overall current in the tested circuit, potentially affecting the voltage being measured. A perfect voltmeter has infinite resistance, so that it draws no current from the circuit under test. However, perfect voltmeters only exist in the pages of textbooks, not in real life! Take the following voltage divider circuit as an extreme example of how a realistic voltmeter might impact the circuit it’s measuring: With no voltmeter connected to the circuit, there should be exactly 12 volts across each 250 MΩ resistor in the series circuit, the two equal-value resistors dividing the total voltage (24 volts) exactly in half. However, if the voltmeter in question has a lead-to-lead resistance of 10 MΩ (a common amount for a modern digital voltmeter), its resistance will create a parallel subcircuit with the lower resistor of the divider when connected: This effectively reduces the lower resistance from 250 MΩ to 9.615 MΩ (250 MΩ and 10 MΩ in parallel), drastically altering voltage drops in the circuit. The lower resistor will now have far less voltage across it than before, and the upper resistor far more.

Measured Voltage Divider

A voltage divider with resistance values of 250 MΩ and 9.615 MΩ will divide 24 volts into portions of 23.1111 volts and 0.8889 volts, respectively. Since the voltmeter is part of that 9.615 MΩ resistance, that is what it will indicate: 0.8889 volts. Now, the voltmeter can only indicate the voltage it’s connected across. 
It has no way of “knowing” there was a potential of 12 volts dropped across the lower 250 MΩ resistor before it was connected across it. The very act of connecting the voltmeter to the circuit makes it part of the circuit, and the voltmeter’s own resistance alters the resistance ratio of the voltage divider circuit, consequently affecting the voltage being measured. Imagine using a tire pressure gauge that took so great a volume of air to operate that it would deflate any tire it was connected to. The amount of air consumed by the pressure gauge in the act of measurement is analogous to the current taken by the voltmeter movement to move the needle. The less air a pressure gauge requires to operate, the less it will deflate the tire under test. The less current drawn by a voltmeter to actuate the needle, the less it will burden the circuit under test. This effect is called loading, and it is present to some degree in every instance of voltmeter usage. The scenario shown here is worst-case, with a voltmeter resistance substantially lower than the resistances of the divider resistors. But there always will be some degree of loading, causing the meter to indicate less than the true voltage with no meter connected. Obviously, the higher the voltmeter resistance, the less loading of the circuit under test, and that is why an ideal voltmeter has infinite internal resistance. Voltmeters with electromechanical movements are typically given ratings in “ohms per volt” of range to designate the amount of circuit impact created by the current draw of the movement. Because such meters rely on different values of multiplier resistors to give different measurement ranges, their lead-to-lead resistances will change depending on what range they’re set to. 
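The loading figures quoted in this example are straightforward to reproduce. Here is a short Python sketch using the circuit values from the text (24 V source, two 250 MΩ divider resistors, 10 MΩ voltmeter):

```python
# Reproducing the voltmeter loading example: 24 V source, two 250 megohm
# divider resistors, and a voltmeter with 10 megohm lead-to-lead
# resistance connected across the lower resistor.

def parallel(a, b):
    return a * b / (a + b)

E = 24.0
R_UPPER = R_LOWER = 250e6
R_METER = 10e6

unloaded = E * R_LOWER / (R_UPPER + R_LOWER)      # 12 V before connection
r_loaded = parallel(R_LOWER, R_METER)             # about 9.615 megohms
indicated = E * r_loaded / (R_UPPER + r_loaded)   # about 0.8889 V

assert abs(unloaded - 12.0) < 1e-9
assert abs(indicated - 0.8889) < 1e-3
```

The calculation confirms the text's worst-case scenario: the meter indicates under a volt where 12 volts existed before it was connected.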
Digital voltmeters, on the other hand, often exhibit a constant resistance across their test leads regardless of range setting (but not always!), and as such are usually rated simply in ohms of input resistance, rather than “ohms per volt” sensitivity. What “ohms per volt” means is how many ohms of lead-to-lead resistance for every volt of range setting on the selector switch. Let’s take the voltmeter from the last section as an example: On the 1000 volt scale, the total resistance is 1 MΩ (999.5 kΩ + 500 Ω), giving 1,000,000 Ω per 1000 volts of range, or 1000 ohms per volt (1 kΩ/V). This ohms-per-volt “sensitivity” rating remains constant for any range of this meter: The astute observer will notice that the ohms-per-volt rating of any meter is determined by a single factor: the full-scale current of the movement, in this case 1 mA. “Ohms per volt” is the mathematical reciprocal of “volts per ohm,” which is defined by Ohm’s Law as current (I=E/R). Consequently, the full-scale current of the movement dictates the Ω/volt sensitivity of the meter, regardless of what ranges the designer equips it with through multiplier resistors. In this case, the meter movement’s full-scale current rating of 1 mA gives it a voltmeter sensitivity of 1000 Ω/V regardless of how we range it with multiplier resistors. To minimize the loading of a voltmeter on any circuit, the designer must seek to minimize the current draw of its movement. This can be accomplished by re-designing the movement itself for maximum sensitivity (less current required for full-scale deflection), but the tradeoff here is typically ruggedness: a more sensitive movement tends to be more fragile. Another approach is to electronically boost the current sent to the movement, so that very little current needs to be drawn from the circuit under test. This special electronic circuit is known as an amplifier, and the voltmeter thus constructed is an amplified voltmeter. 
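That the sensitivity rating depends only on full-scale current can be demonstrated numerically. This Python sketch computes the lead-to-lead resistance of the example meter (1 mA, 500 Ω movement) on each range and divides by the range:

```python
# The ohms-per-volt rating falls out of the multiplier arithmetic: on
# every range, total lead-to-lead resistance divided by the range equals
# the reciprocal of the movement's full-scale current (1 mA -> 1000 ohm/V).

I_FS, R_COIL = 1e-3, 500.0
for v_range in (1, 10, 100, 1000):
    r_multiplier = v_range / I_FS - R_COIL
    r_total = r_multiplier + R_COIL   # lead-to-lead resistance, this range
    assert abs(r_total / v_range - 1 / I_FS) < 1e-6   # always 1000 ohm/V
```

The coil resistance cancels out of the ratio entirely, which is why only the 1 mA rating matters.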
The internal workings of an amplifier are too complex to be discussed at this point, but suffice it to say that the circuit allows the measured voltage to control how much battery current is sent to the meter movement. Thus, the movement’s current needs are supplied by a battery internal to the voltmeter and not by the circuit under test. The amplifier still loads the circuit under test to some degree, but generally hundreds or thousands of times less than the meter movement would by itself. Before the advent of semiconductors known as “field-effect transistors,” vacuum tubes were used as amplifying devices to perform this boosting. Such vacuum-tube voltmeters, or VTVMs, were once very popular instruments for electronic test and measurement. Here is a photograph of a very old VTVM, with the vacuum tube exposed! Now, solid-state transistor amplifier circuits accomplish the same task in digital meter designs. While this approach (of using an amplifier to boost the measured signal current) works well, it vastly complicates the design of the meter, making it nearly impossible for the beginning electronics student to comprehend its internal workings. A final, and ingenious, solution to the problem of voltmeter loading is that of the potentiometric or null-balance instrument. It requires no advanced (electronic) circuitry or sensitive devices like transistors or vacuum tubes, but it does require greater technician involvement and skill. In a potentiometric instrument, a precision adjustable voltage source is compared against the measured voltage, and a sensitive device called a null detector is used to indicate when the two voltages are equal. In some circuit designs, a precision potentiometer is used to provide the adjustable voltage, hence the label potentiometric. When the voltages are equal, there will be zero current drawn from the circuit under test, and thus the measured voltage should be unaffected.
It is easy to show how this works with our last example, the high-resistance voltage divider circuit: The “null detector” is a sensitive device capable of indicating the presence of very small voltages. If an electromechanical meter movement is used as the null detector, it will have a spring-centered needle that can deflect in either direction so as to be useful for indicating a voltage of either polarity. As the purpose of a null detector is to accurately indicate a condition of zero voltage, rather than to indicate any specific (nonzero) quantity as a normal voltmeter would, the scale of the instrument used is irrelevant. Null detectors are typically designed to be as sensitive as possible in order to more precisely indicate a “null” or “balance” (zero voltage) condition. An extremely simple type of null detector is a set of audio headphones, the speakers within acting as a kind of meter movement. When a DC voltage is initially applied to a speaker, the resulting current through it will move the speaker cone and produce an audible “click.” Another “click” sound will be heard when the DC source is disconnected. Building on this principle, a sensitive null detector may be made from nothing more than headphones and a momentary contact switch: If a set of “8 ohm” headphones are used for this purpose, its sensitivity may be greatly increased by connecting it to a device called a transformer. The transformer exploits principles of electromagnetism to “transform” the voltage and current levels of electrical energy pulses. In this case, the type of transformer used is a step-down transformer, and it converts low-current pulses (created by closing and opening the pushbutton switch while connected to a small voltage source) into higher-current pulses to more efficiently drive the speaker cones inside the headphones. An “audio output” transformer with an impedance ratio of 1000:8 is ideal for this purpose. 
The transformer also increases detector sensitivity by accumulating the energy of a low-current signal in a magnetic field for sudden release into the headphone speakers when the switch is opened. Thus, it will produce louder “clicks” for detecting smaller signals: Connected to the potentiometric circuit as a null detector, the switch/transformer/headphone arrangement is used as such: The purpose of any null detector is to act like a laboratory balance scale, indicating when the two voltages are equal (absence of voltage between points 1 and 2) and nothing more. The laboratory scale balance beam doesn’t actually weigh anything; rather, it simply indicates equality between the unknown mass and the pile of standard (calibrated) masses. Likewise, the null detector simply indicates when the voltages at points 1 and 2 are equal, which (according to Kirchhoff’s Voltage Law) will be when the adjustable voltage source (the battery symbol with a diagonal arrow going through it) is precisely equal in voltage to the drop across R2. To operate this instrument, the technician would manually adjust the output of the precision voltage source until the null detector indicated exactly zero (if using audio headphones as the null detector, the technician would repeatedly press and release the pushbutton switch, listening for silence to indicate that the circuit was “balanced”), and then note the source voltage as indicated by a voltmeter connected across the precision voltage source, that indication being representative of the voltage across the lower 250 MΩ resistor: The voltmeter used to directly measure the precision source need not have an extremely high Ω/V sensitivity, because the source will supply all the current it needs to operate. So long as there is zero voltage across the null detector, there will be zero current between points 1 and 2, equating to no loading of the divider circuit under test.
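The balance condition reduces to a one-line application of KVL: the null detector sees the difference between the adjustable source and the unknown voltage. A minimal sketch, using the 12 volt drop from the divider example:

```python
# Null-balance condition (KVL): detector voltage is the difference between
# the adjustable precision source and the unknown voltage being measured.

v_unknown = 12.0  # drop across the lower 250 Mohm divider resistor

def null_voltage(v_adjustable):
    return v_adjustable - v_unknown

for v_adj in (11.0, 11.9, 12.0):
    print(f"source at {v_adj} V -> detector sees {null_voltage(v_adj):+.1f} V")
# Only when the source matches the unknown voltage does the detector read
# zero; at that point no current flows between points 1 and 2, so the
# divider circuit is unloaded.
```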
It is worth reiterating that this method, properly executed, places almost zero load upon the measured circuit. Ideally, it places absolutely no load on the tested circuit, but to achieve this ideal goal the null detector would have to have absolutely zero voltage across it, which would require an infinitely sensitive null meter and a perfect balance of voltage from the adjustable voltage source. However, despite its practical inability to achieve absolute zero loading, a potentiometric circuit is still an excellent technique for measuring voltage in high-resistance circuits. And unlike the electronic amplifier solution, which solves the problem with advanced technology, the potentiometric method achieves a hypothetically perfect solution by exploiting a fundamental law of electricity (KVL). Review • An ideal voltmeter has infinite resistance. • Too low of an internal resistance in a voltmeter will adversely affect the circuit being measured. • Vacuum tube voltmeters (VTVM’s), transistor voltmeters, and potentiometric circuits are all means of minimizing the load placed on a measured circuit. Of these methods, the potentiometric (“null-balance”) technique is the only one capable of placing zero load on the circuit. • A null detector is a device built for maximum sensitivity to small voltages or currents. It is used in potentiometric voltmeter circuits to indicate the absence of voltage between two points, thus indicating a condition of balance between an adjustable voltage source and the voltage being measured.
Ammeters Measure Electrical Current A meter designed to measure electrical current is popularly called an “ammeter” because the unit of measurement is “amps.” In ammeter designs, external resistors added to extend the usable range of the movement are connected in parallel with the movement rather than in series as is the case for voltmeters. This is because we want to divide the measured current, not the measured voltage, going to the movement and because current divider circuits are always formed by parallel resistances. Designing an Ammeter Taking the same meter movement as the voltmeter example, we can see that it would make a very limited instrument by itself, full-scale deflection occurring at only 1 mA. As is the case with extending a meter movement’s voltage-measuring ability, we would have to correspondingly re-label the movement’s scale so that it read differently for an extended current range. For example, if we wanted to design an ammeter to have a full-scale range of 5 amps using the same meter movement as before (having an intrinsic full-scale range of only 1 mA), we would have to re-label the movement’s scale to read 0 A on the far left and 5 A on the far right, rather than 0 mA to 1 mA as before. Whatever extended range is provided by the parallel-connected resistors must be represented graphically on the meter movement face.
Using 5 amps as an extended range for our sample movement, let’s determine the amount of parallel resistance necessary to “shunt,” or bypass, the majority of current so that only 1 mA will go through the movement with a total current of 5 A: From our given values of movement current, movement resistance, and total circuit (measured) current, we can determine the voltage across the meter movement (Ohm’s Law applied to the center column, E=IR): Knowing that the circuit formed by the movement and the shunt is of a parallel configuration, we know that the voltage across the movement, shunt, and test leads (total) must be the same: We also know that the current through the shunt must be the difference between the total current (5 amps) and the current through the movement (1 mA), because branch currents add in a parallel configuration: Then, using Ohm’s Law (R=E/I) in the right column, we can determine the necessary shunt resistance: Of course, we could have calculated the same value of just over 100 milli-ohms (100 mΩ) for the shunt by calculating total resistance (R=E/I; 0.5 volts/5 amps = 100 mΩ exactly), then working the parallel resistance formula backwards, but the arithmetic would have been more challenging: An Ammeter in Real-Life Designs In real life, the shunt resistor of an ammeter will usually be encased within the protective metal housing of the meter unit, hidden from sight. Note the construction of the ammeter in the following photograph: This particular ammeter is an automotive unit manufactured by Stewart-Warner. Although the D’Arsonval meter movement itself probably has a full-scale rating in the range of milliamps, the meter as a whole has a range of +/- 60 amps. The shunt resistor providing this high current range is enclosed within the metal housing of the meter. Note also with this particular meter that the needle centers at zero amps and can indicate either a “positive” current or a “negative” current. 
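The shunt calculation stepped through above reduces to three applications of Ohm's Law and Kirchhoff's Current Law, and is easy to verify numerically:

```python
# Shunt sizing for the 5 A range: 1 mA, 500 ohm movement from the text.

i_movement = 1e-3    # full-scale movement current
r_movement = 500.0   # movement resistance
i_total = 5.0        # desired full-scale range

v_common = i_movement * r_movement  # E = IR: 0.5 V across movement and shunt
i_shunt = i_total - i_movement      # branch currents add: 4.999 A
r_shunt = v_common / i_shunt        # R = E/I: just over 100 milliohms

print(f"shunt resistance: {r_shunt * 1000:.5f} milliohms")
```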
Connected to the battery charging circuit of an automobile, this meter is able to indicate a charging condition (electrons flowing from generator to battery) or a discharging condition (electrons flowing from battery to the rest of the car’s loads). Increasing an Ammeter’s Usable Range As is the case with multiple-range voltmeters, ammeters can be given more than one usable range by incorporating several shunt resistors switched with a multi-pole switch: Notice that the range resistors are connected through the switch so as to be in parallel with the meter movement, rather than in series as it was in the voltmeter design. The five-position switch makes contact with only one resistor at a time, of course. Each resistor is sized accordingly for a different full-scale range, based on the particular rating of the meter movement (1 mA, 500 Ω). With such a meter design, each resistor value is determined by the same technique, using a known total current, movement full-scale deflection rating, and movement resistance. For an ammeter with ranges of 100 mA, 1 A, 10 A, and 100 A, the shunt resistances would be as such: Notice that these shunt resistor values are very low! 5.00005 mΩ is 5.00005 milli-ohms, or 0.00500005 ohms! To achieve these low resistances, ammeter shunt resistors often have to be custom-made from relatively large-diameter wire or solid pieces of metal. One thing to be aware of when sizing ammeter shunt resistors is the factor of power dissipation. Unlike the voltmeter, an ammeter’s range resistors have to carry large amounts of current. If those shunt resistors are not sized accordingly, they may overheat and suffer damage, or at the very least lose accuracy due to overheating. 
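The range resistances quoted above, and the power each shunt must dissipate at full-scale indication, can be checked with the same method:

```python
# Multi-range shunt values and their full-scale power dissipation
# (1 mA, 500 ohm movement; ranges of 100 mA, 1 A, 10 A, and 100 A).

i_movement = 1e-3
r_movement = 500.0
v_common = i_movement * r_movement  # 0.5 V across movement and shunt

for i_range in (0.1, 1.0, 10.0, 100.0):
    i_shunt = i_range - i_movement
    r_shunt = v_common / i_shunt     # shunt resistance for this range
    p_shunt = v_common * i_shunt     # P = EI, dissipated by the shunt
    print(f"{i_range:>5} A range: {r_shunt:.8f} ohms, {p_shunt:.4f} W")
```

The 100 A range comes out to 0.00500005 Ω (5.00005 mΩ) dissipating almost exactly 50 watts, which is why that shunt demands custom construction.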
For the example meter above, the power dissipations at full-scale indication are (the double-squiggly lines represent “approximately equal to” in mathematics): A 1/8 watt resistor would work just fine for R4, a 1/2 watt resistor would suffice for R3, and a 5 watt for R2 (although resistors tend to maintain their long-term accuracy better if not operated near their rated power dissipation, so you might want to over-rate resistors R2 and R3), but precision 50 watt resistors are rare and expensive components indeed. A custom resistor made from metal stock or thick wire may have to be constructed for R1 to meet both the requirements of low resistance and high power rating. Sometimes, shunt resistors are used in conjunction with voltmeters of high input resistance to measure current. In these cases, the current through the voltmeter movement is small enough to be considered negligible, and the shunt resistance can be sized according to how many volts or millivolts of drop will be produced per amp of current: If, for example, the shunt resistor in the above circuit were sized at precisely 1 Ω, there would be 1 volt dropped across it for every amp of current through it. The voltmeter indication could then be taken as a direct indication of current through the shunt. For measuring very small currents, higher values of shunt resistance could be used to generate more voltage drop per given unit of current, thus extending the usable range of the (volt)meter down into lower amounts of current. The use of voltmeters in conjunction with low-value shunt resistances for the measurement of current is something commonly seen in industrial applications. Using a Shunt Resistor and a Voltmeter Instead of an Ammeter The use of a shunt resistor along with a voltmeter to measure current can be a useful trick for simplifying the task of frequent current measurements in a circuit.
Normally, to measure current through a circuit with an ammeter, the circuit would have to be broken (interrupted) and the ammeter inserted between the separated wire ends, like this: If we have a circuit where current needs to be measured often, or we would just like to make the process of current measurement more convenient, a shunt resistor could be placed between those points and left there permanently, current readings taken with a voltmeter as needed without interrupting continuity in the circuit: Of course, care must be taken in sizing the shunt resistor low enough so that it doesn’t adversely affect the circuit’s normal operation, but this is generally not difficult to do. This technique might also be useful in computer circuit analysis, where we might want to have the computer display current through a circuit in terms of a voltage (with SPICE, this would allow us to avoid the idiosyncrasy of reading negative current values): We would interpret the voltage reading across the shunt resistor (between circuit nodes 1 and 2 in the SPICE simulation) directly as amps, with 7.999E-04 being 0.7999 mA, or 799.9 µA. Ideally, 12 volts applied directly across 15 kΩ would give us exactly 0.8 mA, but the resistance of the shunt lessens that current just a tiny bit (as it would in real life). However, such a tiny error is generally well within acceptable limits of accuracy for either a simulation or a real circuit, and so shunt resistors can be used in all but the most demanding applications for accurate current measurement. Review • Ammeter ranges are created by adding parallel “shunt” resistors to the movement circuit, providing a precise current division. • Shunt resistors may have high power dissipations, so be careful when choosing parts for such meters! • Shunt resistors can be used in conjunction with high-resistance voltmeters as well as low-resistance ammeter movements, producing accurate voltage drops for given amounts of current. 
• Shunt resistors should be selected for as low a resistance value as possible to minimize their impact upon the circuit under test.
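The shunt-plus-voltmeter measurement simulated with SPICE above is easy to verify by hand:

```python
# Series circuit from the shunt measurement example: 12 V source,
# 15 kohm load, 1 ohm shunt read by a high-resistance voltmeter.

v_source = 12.0
r_load = 15e3
r_shunt = 1.0

i_circuit = v_source / (r_load + r_shunt)  # actual current with shunt in place
v_shunt = i_circuit * r_shunt              # voltmeter reading across the shunt
i_ideal = v_source / r_load                # 0.8 mA if the shunt were absent

print(f"reading: {v_shunt:.4e} V, read directly as {v_shunt * 1e6:.1f} uA")
```

The reading agrees with the 7.999E-04 figure from the simulation, a hair under the ideal 0.8 mA because of the shunt's own resistance.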
Just like voltmeters, ammeters tend to influence the amount of current in the circuits they’re connected to. However, unlike the ideal voltmeter, the ideal ammeter has zero internal resistance, so as to drop as little voltage as possible as electrons flow through it. Note that this ideal resistance value is exactly opposite that of a voltmeter. With voltmeters, we want as little current to be drawn as possible from the circuit under test. With ammeters, we want as little voltage to be dropped as possible while conducting current. Here is an extreme example of an ammeter’s effect upon a circuit: With the ammeter disconnected from this circuit, the current through the 3 Ω resistor would be 666.7 mA, and the current through the 1.5 Ω resistor would be 1.333 amps. If the ammeter had an internal resistance of 1/2 Ω, and it were inserted into one of the branches of this circuit, though, its resistance would seriously affect the measured branch current: Having effectively increased the left branch resistance from 3 Ω to 3.5 Ω, the ammeter will read 571.43 mA instead of 666.7 mA. Placing the same ammeter in the right branch would affect the current to an even greater extent: Now the right branch current is 1 amp instead of 1.333 amps, due to the increase in resistance created by the addition of the ammeter into the current path. When using standard ammeters that connect in series with the circuit being measured, it might not be practical or possible to redesign the meter for a lower input (lead-to-lead) resistance. However, if we were selecting a value of shunt resistor to place in the circuit for a current measurement based on voltage drop, and we had our choice of a wide range of resistances, it would be best to choose the lowest practical resistance for the application. Any more resistance than necessary and the shunt may impact the circuit adversely by adding excessive resistance in the current path.
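The branch-current figures above can be reproduced numerically. The 2 volt source value is not stated explicitly but is the value implied by the quoted currents (666.7 mA through 3 Ω, 1.333 A through 1.5 Ω), so it is an inferred assumption here:

```python
# Ammeter insertion error in the two-branch circuit. The 2 V source value
# is inferred from the stated branch currents; the 0.5 ohm ammeter
# resistance comes from the text.

v_source = 2.0
r_ammeter = 0.5

i_left_true = v_source / 3.0                 # 666.7 mA without the ammeter
i_left_meas = v_source / (3.0 + r_ammeter)   # 571.43 mA with ammeter inserted
i_right_true = v_source / 1.5                # 1.333 A without the ammeter
i_right_meas = v_source / (1.5 + r_ammeter)  # 1.000 A with ammeter inserted

print(f"left branch:  {i_left_true:.4f} A -> {i_left_meas:.4f} A")
print(f"right branch: {i_right_true:.4f} A -> {i_right_meas:.4f} A")
```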
One ingenious way to reduce the impact that a current-measuring device has on a circuit is to use the circuit wire as part of the ammeter movement itself. All current-carrying wires produce a magnetic field, the strength of which is in direct proportion to the strength of the current. By building an instrument that measures the strength of that magnetic field, a no-contact ammeter can be produced. Such a meter is able to measure the current through a conductor without even having to make physical contact with the circuit, much less break continuity or insert additional resistance. Ammeters of this design are made, and are called “clamp-on” meters because they have “jaws” which can be opened and then secured around a circuit wire. Clamp-on ammeters make for quick and safe current measurements, especially on high-power industrial circuits. Because the circuit under test has had no additional resistance inserted into it by a clamp-on meter, there is no error induced in taking a current measurement. The actual movement mechanism of a clamp-on ammeter is much the same as for an iron-vane instrument, except that there is no internal wire coil to generate the magnetic field. More modern designs of clamp-on ammeters utilize a small magnetic field detector device called a Hall-effect sensor to accurately determine field strength. Some clamp-on meters contain electronic amplifier circuitry to generate a small voltage proportional to the current in the wire between the jaws, that small voltage connected to a voltmeter for convenient readout by a technician. Thus, a clamp-on unit can be an accessory device to a voltmeter, for current measurement. 
A less accurate type of magnetic-field-sensing ammeter than the clamp-on style is shown in the following photograph: The operating principle for this ammeter is identical to the clamp-on style of meter: the circular magnetic field surrounding a current-carrying conductor deflects the meter’s needle, producing an indication on the scale. Note how there are two current scales on this particular meter: +/- 75 amps and +/- 400 amps. These two measurement scales correspond to the two sets of notches on the back of the meter. Depending on which set of notches the current-carrying conductor is laid in, a given strength of magnetic field will have a different amount of effect on the needle. In effect, the two different positions of the conductor relative to the movement act as two different range resistors in a direct-connection style of ammeter. Review • An ideal ammeter has zero resistance. • A “clamp-on” ammeter measures current through a wire by measuring the strength of the magnetic field around it rather than by becoming part of the circuit, making it an ideal ammeter. • Clamp-on meters make for quick and safe current measurements, because there is no conductive contact between the meter and the circuit.
Though mechanical ohmmeter (resistance meter) designs are rarely used today, having largely been superseded by digital instruments, their operation is nonetheless intriguing and worthy of study. The purpose of an ohmmeter, of course, is to measure the resistance placed between its leads. This resistance reading is indicated through a mechanical meter movement which operates on electric current. The ohmmeter must then have an internal source of voltage to create the necessary current to operate the movement, and also have appropriate ranging resistors to allow just the right amount of current through the movement at any given resistance. Starting with a simple movement and battery circuit, let’s see how it would function as an ohmmeter: When there is infinite resistance (no continuity between test leads), there is zero current through the meter movement, and the needle points toward the far left of the scale. In this regard, the ohmmeter indication is “backwards” because maximum indication (infinity) is on the left of the scale, while voltage and current meters have zero at the left of their scales. If the test leads of this ohmmeter are directly shorted together (measuring zero Ω), the meter movement will have a maximum amount of current through it, limited only by the battery voltage and the movement’s internal resistance: With 9 volts of battery potential and only 500 Ω of movement resistance, our circuit current will be 18 mA, which is far beyond the full-scale rating of the movement. Such an excess of current will likely damage the meter. Not only that, but having such a condition limits the usefulness of the device. If full left-of-scale on the meter face represents an infinite amount of resistance, then full right-of-scale should represent zero. Currently, our design “pegs” the meter movement hard to the right when zero resistance is attached between the leads. 
We need a way to make it so that the movement just registers full-scale when the test leads are shorted together. This is accomplished by adding a series resistance to the meter’s circuit: To determine the proper value for R, we calculate the total circuit resistance needed to limit current to 1 mA (full-scale deflection on the movement) with 9 volts of potential from the battery, then subtract the movement’s internal resistance from that figure: Now that the right value for R has been calculated, we’re still left with a problem of meter range. On the left side of the scale we have “infinity” and on the right side we have zero. Besides being “backwards” from the scales of voltmeters and ammeters, this scale is strange because it goes from nothing to everything, rather than from nothing to a finite value (such as 10 volts, 1 amp, etc.). One might pause to wonder, “what does middle-of-scale represent? What figure lies exactly between zero and infinity?” Infinity is more than just a very big amount: it is an incalculable quantity, larger than any definite number ever could be. If half-scale indication on any other type of meter represents 1/2 of the full-scale range value, then what is half of infinity on an ohmmeter scale? The answer to this paradox is a nonlinear scale. Simply put, the scale of an ohmmeter does not smoothly progress from zero to infinity as the needle sweeps from right to left. Rather, the scale starts out “expanded” at the right-hand side, with the successive resistance values growing closer and closer to each other toward the left side of the scale: Infinity cannot be approached in a linear (even) fashion, because the scale would never get there! With a nonlinear scale, the amount of resistance spanned for any given distance on the scale increases as the scale progresses toward infinity, making infinity an attainable goal. We still have a question of range for our ohmmeter, though. 
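The series resistor value derived above can be verified in two lines:

```python
# Series resistor for the ohmmeter: limit current to full scale (1 mA)
# with the leads shorted, given a 9 V battery and 500 ohm movement.

v_battery = 9.0
i_full_scale = 1e-3
r_movement = 500.0

r_total = v_battery / i_full_scale  # 9 kohm total to allow exactly 1 mA
r_series = r_total - r_movement     # subtract movement resistance: 8.5 kohm

print(f"series resistor R = {r_series:.0f} ohms")
```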
What value of resistance between the test leads will cause exactly 1/2 scale deflection of the needle? If we know that the movement has a full-scale rating of 1 mA, then 0.5 mA (500 µA) must be the value needed for half-scale deflection. Following our design with the 9 volt battery as a source we get: With an internal movement resistance of 500 Ω and a series range resistor of 8.5 kΩ, this leaves 9 kΩ for an external (lead-to-lead) test resistance at 1/2 scale. In other words, the test resistance giving 1/2 scale deflection in an ohmmeter is equal in value to the (internal) series total resistance of the meter circuit. Using Ohm’s Law a few more times, we can determine the test resistance value for 1/4 and 3/4 scale deflection as well: 1/4 scale deflection (0.25 mA of meter current): 3/4 scale deflection (0.75 mA of meter current): So, the scale for this ohmmeter looks something like this: One major problem with this design is its reliance upon a stable battery voltage for accurate resistance reading. If the battery voltage decreases (as all chemical batteries do with age and use), the ohmmeter scale will lose accuracy. With the series range resistor at a constant value of 8.5 kΩ and the battery voltage decreasing, the meter will no longer deflect full-scale to the right when the test leads are shorted together (0 Ω). Likewise, a test resistance of 9 kΩ will fail to deflect the needle to exactly 1/2 scale with a lesser battery voltage. There are design techniques used to compensate for varying battery voltage, but they do not completely take care of the problem and are to be considered approximations at best. For this reason, and for the fact of the nonlinear scale, this type of ohmmeter is never considered to be a precision instrument. One final caveat needs to be mentioned with regard to ohmmeters: they only function correctly when measuring resistance that is not being powered by a voltage or current source. 
In other words, you cannot measure resistance with an ohmmeter on a “live” circuit! The reason for this is simple: the ohmmeter’s accurate indication depends on the only source of voltage being its internal battery. The presence of any voltage across the component to be measured will interfere with the ohmmeter’s operation. If the voltage is large enough, it may even damage the ohmmeter. Review • Ohmmeters contain internal sources of voltage to supply power in taking resistance measurements. • An analog ohmmeter scale is “backwards” from that of a voltmeter or ammeter, the movement needle reading zero resistance at full-scale and infinite resistance at rest. • Analog ohmmeters also have nonlinear scales, “expanded” at the low end of the scale and “compressed” at the high end to be able to span from zero to infinite resistance. • Analog ohmmeters are not precision instruments. • Ohmmeters should never be connected to an energized circuit (that is, a circuit with its own source of voltage). Any voltage applied to the test leads of an ohmmeter will invalidate its reading.
Most ohmmeters of the design shown in the previous section utilize a battery of relatively low voltage, usually nine volts or less. This is perfectly adequate for measuring resistances under several mega-ohms (MΩ), but when extremely high resistances need to be measured, a 9 volt battery is insufficient for generating enough current to actuate an electromechanical meter movement. Also, as discussed in an earlier chapter, resistance is not always a stable (linear) quantity. This is especially true of non-metals. Recall the graph of current over voltage for a small air gap (less than an inch): While this is an extreme example of nonlinear conduction, other substances exhibit similar insulating/conducting properties when exposed to high voltages. Obviously, an ohmmeter using a low-voltage battery as a source of power cannot measure resistance at the ionization potential of a gas, or at the breakdown voltage of an insulator. If such resistance values need to be measured, nothing but a high-voltage ohmmeter will suffice. The most direct method of high-voltage resistance measurement involves simply substituting a higher voltage battery in the same basic design of ohmmeter investigated earlier: Knowing, however, that the resistance of some materials tends to change with applied voltage, it would be advantageous to be able to adjust the voltage of this ohmmeter to obtain resistance measurements under different conditions: Unfortunately, this would create a calibration problem for the meter. If the meter movement deflects full-scale with a certain amount of current through it, the full-scale range of the meter in ohms would change as the source voltage changed. Imagine connecting a stable resistance across the test leads of this ohmmeter while varying the source voltage: as the voltage is increased, there will be more current through the meter movement, hence a greater amount of deflection. 
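The calibration problem described above can be made concrete with a quick sketch. The 9 kΩ internal resistance below is an assumed figure (the value from the earlier low-voltage ohmmeter design), used only to show how deflection drifts with source voltage for one fixed test resistance:

```python
# A simple-design ohmmeter with a variable source: the same fixed test
# resistance produces a different deflection at every voltage setting.
# Internal resistance value is assumed for illustration.

i_full_scale = 1e-3
r_internal = 9000.0  # movement plus series resistor (assumed, as in the
                     # earlier 9 V design)
r_test = 9000.0      # one fixed resistance across the test leads

for v_source in (9.0, 50.0, 100.0):
    i_meter = v_source / (r_internal + r_test)
    deflection = i_meter / i_full_scale
    print(f"{v_source:>5} V source -> {deflection:.2f} of full scale")
```

At 9 volts the needle sits at half scale; at higher voltages the same resistance pegs the movement past full scale, so the ohms scale would have to be redrawn for every voltage setting.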
What we really need is a meter movement that will produce a consistent, stable deflection for any stable resistance value measured, regardless of the applied voltage. Accomplishing this design goal requires a special meter movement, one that is peculiar to megohmmeters, or meggers, as these instruments are known. The numbered, rectangular blocks in the above illustration are cross-sectional representations of wire coils. These three coils all move with the needle mechanism. There is no spring mechanism to return the needle to a set position. When the movement is unpowered, the needle will randomly “float.” The coils are electrically connected like this: With infinite resistance between the test leads (open circuit), there will be no current through coil 1, only through coils 2 and 3. When energized, these coils try to center themselves in the gap between the two magnet poles, driving the needle fully to the right of the scale where it points to “infinity.” Any current through coil 1 (through a measured resistance connected between the test leads) tends to drive the needle to the left of scale, back to zero. The internal resistor values of the meter movement are calibrated so that when the test leads are shorted together, the needle deflects exactly to the 0 Ω position. Because any variations in battery voltage will affect the torque generated by both sets of coils (coils 2 and 3, which drive the needle to the right, and coil 1, which drives the needle to the left), those variations will have no effect on the calibration of the movement. In other words, the accuracy of this ohmmeter movement is unaffected by battery voltage: a given amount of measured resistance will produce a certain needle deflection, no matter how much or little battery voltage is present. The only effect that a variation in voltage will have on meter indication is the degree to which the measured resistance changes with applied voltage.
So, if we were to use a megger to measure the resistance of a gas-discharge lamp, it would read very high resistance (needle to the far right of the scale) for low voltages and low resistance (needle moves to the left of the scale) for high voltages. This is precisely what we expect from a good high-voltage ohmmeter: to provide accurate indication of subject resistance under different circumstances. For maximum safety, most meggers are equipped with hand-crank generators for producing the high DC voltage (up to 1000 volts). If the operator of the meter receives a shock from the high voltage, the condition will be self-correcting, as he or she will naturally stop cranking the generator! Sometimes a “slip clutch” is used to stabilize generator speed under different cranking conditions, so as to provide a fairly stable voltage whether it is cranked fast or slow. Multiple voltage output levels from the generator are selected by a selector switch. A simple hand-crank megger is shown in this photograph: Some meggers are battery-powered to provide greater precision in output voltage. For safety reasons these meggers are activated by a momentary-contact pushbutton switch, so the switch cannot be left in the “on” position and pose a significant shock hazard to the meter operator. Real meggers are equipped with three connection terminals, labeled Line, Earth, and Guard. The schematic is quite similar to the simplified version shown earlier: Resistance is measured between the Line and Earth terminals, where current will travel through coil 1. The “Guard” terminal is provided for special testing situations where one resistance must be isolated from another.
Take for instance this scenario where the insulation resistance is to be tested in a two-wire cable: To measure insulation resistance from a conductor to the outside of the cable, we need to connect the “Line” lead of the megger to one of the conductors (say, the second conductor) and connect the “Earth” lead of the megger to a wire wrapped around the sheath of the cable: In this configuration the megger should read the resistance between one conductor and the outside sheath. Or will it? If we draw a schematic diagram showing all insulation resistances as resistor symbols, what we have looks like this: Rather than just measure the resistance of the second conductor to the sheath (Rc2-s), what we’ll actually measure is that resistance in parallel with the series combination of conductor-to-conductor resistance (Rc1-c2) and the first conductor’s resistance to the sheath (Rc1-s). If we don’t care about this fact, we can proceed with the test as configured. If we desire to measure only the resistance between the second conductor and the sheath (Rc2-s), then we need to use the megger’s “Guard” terminal: Now the circuit schematic looks like this: Connecting the “Guard” terminal to the first conductor places the two conductors at almost equal potential. With little or no voltage between them, the insulation resistance is nearly infinite, and thus there will be no current between the two conductors. Consequently, the megger’s resistance indication will be based exclusively on the current through the second conductor’s insulation, through the cable sheath, and through the wire wrapped around it, not on the current leaking through the first conductor’s insulation. Meggers are field instruments: that is, they are designed to be portable and operated by a technician on the job site with as much ease as a regular ohmmeter. They are very useful for checking high-resistance “short” failures between wires caused by wet or degraded insulation.
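The effect of skipping the Guard terminal can be quantified with a parallel-resistance calculation. This sketch assumes invented insulation resistance values for the three paths named above:

```python
# Without the Guard connection, the megger reads Rc2-s in parallel with
# the series path Rc1-c2 + Rc1-s, understating the true insulation value.

def parallel(r1, r2):
    """Equivalent resistance of two resistances in parallel."""
    return r1 * r2 / (r1 + r2)

R_C2_S = 500e6    # ohms, second conductor to sheath (the quantity we want)
R_C1_C2 = 200e6   # ohms, conductor to conductor
R_C1_S = 300e6    # ohms, first conductor to sheath

unguarded = parallel(R_C2_S, R_C1_C2 + R_C1_S)
print(f"True Rc2-s:        {R_C2_S / 1e6:.0f} Mohm")
print(f"Unguarded reading: {unguarded / 1e6:.0f} Mohm")
```

With the Guard terminal tied to the first conductor, the leakage path through Rc1-c2 carries no current and the instrument indicates Rc2-s alone.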
Because they utilize such high voltages, they are not as affected by stray voltages (voltages less than 1 volt produced by electrochemical reactions between conductors, or “induced” by neighboring magnetic fields) as ordinary ohmmeters. For a more thorough test of wire insulation, another high-voltage ohmmeter commonly called a hi-pot tester is used. These specialized instruments produce voltages in excess of 1 kV, and may be used for testing the insulating effectiveness of oil, ceramic insulators, and even the integrity of other high-voltage instruments. Because they are capable of producing such high voltages, they must be operated with the utmost care, and only by trained personnel. It should be noted that hi-pot testers and even meggers (in certain conditions) are capable of damaging wire insulation if incorrectly used. Once an insulating material has been subjected to breakdown by the application of an excessive voltage, its ability to electrically insulate will be compromised. Again, these instruments are to be used only by trained personnel.
Seeing as how a common meter movement can be made to function as a voltmeter, ammeter, or ohmmeter simply by connecting it to different external resistor networks, it should make sense that a multi-purpose meter (“multimeter”) could be designed in one unit with the appropriate switch(es) and resistors. For general purpose electronics work, the multimeter reigns supreme as the instrument of choice. No other device is able to do so much with so little an investment in parts and elegant simplicity of operation. As with most things in the world of electronics, the advent of solid-state components like transistors has revolutionized the way things are done, and multimeter design is no exception to this rule. However, in keeping with this chapter’s emphasis on analog (“old-fashioned”) meter technology, I’ll show you a few pre-transistor meters. The unit shown above is typical of a handheld analog multimeter, with ranges for voltage, current, and resistance measurement. Note the many scales on the face of the meter movement for the different ranges and functions selectable by the rotary switch. The wires for connecting this instrument to a circuit (the “test leads”) are plugged into the two copper jacks (socket holes) at the bottom-center of the meter face marked “- TEST +”, black and red. This multimeter (Barnett brand) takes a slightly different design approach than the previous unit. Note how the rotary selector switch has fewer positions than the previous meter, but also how there are many more jacks into which the test leads may be plugged. Each one of those jacks is labeled with a number indicating the respective full-scale range of the meter. Lastly, here is a picture of a digital multimeter. Note that the familiar meter movement has been replaced by a blank, gray-colored display screen. When powered, numerical digits appear in that screen area, depicting the amount of voltage, current, or resistance being measured.
This particular brand and model of digital meter has a rotary selector switch and four jacks into which test leads can be plugged. Two leads—one red and one black—are shown plugged into the meter. A close examination of this meter will reveal one “common” jack for the black test lead and three others for the red test lead. The jack into which the red lead is shown inserted is labeled for voltage and resistance measurement, while the other two jacks are labeled for current (A, mA, and µA) measurement. This is a wise design feature of the multimeter, requiring the user to move a test lead plug from one jack to another in order to switch from the voltage measurement to the current measurement function. It would be hazardous to have the meter set in current measurement mode while connected across a significant source of voltage, because of the meter’s low input resistance in that mode. Making it necessary to move a test lead plug rather than just flip the selector switch to a different position helps ensure that the meter doesn’t get set to measure current unintentionally. Note that the selector switch still has different positions for voltage and current measurement, so in order for the user to switch between these two modes of measurement they must switch the position of the red test lead and move the selector switch to a different position. Also note that neither the selector switch nor the jacks are labeled with measurement ranges. In other words, there are no “100 volt” or “10 volt” or “1 volt” ranges (or any equivalent range steps) on this meter. Rather, this meter is “autoranging,” meaning that it automatically picks the appropriate range for the quantity being measured. Autoranging is a feature found only on digital meters, though not on all digital meters. No two models of multimeters are designed to operate exactly the same, even if they’re manufactured by the same company. In order to fully understand the operation of any multimeter, the owner’s manual must be consulted.
Here is a schematic for a simple analog volt/ammeter: In the switch’s three lower (most counter-clockwise) positions, the meter movement is connected to the Common and V jacks through one of three different series range resistors (Rmultiplier1 through Rmultiplier3), and so acts as a voltmeter. In the fourth position, the meter movement is connected in parallel with the shunt resistor, and so acts as an ammeter for any current entering the common jack and exiting the A jack. In the last (furthest clockwise) position, the meter movement is disconnected from either red jack, but short-circuited through the switch. This short-circuiting creates a dampening effect on the needle, guarding against mechanical shock damage when the meter is handled and moved. If an ohmmeter function is desired in this multimeter design, it may be substituted for one of the three voltage ranges as such: With all three fundamental functions available, this multimeter may also be known as a volt-ohm-milliammeter. Obtaining a reading from an analog multimeter when there is a multitude of ranges and only one meter movement may seem daunting to the new technician. On an analog multimeter, the meter movement is marked with several scales, each one useful for at least one range setting. Here is a close-up photograph of the scale from the Barnett multimeter shown earlier in this section: Note that there are three types of scales on this meter face: a green scale for resistance at the top, a set of black scales for DC voltage and current in the middle, and a set of blue scales for AC voltage and current at the bottom. Both the DC and AC scales have three sub-scales, one ranging 0 to 2.5, one ranging 0 to 5, and one ranging 0 to 10. The meter operator must choose whichever scale best matches the range switch and plug settings in order to properly interpret the meter’s indication. 
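Sizing the multiplier resistors in a circuit like this follows directly from Ohm's Law. The sketch below assumes a hypothetical 1 mA, 500 Ω movement (values invented for illustration, not taken from the text) and computes the series resistance for each voltage range:

```python
# For full-scale deflection at the top of each voltage range, the total
# series resistance must equal range_volts / I_full_scale; the multiplier
# makes up whatever the movement's own resistance does not supply.

I_FULL_SCALE = 1e-3    # amps -- assumed movement full-scale current
R_MOVEMENT = 500.0     # ohms -- assumed movement coil resistance

def multiplier(range_volts):
    """Series multiplier resistance for the given full-scale voltage."""
    return range_volts / I_FULL_SCALE - R_MOVEMENT

for v in (10, 50, 250):
    print(f"{v:4d} V range -> Rmultiplier = {multiplier(v):,.0f} ohms")
```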
This particular multimeter has several basic voltage measurement ranges: 2.5 volts, 10 volts, 50 volts, 250 volts, 500 volts, and 1000 volts. With the use of the voltage range extender unit at the top of the multimeter, voltages up to 5000 volts can be measured. Suppose the meter operator chose to switch the meter into the “volt” function and plug the red test lead into the 10 volt jack. To interpret the needle’s position, he or she would have to read the scale ending with the number “10”. If they moved the red test plug into the 250 volt jack, however, they would read the meter indication on the scale ending with “2.5”, multiplying the direct indication by a factor of 100 in order to find what the measured voltage was. If current is measured with this meter, another jack is chosen for the red plug to be inserted into and the range is selected via a rotary switch. This close-up photograph shows the switch set to the 2.5 mA position: Note how all current ranges are power-of-ten multiples of the three scale ranges shown on the meter face: 2.5, 5, and 10. In some range settings, such as the 2.5 mA for example, the meter indication may be read directly on the 0 to 2.5 scale. For other range settings (250 µA, 50 mA, 100 mA, and 500 mA), the meter indication must be read off the appropriate scale and then multiplied by either 10 or 100 to obtain the real figure. The highest current range available on this meter is obtained with the rotary switch in the 2.5/10 amp position. The distinction between 2.5 amps and 10 amps is made by the red test plug position: a special “10 amp” jack next to the regular current-measuring jack provides an alternative plug setting to select the higher range. Resistance in ohms, of course, is read by a nonlinear scale at the top of the meter face. It is “backward,” just like all battery-operated analog ohmmeters, with zero at the right-hand side of the face and infinity at the left-hand side. 
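Interpreting a needle position against the range setting reduces to a single multiplication. A sketch, using the 2.5/5/10 scale families and range pairings described above:

```python
# The printed scale runs to scale_full_scale (2.5, 5, or 10); the chosen
# range runs to range_full_scale; the true reading is the needle value
# times the implied power-of-ten multiplier.

def interpret(needle_reading, range_full_scale, scale_full_scale):
    """Actual measured value from a needle position on a shared scale."""
    return needle_reading * (range_full_scale / scale_full_scale)

# Needle at 1.7 on the 0-2.5 scale with the plug in the 250 V jack:
print(interpret(1.7, 250, 2.5), "volts")   # multiplier of 100
# Needle at 6.8 on the 0-10 scale with the plug in the 10 V jack:
print(interpret(6.8, 10, 10), "volts")     # multiplier of 1
```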
There is only one jack provided on this particular multimeter for “ohms,” so different resistance-measuring ranges must be selected by the rotary switch. Notice on the switch how five different “multiplier” settings are provided for measuring resistance: Rx1, Rx10, Rx100, Rx1000, and Rx10000. Just as you might suspect, the meter indication is given by multiplying whatever needle position is shown on the meter face by the power-of-ten multiplying factor set by the rotary switch.
Suppose we wished to measure the resistance of some component located a significant distance away from our ohmmeter. Such a scenario would be problematic, because an ohmmeter measures all resistance in the circuit loop, which includes the resistance of the wires (Rwire) connecting the ohmmeter to the component being measured (Rsubject): Usually, wire resistance is very small (only a few ohms per hundreds of feet, depending primarily on the gauge (size) of the wire), but if the connecting wires are very long, and/or the component to be measured has a very low resistance anyway, the measurement error introduced by wire resistance will be substantial. An ingenious method of measuring the subject resistance in a situation like this involves the use of both an ammeter and a voltmeter. We know from Ohm’s Law that resistance is equal to voltage divided by current (R = E/I). Thus, we should be able to determine the resistance of the subject component if we measure the current going through it and the voltage dropped across it: Current is the same at all points in the circuit, because it is a series loop. Because we’re only measuring voltage dropped across the subject resistance (and not the wires’ resistances), though, the calculated resistance is indicative of the subject component’s resistance (Rsubject) alone. Our goal, though, was to measure this subject resistance from a distance, so our voltmeter must be located somewhere near the ammeter, connected across the subject resistance by another pair of wires containing resistance: At first it appears that we have lost any advantage of measuring resistance this way, because the voltmeter now has to measure voltage through a long pair of (resistive) wires, introducing stray resistance back into the measuring circuit again. However, upon closer inspection it is seen that nothing is lost at all, because the voltmeter’s wires carry minuscule current.
Thus, those long lengths of wire connecting the voltmeter across the subject resistance will drop insignificant amounts of voltage, resulting in a voltmeter indication that is very nearly the same as if it were connected directly across the subject resistance: Any voltage dropped across the main current-carrying wires will not be measured by the voltmeter, and so does not factor into the resistance calculation at all. Measurement accuracy may be improved even further if the voltmeter’s current is kept to a minimum, either by using a high-quality (low full-scale current) movement and/or a potentiometric (null-balance) system. This method of measurement which avoids errors caused by wire resistance is called the Kelvin, or 4-wire, method. Special connecting clips called Kelvin clips are made to facilitate this kind of connection across a subject resistance: In regular, “alligator” style clips, both halves of the jaw are electrically common to each other, usually joined at the hinge point. In Kelvin clips, the jaw halves are insulated from each other at the hinge point, only contacting at the tips where they clasp the wire or terminal of the subject being measured. Thus, current through the “C” (“current”) jaw halves does not go through the “P” (“potential,” or voltage) jaw halves, and will not create any error-inducing voltage drop along their length: The same principle of using different contact points for current conduction and voltage measurement is used in precision shunt resistors for measuring large amounts of current. As discussed previously, shunt resistors function as current measurement devices by dropping a precise amount of voltage for every amp of current through them, the voltage drop being measured by a voltmeter. In this sense, a precision shunt resistor “converts” a current value into a proportional voltage value.
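A numeric sketch makes the payoff of the 4-wire method concrete. The values below are invented: a half-ohm subject at the end of wires with 1.2 Ω apiece, carrying a 1 A test current:

```python
# 2-wire: the ohmmeter lumps both connecting wires in with the subject.
# 4-wire (Kelvin): the voltmeter senses only the subject's drop, and the
# negligible voltmeter current means its own wires drop almost nothing.

r_subject = 0.5   # ohms, low-resistance component under test
r_wire = 1.2      # ohms, each long current-carrying wire
i_test = 1.0      # amps of test current around the loop

two_wire_reading = r_wire + r_subject + r_wire      # wires included
e_subject = i_test * r_subject                      # volts across subject only
four_wire_reading = e_subject / i_test              # R = E / I

print(f"2-wire reading: {two_wire_reading:.1f} ohms")
print(f"4-wire reading: {four_wire_reading:.1f} ohms")
```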
Thus, current may be accurately measured by measuring voltage dropped across the shunt: Current measurement using a shunt resistor and voltmeter is particularly well-suited for applications involving especially large magnitudes of current. In such applications, the shunt resistor’s resistance will likely be on the order of milliohms or microohms, so that only a modest amount of voltage will be dropped at full current. Resistance this low is comparable to wire connection resistance, which means voltage across such a shunt must be measured in such a way as to avoid detecting voltage dropped across the current-carrying wire connections, lest huge measurement errors be induced. In order that the voltmeter measure only the voltage dropped by the shunt resistance itself, without any stray voltages originating from wire or connection resistance, shunts are usually equipped with four connection terminals: In metrological (metrology = “the science of measurement”) applications, where accuracy is of paramount importance, highly precise “standard” resistors are also equipped with four terminals: two for carrying the measured current, and two for conveying the resistor’s voltage drop to the voltmeter. This way, the voltmeter only measures voltage dropped across the precision resistance itself, without any stray voltages dropped across current-carrying wires or wire-to-terminal connection resistances. The following photograph shows a precision standard resistor of 1 Ω value immersed in a temperature-controlled oil bath with a few other standard resistors. Note the two large, outer terminals for current, and the two small connection terminals for voltage: Here is another, older (pre-World War II) standard resistor of German manufacture.
This unit has a resistance of 0.001 Ω, and again the four terminal connection points can be seen as black knobs (metal pads underneath each knob for direct metal-to-metal connection with the wires), two large knobs for securing the current-carrying wires, and two smaller knobs for securing the voltmeter (“potential”) wires: Appreciation is extended to the Fluke Corporation in Everett, Washington for allowing me to photograph these expensive and somewhat rare standard resistors in their primary standards laboratory. It should be noted that resistance measurement using both an ammeter and a voltmeter is subject to compound error. Because the accuracy of both instruments factors in to the final result, the overall measurement accuracy may be worse than either instrument considered alone. For instance, if the ammeter is accurate to +/- 1% and the voltmeter is also accurate to +/- 1%, any measurement dependent on the indications of both instruments may be inaccurate by as much as +/- 2%. Greater accuracy may be obtained by replacing the ammeter with a standard resistor, used as a current-measuring shunt. There will still be compound error between the standard resistor and the voltmeter used to measure voltage drop, but this will be less than with a voltmeter + ammeter arrangement because typical standard resistor accuracy far exceeds typical ammeter accuracy. Using Kelvin clips to make connection with the subject resistance, the circuit looks something like this: All current-carrying wires in the above circuit are shown in “bold,” to easily distinguish them from wires connecting the voltmeter across both resistances (Rsubject and Rstandard). Ideally, a potentiometric voltmeter is used to ensure as little current through the “potential” wires as possible. The Kelvin measurement can be a practical tool for finding poor connections or unexpected resistance in an electrical circuit. 
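The compound-error claim above can be checked with a worst-case calculation, here using invented true readings and the ±1% instrument accuracies from the example:

```python
# Worst case for R = E / I: the voltmeter reads 1% high while the
# ammeter reads 1% low, so the errors compound rather than cancel.

e_true, i_true = 10.0, 2.0      # hypothetical true voltage and current
r_true = e_true / i_true        # 5 ohms

r_worst = (e_true * 1.01) / (i_true * 0.99)
error_pct = (r_worst / r_true - 1.0) * 100.0
print(f"worst-case resistance error: {error_pct:+.2f}%")
```

The result is slightly over +2% (1.01/0.99 ≈ 1.0202), matching the "as much as ±2%" figure quoted above.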
Connect a DC power supply to the circuit and adjust the power supply so that it supplies a constant current to the circuit as shown in the diagram above (within the circuit’s capabilities, of course). With a digital multimeter set to measure DC voltage, measure the voltage drop across various points in the circuit. If you know the wire size, you can estimate the voltage drop you should see and compare this to the voltage drop you measure. This can be a quick and effective method of finding poor connections in wiring exposed to the elements, such as in the lighting circuits of a trailer. It can also work well for unpowered AC conductors (make sure the AC power cannot be turned on). For example, you can measure the voltage drop across a light switch and determine if the wiring connections to the switch or the switch’s contacts are suspect. To be most effective using this technique, you should also measure the same type of circuits after they are newly made so you have a feel for the “correct” values. If you use this technique on new circuits and put the results in a log book, you have valuable information for troubleshooting in the future.
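Estimating the voltage drop you "should" see is a one-line calculation once the wire size is known. The sketch below assumes 14 AWG copper at a typical handbook figure of roughly 2.525 Ω per 1000 ft; treat the constant as an assumption to be replaced with the value for your actual wire:

```python
# Expected drop along one conductor = length * ohms-per-foot * current.
# Compare this estimate against the measured drop; a much larger
# measurement suggests a poor connection somewhere in the run.

OHMS_PER_FOOT_14AWG = 2.525 / 1000.0   # assumed handbook value

def expected_drop(length_ft, current_amps):
    """Expected voltage drop along a single conductor."""
    return length_ft * OHMS_PER_FOOT_14AWG * current_amps

# 25 ft of trailer wiring carrying a constant 2 A test current:
drop = expected_drop(25, 2.0)
print(f"expected drop: {drop * 1000:.1f} mV")
```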
No text on electrical metering could be called complete without a section on bridge circuits. These ingenious circuits make use of a null-balance meter to compare two voltages, just like the laboratory balance scale compares two weights and indicates when they’re equal. Unlike the “potentiometer” circuit used to simply measure an unknown voltage, bridge circuits can be used to measure all kinds of electrical values, not the least of which being resistance. The standard bridge circuit, often called a Wheatstone bridge, looks something like this: When the voltage between point 1 and the negative side of the battery is equal to the voltage between point 2 and the negative side of the battery, the null detector will indicate zero and the bridge is said to be “balanced.” The bridge’s state of balance is solely dependent on the ratios of Ra/Rb and R1/R2, and is quite independent of the supply voltage (battery). To measure resistance with a Wheatstone bridge, an unknown resistance is connected in the place of Ra or Rb, while the other three resistors are precision devices of known value. Either of the other three resistors can be replaced or adjusted until the bridge is balanced, and when balance has been reached the unknown resistor value can be determined from the ratios of the known resistances. A requirement for this to be a measurement system is to have a set of variable resistors available whose resistances are precisely known, to serve as reference standards. For example, if we connect a bridge circuit to measure an unknown resistance Rx, we will have to know the exact values of the other three resistors at balance to determine the value of Rx: Each of the four resistances in a bridge circuit are referred to as arms. The resistor in series with the unknown resistance Rx (this would be Ra in the above schematic) is commonly called the rheostat of the bridge, while the other two resistors are called the ratio arms of the bridge. 
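At balance, the arithmetic is just a ratio, as the text says. A sketch, with the unknown standing in Rb's position and hypothetical arm values:

```python
# Balance condition from the text: Ra/Rb = R1/R2. With Rx in Rb's
# position, balance gives Ra/Rx = R1/R2, so Rx = Ra * R2 / R1.

def wheatstone_rx(ra, r1, r2):
    """Unknown resistance (occupying Rb's position) at bridge balance."""
    return ra * r2 / r1

# 1 kohm rheostat arm balanced against 10:1 ratio arms:
print(wheatstone_rx(ra=1000.0, r1=10_000.0, r2=1_000.0), "ohms")
```

Note that the supply voltage appears nowhere in the calculation, reflecting the text's point that balance depends only on the resistance ratios.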
Accurate and stable resistance standards, thankfully, are not that difficult to construct. In fact, they were some of the first electrical “standard” devices made for scientific purposes. Here is a photograph of an antique resistance standard unit: This resistance standard shown here is variable in discrete steps: the amount of resistance between the connection terminals could be varied with the number and pattern of removable copper plugs inserted into sockets. Wheatstone bridges are considered a superior means of resistance measurement to the series battery-movement-resistor meter circuit discussed in the last section. Unlike that circuit, with all its nonlinearities (nonlinear scale) and associated inaccuracies, the bridge circuit is linear (the mathematics describing its operation are based on simple ratios and proportions) and quite accurate. Given standard resistances of sufficient precision and a null detector device of sufficient sensitivity, resistance measurement accuracies of at least +/- 0.05% are attainable with a Wheatstone bridge. It is the preferred method of resistance measurement in calibration laboratories due to its high accuracy. There are many variations of the basic Wheatstone bridge circuit. Most DC bridges are used to measure resistance, while bridges powered by alternating current (AC) may be used to measure different electrical quantities like inductance, capacitance, and frequency. An interesting variation of the Wheatstone bridge is the Kelvin Double bridge, used for measuring very low resistances (typically less than 1/10 of an ohm). Its schematic diagram is as such: The low-value resistors are represented by thick-line symbols, and the wires connecting them to the voltage source (carrying high current) are likewise drawn thickly in the schematic. 
This oddly-configured bridge is perhaps best understood by beginning with a standard Wheatstone bridge set up for measuring low resistance, and evolving it step-by-step into its final form in an effort to overcome certain problems encountered in the standard Wheatstone configuration. If we were to use a standard Wheatstone bridge to measure low resistance, it would look something like this: When the null detector indicates zero voltage, we know that the bridge is balanced and that the ratios Ra/Rx and RM/RN are mathematically equal to each other. Knowing the values of Ra, RM, and RN therefore provides us with the necessary data to solve for Rx . . . almost. We have a problem, in that the connections and connecting wires between Ra and Rx possess resistance as well, and this stray resistance may be substantial compared to the low resistances of Ra and Rx. These stray resistances will drop substantial voltage, given the high current through them, and thus will affect the null detector’s indication and thus the balance of the bridge: Since we don’t want to measure these stray wire and connection resistances, but only measure Rx, we must find some way to connect the null detector so that it won’t be influenced by voltage dropped across them. If we connect the null detector and RM/RN ratio arms directly across the ends of Ra and Rx, this gets us closer to a practical solution: Now the top two Ewire voltage drops are of no effect to the null detector, and do not influence the accuracy of Rx’s resistance measurement. However, the two remaining Ewire voltage drops will cause problems, as the wire connecting the lower end of Ra with the top end of Rx is now shunting across those two voltage drops, and will conduct substantial current, introducing stray voltage drops along its own length as well.
Knowing that the left side of the null detector must connect to the two near ends of Ra and Rx in order to avoid introducing those Ewire voltage drops into the null detector’s loop, and that any direct wire connecting those ends of Ra and Rx will itself carry substantial current and create more stray voltage drops, the only way out of this predicament is to make the connecting path between the lower end of Ra and the upper end of Rx substantially resistive: We can manage the stray voltage drops between Ra and Rx by sizing the two new resistors so that their ratio from upper to lower is the same ratio as the two ratio arms on the other side of the null detector. This is why these resistors were labeled Rm and Rn in the original Kelvin Double bridge schematic: to signify their proportionality with RM and RN: With ratio Rm/Rn set equal to ratio RM/RN, rheostat arm resistor Ra is adjusted until the null detector indicates balance, and then we can say that Ra/Rx is equal to RM/RN, or simply find Rx by the following equation: The actual balance equation of the Kelvin Double bridge is as follows (Rwire is the resistance of the thick, connecting wire between the low-resistance standard Ra and the test resistance Rx): So long as the ratio between RM and RN is equal to the ratio between Rm and Rn, the balance equation is no more complex than that of a regular Wheatstone bridge, with Rx/Ra equal to RN/RM, because the last term in the equation will be zero, canceling the effects of all resistances except Rx, Ra, RM, and RN. In many Kelvin Double bridge circuits, RM=Rm and RN=Rn. However, the lower the resistances of Rm and Rn, the more sensitive the null detector will be, because there is less resistance in series with it. Increased detector sensitivity is good, because it allows smaller imbalances to be detected, and thus a finer degree of bridge balance to be attained. 
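The balance behavior described above can be verified numerically. The sketch below encodes one standard form of the Kelvin Double bridge balance equation consistent with the text's description (Rx = Ra·RN/RM plus a wire-resistance term that vanishes when RM/RN = Rm/Rn); all resistor values are hypothetical:

```python
# Kelvin Double bridge balance:
#   Rx = Ra*(RN/RM) + Rwire*(Rm/(Rm+Rn+Rwire))*(RN/RM - Rn/Rm)
# When the ratio Rm/Rn equals RM/RN, the last term is zero and the
# wire resistance drops out of the result entirely.

def kelvin_rx(ra, RM, RN, rm, rn, rwire):
    return ra * (RN / RM) + rwire * (rm / (rm + rn + rwire)) * (RN / RM - rn / rm)

# Matched ratios (RM/RN = Rm/Rn = 10): Rwire has no effect.
print(kelvin_rx(0.1, 1000.0, 100.0, 100.0, 10.0, rwire=0.0))
print(kelvin_rx(0.1, 1000.0, 100.0, 100.0, 10.0, rwire=0.05))

# Mismatched ratios: the wire resistance now skews the result.
print(kelvin_rx(0.1, 1000.0, 100.0, 100.0, 20.0, rwire=0.05))
```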
Therefore, some high-precision Kelvin Double bridges use Rm and Rn values as low as 1/100 of their ratio arm counterparts (RM and RN, respectively). Unfortunately, though, the lower the values of Rm and Rn, the more current they will carry, which will increase the effect of any junction resistances present where Rm and Rn connect to the ends of Ra and Rx. As you can see, high instrument accuracy demands that all error-producing factors be taken into account, and often the best that can be achieved is a compromise minimizing two or more different kinds of errors.

Review

• Bridge circuits rely on sensitive null-voltage meters to compare two voltages for equality.
• A Wheatstone bridge can be used to measure resistance by comparing the unknown resistor against precision resistors of known value, much like a laboratory scale measures an unknown weight by comparing it against known standard weights.
• A Kelvin Double bridge is a variant of the Wheatstone bridge used for measuring very low resistances. Its additional complexity over the basic Wheatstone design is necessary for avoiding errors otherwise incurred by stray resistances along the current path between the low-resistance standard and the resistance being measured.
Power in an electric circuit is the product (multiplication) of voltage and current, so any meter designed to measure power must account for both of these variables. A special meter movement designed especially for power measurement is called the dynamometer movement, and is similar to a D’Arsonval or Weston movement in that a lightweight coil of wire is attached to the pointer mechanism. However, unlike the D’Arsonval or Weston movement, another (stationary) coil is used instead of a permanent magnet to provide the magnetic field for the moving coil to react against. The moving coil is generally energized by the voltage in the circuit, while the stationary coil is generally energized by the current in the circuit. A dynamometer movement connected in a circuit looks something like this: The top (horizontal) coil of wire measures load current while the bottom (vertical) coil measures load voltage. Just like the lightweight moving coils of voltmeter movements, the (moving) voltage coil of a dynamometer is typically connected in series with a range resistor so that full load voltage is not applied to it. Likewise, the (stationary) current coil of a dynamometer may have precision shunt resistors to divide the load current around it. With custom-built dynamometer movements, shunt resistors are less likely to be needed because the stationary coil can be constructed with wire as heavy as needed without impacting meter response, unlike the moving coil which must be constructed of lightweight wire for minimum inertia.

Review

• Wattmeters are often designed around dynamometer meter movements, which employ both voltage and current coils to move a needle.

8.12: Creating Custom Calibration Resistances

Often in the course of designing and building electrical meter circuits, it is necessary to have precise resistances to obtain the desired range(s).
More often than not, the resistance values required cannot be found in any manufactured resistor unit and therefore must be built by you. One solution to this dilemma is to make your own resistor out of a length of special high-resistance wire. Usually, a small “bobbin” is used as a form for the resulting wire coil, and the coil is wound in such a way as to eliminate any electromagnetic effects: the desired wire length is folded in half, and the looped wire wound around the bobbin so that current through the wire winds clockwise around the bobbin for half the wire’s length, then counter-clockwise for the other half. This is known as a bifilar winding. Any magnetic fields generated by the current are thus canceled, and external magnetic fields cannot induce any voltage in the resistance wire coil: As you might imagine, this can be a labor-intensive process, especially if more than one resistor must be built! Another, easier solution to the dilemma of a custom resistance is to connect multiple fixed-value resistors together in series-parallel fashion to obtain the desired value of resistance. This solution, although potentially time-intensive in choosing the best resistor values for making the first resistance, can be duplicated much faster for creating multiple custom resistances of the same value: A disadvantage of either technique, though, is the fact that both result in a fixed resistance value. In a perfect world where meter movements never lose magnetic strength of their permanent magnets, where temperature and time have no effect on component resistances, and where wire connections maintain zero resistance forever, fixed-value resistors work quite well for establishing the ranges of precision instruments. However, in the real world, it is advantageous to have the ability to calibrate, or adjust, the instrument in the future. It makes sense, then, to use potentiometers (connected as rheostats, usually) as variable resistances for range resistors. 
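The series-parallel approach described above lends itself to a simple search. Here is a minimal sketch in Python, choosing the pair of stock resistors whose series combination comes closest to a desired value; the stock list of 1% values is purely hypothetical:

```python
from itertools import combinations_with_replacement

def best_series_pair(target, stock):
    """Pick the two stock resistors whose series sum comes closest to target."""
    return min(combinations_with_replacement(stock, 2),
               key=lambda pair: abs(sum(pair) - target))

# Hypothetical stock of available resistor values (ohms):
stock = [100, 150, 220, 330, 470, 680, 1000, 1500, 2200, 3300, 4700, 6800, 8200]
pair = best_series_pair(8335, stock)   # searching for something near 8.335 kOhm
```

The same idea extends to parallel combinations or three-resistor networks at the cost of a larger search, which is exactly why duplicating a network once found is so much faster than finding it in the first place.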
The potentiometer may be mounted inside the instrument case so that only a service technician has access to change its value, and the shaft may be locked in place with thread-fastening compound (ordinary nail polish works well for this!) so that it will not move if subjected to vibration. However, most potentiometers provide too large a resistance span over their mechanically-short movement range to allow for precise adjustment. Suppose you desired a resistance of 8.335 kΩ +/- 1 Ω, and wanted to use a 10 kΩ potentiometer (rheostat) to obtain it. A precision of 1 Ω out of a span of 10 kΩ is 1 part in 10,000, or 1/100 of a percent! Even with a 10-turn potentiometer, it will be very difficult to adjust it to any value this finely. Such a feat would be nearly impossible using a standard 3/4 turn potentiometer. So how can we get the resistance value we need and still have room for adjustment? The solution to this problem is to use a potentiometer as part of a larger resistance network which will create a limited adjustment range. Observe the following example: Here, the 1 kΩ potentiometer, connected as a rheostat, provides by itself a 1 kΩ span (a range of 0 Ω to 1 kΩ). Connected in series with an 8 kΩ resistor, this offsets the total resistance by 8,000 Ω, giving an adjustable range of 8 kΩ to 9 kΩ. Now, a precision of +/- 1 Ω represents 1 part in 1000, or 1/10 of a percent of potentiometer shaft motion. This is ten times better, in terms of adjustment sensitivity, than what we had using a 10 kΩ potentiometer. If we desire to make our adjustment capability even more precise—so we can set the resistance at 8.335 kΩ with even greater precision—we may reduce the span of the potentiometer by connecting a fixed-value resistor in parallel with it: Now, the calibration span of the resistor network is only 500 Ω, from 8 kΩ to 8.5 kΩ. This makes a precision of +/- 1 Ω equal to 1 part in 500, or 0.2 percent. 
The adjustment is now half as sensitive as it was before the addition of the parallel resistor, facilitating much easier calibration to the target value. The adjustment will not be linear, unfortunately (halfway on the potentiometer’s shaft position will not result in 8.25 kΩ total resistance, but rather 8.333 kΩ). Still, it is an improvement in terms of sensitivity, and it is a practical solution to our problem of building an adjustable resistance for a precision instrument!
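The arithmetic of this calibration network can be sketched as follows, using the values from the example (8 kΩ series resistor, 1 kΩ potentiometer, and an assumed 1 kΩ parallel resistor, which is what produces the 500 Ω span described):

```python
def parallel(r1, r2):
    """Two resistances in parallel."""
    return r1 * r2 / (r1 + r2)

def network_resistance(pot_fraction, r_series=8000.0, r_pot=1000.0, r_shunt=1000.0):
    """Series resistor plus a rheostat-connected pot shunted by a fixed resistor.
    pot_fraction: 0.0 (minimum travel) to 1.0 (maximum travel)."""
    return r_series + parallel(pot_fraction * r_pot, r_shunt)

network_resistance(0.0)   # 8000.0 ohms: bottom of the calibration span
network_resistance(1.0)   # 8500.0 ohms: top of the span
network_resistance(0.5)   # about 8333.3 ohms, not 8250: the nonlinearity at work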
• 9.1: Analog and Digital Signals A signal is any kind of physical quantity that conveys information. An analog signal is a kind of signal that is continuously variable, as opposed to having a limited number of steps along its range (called digital). Both analog and digital signals find application in modern electronics. • 9.2: Voltage Signal Systems DC voltage can be used as an analog signal to relay information from one location to another. A major disadvantage of voltage signaling is the possibility that the voltage at the indicator (voltmeter) will be less than the voltage at the signal source, due to line resistance and indicator current draw. This drop in voltage along the conductor length constitutes a measurement error from transmitter to indicator. • 9.3: Current Signal Systems • 9.4: Tachogenerators • 9.5: Thermocouples • 9.6: pH Measurement • 9.7: Strain Gauges If a strip of conductive metal is stretched, it will become skinnier and longer, both changes resulting in an increase of electrical resistance end-to-end. Conversely, if a strip of conductive metal is placed under compressive force (without buckling), it will broaden and shorten. If these stresses are kept within the elastic limit of the metal strip (so that the strip does not permanently deform), the strip can be used as a measuring element for physical force, the amount of applied force inferred from measuring its resistance. 09: Electrical Instrumentation Signals Instrumentation is a field of study and work centering on measurement and control of physical processes. These physical processes include pressure, temperature, flow rate, and chemical consistency. An instrument is a device that measures and/or acts to control any kind of physical process. Due to the fact that electrical quantities of voltage and current are easy to measure, manipulate, and transmit over long distances, they are widely used to represent such physical variables and transmit the information to remote locations.
A signal is any kind of physical quantity that conveys information. Audible speech is certainly a kind of signal, as it conveys the thoughts (information) of one person to another through the physical medium of sound. Hand gestures are signals, too, conveying information by means of light. This text is another kind of signal, interpreted by your English-trained mind as information about electric circuits. In this chapter, the word signal will be used primarily in reference to an electrical quantity of voltage or current that is used to represent or signify some other physical quantity. An analog signal is a kind of signal that is continuously variable, as opposed to having a limited number of steps along its range (called digital). A well-known example of analog vs. digital is that of clocks: analog being the type with pointers that slowly rotate around a circular scale, and digital being the type with decimal number displays or a “second-hand” that jerks rather than smoothly rotates. The analog clock has no physical limit to how finely it can display the time, as its “hands” move in a smooth, pauseless fashion. The digital clock, on the other hand, cannot convey any unit of time smaller than what its display will allow for. The type of clock with a “second-hand” that jerks in 1-second intervals is a digital device with a minimum resolution of one second. Both analog and digital signals find application in modern electronics, and the distinction between these two basic forms of information is something to be covered in much greater detail later in this book. For now, I will limit the scope of this discussion to analog signals, since the systems using them tend to be of simpler design. With many physical quantities, especially electrical, analog variability is easy to come by. If such a physical quantity is used as a signal medium, it will be able to represent variations of information with almost unlimited resolution.
In the early days of industrial instrumentation, compressed air was used as a signaling medium to convey information from measuring instruments to indicating and controlling devices located remotely. The amount of air pressure corresponded to the magnitude of whatever variable was being measured. Clean, dry air at approximately 20 pounds per square inch (PSI) was supplied from an air compressor through tubing to the measuring instrument and was then regulated by that instrument according to the quantity being measured to produce a corresponding output signal. For example, a pneumatic (air signal) level “transmitter” device set up to measure height of water (the “process variable”) in a storage tank would output a low air pressure when the tank was empty, a medium pressure when the tank was partially full, and a high pressure when the tank was completely full. The “water level indicator” (LI) is nothing more than a pressure gauge measuring the air pressure in the pneumatic signal line. This air pressure, being a signal, is in turn a representation of the water level in the tank. Any variation of level in the tank can be represented by an appropriate variation in the pressure of the pneumatic signal. Aside from certain practical limits imposed by the mechanics of air pressure devices, this pneumatic signal is infinitely variable, able to represent any degree of change in the water’s level, and is therefore analog in the truest sense of the word. Crude as it may appear, this kind of pneumatic signaling system formed the backbone of many industrial measurement and control systems around the world, and still sees use today due to its simplicity, safety, and reliability. Air pressure signals are easily transmitted through inexpensive tubes, easily measured (with mechanical pressure gauges), and are easily manipulated by mechanical devices using bellows, diaphragms, valves, and other pneumatic devices. 
Air pressure signals are not only useful for measuring physical processes, but for controlling them as well. With a large enough piston or diaphragm, a small air pressure signal can be used to generate a large mechanical force, which can be used to move a valve or other controlling device. Complete automatic control systems have been made using air pressure as the signal medium. They are simple, reliable, and relatively easy to understand. However, the practical limits for air pressure signal accuracy can be too limiting in some cases, especially when the compressed air is not clean and dry, and when the possibility for tubing leaks exists. With the advent of solid-state electronic amplifiers and other technological advances, electrical quantities of voltage and current became practical for use as analog instrument signaling media. Instead of using pneumatic pressure signals to relay information about the fullness of a water storage tank, electrical signals could relay that same information over thin wires (instead of tubing) and not require the support of such expensive equipment as air compressors to operate: Analog electronic signals are still the primary kinds of signals used in the instrumentation world today (January of 2001), but they are giving way to digital modes of communication in many applications (more on that subject later). Despite changes in technology, it is always good to have a thorough understanding of fundamental principles, so the following information will never really become obsolete. One important concept applied in many analog instrumentation signal systems is that of “live zero,” a standard way of scaling a signal so that an indication of 0 percent can be discriminated from the status of a “dead” system.
Take the pneumatic signal system as an example: if the signal pressure range for transmitter and indicator was designed to be 0 to 12 PSI, with 0 PSI representing 0 percent of process measurement and 12 PSI representing 100 percent, a received signal of 0 percent could be a legitimate reading of 0 percent measurement or it could mean that the system was malfunctioning (air compressor stopped, tubing broken, transmitter malfunctioning, etc.). With the 0 percent point represented by 0 PSI, there would be no easy way to distinguish one from the other. If, however, we were to scale the instruments (transmitter and indicator) to use a scale of 3 to 15 PSI, with 3 PSI representing 0 percent and 15 PSI representing 100 percent, any kind of a malfunction resulting in zero air pressure at the indicator would generate a reading of -25 percent (0 PSI), which is clearly a faulty value. The person looking at the indicator would then be able to immediately tell that something was wrong. Not all signal standards have been set up with live zero baselines, but the more robust signal standards (3-15 PSI, 4-20 mA) have, and for good reason. REVIEW • A signal is any kind of detectable quantity used to communicate information. • An analog signal is a signal that can be continuously, or infinitely, varied to represent any small amount of change. • Pneumatic, or air pressure, signals were once used predominantly in industrial instrumentation signal systems. This has been largely superseded by analog electrical signals such as voltage and current. • A live zero refers to an analog signal scale using a non-zero quantity to represent 0 percent of real-world measurement so that any system malfunction resulting in a natural “rest” state of zero signal pressure, voltage, or current can be immediately recognized.
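The live-zero scaling of the 3-15 PSI example works out as a simple linear mapping, sketched here in Python:

```python
def psi_to_percent(psi, lo=3.0, hi=15.0):
    """Convert a 3-15 PSI live-zero pneumatic signal to percent of span."""
    return (psi - lo) / (hi - lo) * 100.0

psi_to_percent(3.0)    # 0 percent: a legitimate empty-tank reading
psi_to_percent(15.0)   # 100 percent: full tank
psi_to_percent(0.0)    # -25 percent: an obviously faulty, "dead" signal
```

Any dead-system reading of 0 PSI lands well below the 0 percent point, which is the whole value of the live zero.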
The use of variable voltage for instrumentation signals seems a rather obvious option to explore. Let’s see how a voltage signal instrument might be used to measure and relay information about water tank level: The “transmitter” in this diagram contains its own precision regulated source of voltage, and the potentiometer setting is varied by the motion of a float inside the water tank following the water level. The “indicator” is nothing more than a voltmeter with a scale calibrated to read in some unit height of water (inches, feet, meters) instead of volts. As the water tank level changes, the float will move. As the float moves, the potentiometer wiper will correspondingly be moved, dividing a different proportion of the battery voltage to go across the two-conductor cable and on to the level indicator. As a result, the voltage received by the indicator will be representative of the level of water in the storage tank. This elementary transmitter/indicator system is reliable and easy to understand, but it has its limitations. Perhaps greatest is the fact that the system accuracy can be influenced by excessive cable resistance. Remember that real voltmeters draw small amounts of current, even though it is ideal for a voltmeter not to draw any current at all. This being the case, especially for the kind of heavy, rugged analog meter movement likely used for an industrial-quality system, there will be a small amount of current through the 2-conductor cable wires. The cable, having a small amount of resistance along its length, will consequently drop a small amount of voltage, leaving less voltage across the indicator’s leads than what is across the leads of the transmitter. This loss of voltage, however small, constitutes an error in measurement: Resistor symbols have been added to the wires of the cable to show what is happening in a real system. 
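The measurement error just described is an ordinary voltage divider formed by the cable resistance and the meter's own resistance. A minimal sketch, with hypothetical numbers (a 5 V signal, 10 Ω of round-trip cable, a 10 kΩ meter movement):

```python
def indicated_voltage(v_source, r_cable_total, r_meter):
    """Voltage seen at the indicator: divider of total cable resistance
    against the meter's internal resistance."""
    return v_source * r_meter / (r_cable_total + r_meter)

v = indicated_voltage(5.0, 10.0, 10_000.0)
error = 5.0 - v   # a few millivolts lost along the cable
```

With an infinite-resistance (zero-current) meter the error vanishes, which is why high-resistance or null-balance indicators mitigate the problem.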
Bear in mind that these resistances can be minimized with heavy-gauge wire (at additional expense) and/or their effects mitigated through the use of a high-resistance (null-balance?) voltmeter for an indicator (at additional complexity). Despite this inherent disadvantage, voltage signals are still used in many applications because of their extreme design simplicity. One common signal standard is 0-10 volts, meaning that a signal of 0 volts represents 0 percent of measurement, 10 volts represents 100 percent of measurement, 5 volts represents 50 percent of measurement, and so on. Instruments designed to output and/or accept this standard signal range are available for purchase from major manufacturers. A more common voltage range is 1-5 volts, which makes use of the “live zero” concept for circuit fault indication. Review • DC voltage can be used as an analog signal to relay information from one location to another. • A major disadvantage of voltage signaling is the possibility that the voltage at the indicator (voltmeter) will be less than the voltage at the signal source, due to line resistance and indicator current draw. This drop in voltage along the conductor length constitutes a measurement error from transmitter to indicator. 9.03: Current Signal Systems It is possible through the use of electronic amplifiers to design a circuit outputting a constant amount of current rather than a constant amount of voltage. This collection of components is collectively known as a current source, and its symbol looks like this: A current source generates as much or as little voltage as needed across its leads to produce a constant amount of current through it. This is just the opposite of a voltage source (an ideal battery), which will output as much or as little current as demanded by the external circuit in maintaining its output voltage constant. 
Following the “conventional flow” symbology typical of electronic devices, the arrow points against the direction of electron motion. Apologies for this confusing notation: another legacy of Benjamin Franklin’s false assumption of electron flow! Current sources can be built as variable devices, just like voltage sources, and they can be designed to produce very precise amounts of current. If a transmitter device were to be constructed with a variable current source instead of a variable voltage source, we could design an instrumentation signal system based on current instead of voltage: The internal workings of the transmitter’s current source need not be a concern at this point, only the fact that its output varies in response to changes in the float position, just like the potentiometer setup in the voltage signal system varied voltage output according to float position. Notice now how the indicator is an ammeter rather than a voltmeter (the scale calibrated in inches, feet, or meters of water in the tank, as always). Because the circuit is a series configuration (accounting for the cable resistances), current will be precisely equal through all components. With or without cable resistance, the current at the indicator is exactly the same as the current at the transmitter, and therefore there is no error incurred as there might be with a voltage signal system. This assurance of zero signal degradation is a decided advantage of current signal systems over voltage signal systems. The most common current signal standard in modern use is the 4 to 20 milliamp (4-20 mA) loop, with 4 milliamps representing 0 percent of measurement, 20 milliamps representing 100 percent, 12 milliamps representing 50 percent, and so on. A convenient feature of the 4-20 mA standard is its ease of signal conversion to 1-5 volt indicating instruments. 
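The 4-20 mA scaling, and its conversion to a voltage by a series precision resistor (250 Ω being the standard figure for a 1-5 volt range), can be sketched as:

```python
def ma_to_percent(ma, lo=4.0, hi=20.0):
    """4-20 mA live-zero current loop to percent of measurement."""
    return (ma - lo) / (hi - lo) * 100.0

def ma_to_volts(ma, r_ohms=250.0):
    """Voltage dropped across a series precision resistor (Ohm's Law)."""
    return ma * 1e-3 * r_ohms

ma_to_percent(12.0)   # 50 percent of measurement
ma_to_volts(4.0)      # 1 volt at 0 percent
ma_to_volts(20.0)     # 5 volts at 100 percent
```

Note the live zero at work here too: a dead loop reads 0 mA, which maps to -25 percent rather than a plausible 0 percent.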
A simple 250 ohm precision resistor connected in series with the circuit will produce 1 volt of drop at 4 milliamps, 5 volts of drop at 20 milliamps, etc: The current loop scale of 4-20 milliamps has not always been the standard for current instruments: for a while there was also a 10-50 milliamp standard, but that standard has since become obsolete. One reason for the eventual supremacy of the 4-20 milliamp loop was safety: with lower circuit voltages and lower current levels than in 10-50 mA system designs, there was less chance for personal shock injury and/or the generation of sparks capable of igniting flammable atmospheres in certain industrial environments. Review • A current source is a device (usually constructed of several electronic components) that outputs a constant amount of current through a circuit, much like a voltage source (ideal battery) outputting a constant amount of voltage to a circuit. • A current “loop” instrumentation circuit relies on the series circuit principle of current being equal through all components to ensure no signal error due to wiring resistance. • The most common analog current signal standard in modern use is the “4 to 20 milliamp current loop.” 9.04: Tachogenerators An electromechanical generator is a device capable of producing electrical power from mechanical energy, usually the turning of a shaft. When not connected to a load resistance, generators will generate voltage roughly proportional to shaft speed. With precise construction and design, generators can be built to produce very precise voltages for certain ranges of shaft speeds, thus making them well-suited as measurement devices for shaft speed in mechanical equipment. A generator specially designed and constructed for this use is called a tachometer or tachogenerator. Often, the word “tach” (pronounced “tack”) is used rather than the whole word.
By measuring the voltage produced by a tachogenerator, you can easily determine the rotational speed of whatever it’s mechanically attached to. One of the more common voltage signal ranges used with tachogenerators is 0 to 10 volts. Obviously, since a tachogenerator cannot produce voltage when it’s not turning, the zero cannot be “live” in this signal standard. Tachogenerators can be purchased with different “full-scale” (10 volt) speeds for different applications. Although a voltage divider could theoretically be used with a tachogenerator to extend the measurable speed range in the 0-10 volt scale, it is not advisable to significantly overspeed a precision instrument like this, or its life will be shortened. Tachogenerators can also indicate the direction of rotation by the polarity of the output voltage. When a permanent-magnet style DC generator’s rotational direction is reversed, the polarity of its output voltage will switch. In measurement and control systems where directional indication is needed, tachogenerators provide an easy way to determine that. Tachogenerators are frequently used to measure the speeds of electric motors, engines, and the equipment they power: conveyor belts, machine tools, mixers, fans, etc.
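The voltage-to-speed conversion is a straight proportion over the rated range. A minimal sketch, assuming a hypothetical tach rated at 10 volts = 1800 RPM:

```python
def tach_rpm(volts, full_scale_rpm, full_scale_volts=10.0):
    """Shaft speed from tachogenerator output voltage; the sign of the
    voltage (and thus of the result) indicates direction of rotation."""
    return volts / full_scale_volts * full_scale_rpm

tach_rpm(5.0, 1800)    # 900.0 RPM, forward
tach_rpm(-2.5, 1800)   # -450.0 RPM, reverse rotation
```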
An interesting phenomenon applied in the field of instrumentation is the Seebeck effect, which is the production of a small voltage across the length of a wire due to a difference in temperature along that wire. This effect is most easily observed and applied with a junction of two dissimilar metals in contact, each metal producing a different Seebeck voltage along its length, which translates to a voltage between the two (unjoined) wire ends. Most any pair of dissimilar metals will produce a measurable voltage when their junction is heated, some combinations of metals producing more voltage per degree of temperature than others: The Seebeck effect is fairly linear; that is, the voltage produced by a heated junction of two wires is directly proportional to the temperature. This means that the temperature of the metal wire junction can be determined by measuring the voltage produced. Thus, the Seebeck effect provides for us an electric method of temperature measurement. When a pair of dissimilar metals are joined together for the purpose of measuring temperature, the device formed is called a thermocouple. Thermocouples made for instrumentation use metals of high purity for an accurate temperature/voltage relationship (as linear and as predictable as possible). Seebeck voltages are quite small, in the tens of millivolts for most temperature ranges. This makes them somewhat difficult to measure accurately. Also, the fact that any junction between dissimilar metals will produce temperature-dependent voltage creates a problem when we try to connect the thermocouple to a voltmeter, completing a circuit: The second iron/copper junction formed by the connection between the thermocouple and the meter on the top wire will produce a temperature-dependent voltage opposed in polarity to the voltage produced at the measurement junction. 
This means that the voltage between the voltmeter’s copper leads will be a function of the difference in temperature between the two junctions, and not the temperature at the measurement junction alone. Even for thermocouple types where copper is not one of the dissimilar metals, the combination of the two metals joining the copper leads of the measuring instrument forms a junction equivalent to the measurement junction: This second junction is called the reference or cold junction, to distinguish it from the junction at the measuring end, and there is no way to avoid having one in a thermocouple circuit. In some applications, a differential temperature measurement between two points is required, and this inherent property of thermocouples can be exploited to make a very simple measurement system. However, in most applications the intent is to measure temperature at a single point only, and in these cases the second junction becomes a liability to function. Compensation for the voltage generated by the reference junction is typically performed by a special circuit designed to measure temperature there and produce a corresponding voltage to counter the reference junction’s effects. At this point you may wonder, “If we have to resort to some other form of temperature measurement just to overcome an idiosyncrasy with thermocouples, then why bother using thermocouples to measure temperature at all? Why not just use this other form of temperature measurement, whatever it may be, to do the job?” The answer is this: because the other forms of temperature measurement used for reference junction compensation are not as robust or versatile as a thermocouple junction, but do the job of measuring room temperature at the reference junction site quite well. 
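The opposing-junction arithmetic and the compensation scheme can be sketched with a linear approximation. Real instruments use standardized nonlinear voltage/temperature tables; the 41 µV/°C coefficient below is a rough type-K figure used here only as an assumption:

```python
SEEBECK_UV_PER_DEGC = 41.0  # assumed rough type-K coefficient; real tables are nonlinear

def junction_voltage_uv(t_meas_c, t_ref_c, s=SEEBECK_UV_PER_DEGC):
    """Net circuit voltage (microvolts): the measurement junction's voltage
    opposed by the reference junction's voltage, linear approximation."""
    return s * (t_meas_c - t_ref_c)

def compensated_temperature_c(v_uv, t_ref_c, s=SEEBECK_UV_PER_DEGC):
    """Recover the measurement-junction temperature, given a separate
    reading of the reference-junction temperature (the compensation
    circuit's job)."""
    return v_uv / s + t_ref_c
```

With both junctions at the same temperature the net voltage is zero, which is exactly why uncompensated readings track the *difference* in junction temperatures rather than the measurement temperature alone.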
For example, the thermocouple measurement junction may be inserted into the 1800 degree (F) flue of a foundry holding furnace, while the reference junction sits a hundred feet away in a metal cabinet at ambient temperature, having its temperature measured by a device that could never survive the heat or corrosive atmosphere of the furnace. The voltage produced by thermocouple junctions is strictly dependent upon temperature. Any current in a thermocouple circuit is a function of circuit resistance in opposition to this voltage (I=E/R). In other words, the relationship between temperature and Seebeck voltage is fixed, while the relationship between temperature and current is variable, depending on the total resistance of the circuit. With heavy enough thermocouple conductors, currents upwards of hundreds of amps can be generated from a single pair of thermocouple junctions! (I’ve actually seen this in a laboratory experiment, using heavy bars of copper and copper/nickel alloy to form the junctions and the circuit conductors.) For measurement purposes, the voltmeter used in a thermocouple circuit is designed to have a very high resistance so as to avoid any error-inducing voltage drops along the thermocouple wire. The problem of voltage drop along the conductor length is even more severe here than with the DC voltage signals discussed earlier, because here we only have a few millivolts of voltage produced by the junction. We simply cannot afford to have even a single millivolt of drop along the conductor lengths without incurring serious temperature measurement errors. Ideally, then, current in a thermocouple circuit is zero. Early thermocouple indicating instruments made use of null-balance potentiometric voltage measurement circuitry to measure the junction voltage. The early Leeds & Northrup “Speedomax” line of temperature indicator/recorders were a good example of this technology. 
More modern instruments use semiconductor amplifier circuits to allow the thermocouple’s voltage signal to drive an indication device with little or no current drawn in the circuit. Thermocouples, however, can be built from heavy-gauge wire for low resistance, and connected in such a way so as to generate very high currents for purposes other than temperature measurement. One such purpose is electric power generation. By connecting many thermocouples in series, alternating hot/cold temperatures with each junction, a device called a thermopile can be constructed to produce substantial amounts of voltage and current: With the left and right sets of junctions at the same temperature, the voltage at each junction will be equal and the opposing polarities would cancel to a final voltage of zero. However, if the left set of junctions were heated and the right set cooled, the voltage at each left junction would be greater than each right junction, resulting in a total output voltage equal to the sum of all junction pair differentials. In a thermopile, this is exactly how things are set up. A source of heat (combustion, strong radioactive substance, solar heat, etc.) is applied to one set of junctions, while the other set is bonded to a heat sink of some sort (air- or water-cooled). Interestingly enough, as electrons flow through an external load circuit connected to the thermopile, heat energy is transferred from the hot junctions to the cold junctions, demonstrating another thermo-electric phenomenon: the so-called Peltier Effect (electric current transferring heat energy). Another application for thermocouples is in the measurement of average temperature between several locations. The easiest way to do this is to connect several thermocouples in parallel with each other. The millivolt signal produced by each thermocouple will average out at the parallel junction point. 
The voltage differences between the junctions drop along the resistances of the thermocouple wires: Unfortunately, though, the accurate averaging of these Seebeck voltage potentials relies on each thermocouple’s wire resistances being equal. If the thermocouples are located at different places and their wires join in parallel at a single location, equal wire length will be unlikely. The thermocouple having the greatest wire length from point of measurement to parallel connection point will tend to have the greatest resistance, and will therefore have the least effect on the average voltage produced. To help compensate for this, additional resistance can be added to each of the parallel thermocouple circuit branches to make their respective resistances more equal. Without custom-sizing resistors for each branch (to make resistances precisely equal between all the thermocouples), it is acceptable to simply install resistors with equal values, significantly higher than the thermocouple wires’ resistances so that those wire resistances will have a much smaller impact on the total branch resistance. These resistors are called swamping resistors, because their relatively high values overshadow or “swamp” the resistances of the thermocouple wires themselves: Because thermocouple junctions produce such low voltages, it is imperative that wire connections be very clean and tight for accurate and reliable operation. Also, the location of the reference junction (the place where the dissimilar-metal thermocouple wires join to standard copper) must be kept close to the measuring instrument, to ensure that the instrument can accurately compensate for reference junction temperature. Despite these seemingly restrictive requirements, thermocouples remain one of the most robust and popular methods of industrial temperature measurement in modern use. 
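The averaging behavior of parallel thermocouples, and the effect of swamping resistors, can be sketched with Millman's theorem (the parallel junction point settles at the conductance-weighted average of the branch voltages, assuming an ideal voltmeter drawing no current). The junction voltages and wire resistances below are hypothetical:

```python
def node_voltage(emfs_mv, branch_res, r_swamp=0.0):
    """Millman's theorem: conductance-weighted average of branch EMFs,
    with an optional equal swamping resistance added to every branch."""
    conductances = [1.0 / (r + r_swamp) for r in branch_res]
    return sum(v * g for v, g in zip(emfs_mv, conductances)) / sum(conductances)

emfs = [10.0, 20.0, 30.0]    # millivolt signals from three junctions
wires = [1.0, 5.0, 25.0]     # unequal wire resistances, in ohms

node_voltage(emfs, wires)                   # skewed well away from the 20 mV true mean
node_voltage(emfs, wires, r_swamp=1000.0)   # swamping pulls it close to 20 mV
```

The high-value swamping resistors dominate each branch, so the unequal wire resistances contribute almost nothing to the weighting.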
Review
• The Seebeck Effect is the production of a voltage between two dissimilar, joined metals that is proportional to the temperature of that junction.
• In any thermocouple circuit, there are two equivalent junctions formed between dissimilar metals. The junction placed at the site of intended measurement is called the measurement junction, while the other (single or equivalent) junction is called the reference junction.
• Two thermocouple junctions can be connected in opposition to each other to generate a voltage signal proportional to differential temperature between the two junctions. A collection of junctions so connected for the purpose of generating electricity is called a thermopile.
• When electrons flow through the junctions of a thermopile, heat energy is transferred from one set of junctions to the other. This is known as the Peltier Effect.
• Multiple thermocouple junctions can be connected in parallel with each other to generate a voltage signal representing the average temperature between the junctions. “Swamping” resistors may be connected in series with each thermocouple to help maintain equality between the junctions, so the resultant voltage will be more representative of a true average temperature.
• It is imperative that current in a thermocouple circuit be kept as low as possible for good measurement accuracy. Also, all related wire connections should be clean and tight. Mere millivolts of drop at any place in the circuit will cause substantial measurement errors.
A very important measurement in many liquid chemical processes (industrial, pharmaceutical, manufacturing, food production, etc.) is that of pH: the measurement of hydrogen ion concentration in a liquid solution. A solution with a low pH value is called an “acid,” while one with a high pH is called a “caustic.” The common pH scale extends from 0 (strong acid) to 14 (strong caustic), with 7 in the middle representing pure water (neutral): pH is defined as follows: the lower-case letter “p” in pH stands for the negative common (base ten) logarithm, while the upper-case letter “H” stands for the element hydrogen. Thus, pH is a logarithmic measurement of the number of moles of hydrogen ions (H+) per liter of solution. Incidentally, the “p” prefix is also used with other types of chemical measurements where a logarithmic scale is desired, pCO2 (Carbon Dioxide) and pO2 (Oxygen) being two such examples. The logarithmic pH scale works like this: a solution with 10^-12 moles of H+ ions per liter has a pH of 12; a solution with 10^-3 moles of H+ ions per liter has a pH of 3. While very uncommon, there is such a thing as an acid with a pH measurement below 0 and a caustic with a pH above 14. Such solutions, understandably, are quite concentrated and extremely reactive. While pH can be measured by color changes in certain chemical powders (the “litmus strip” being a familiar example from high school chemistry classes), continuous process monitoring and control of pH requires a more sophisticated approach. The most common approach is the use of a specially-prepared electrode designed to allow hydrogen ions in the solution to migrate through a selective barrier, producing a measurable potential (voltage) difference proportional to the solution’s pH: The design and operational theory of pH electrodes is a very complex subject, explored only briefly here. What is important to understand is that these two electrodes generate a voltage directly proportional to the pH of the solution.
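The logarithmic definition of pH given above can be checked numerically. A minimal sketch; `ph_from_molarity` is an illustrative helper name, not a standard API.

```python
# pH as the negative base-ten logarithm of hydrogen ion molarity.
# `ph_from_molarity` is an illustrative helper name, not a standard API.
import math

def ph_from_molarity(moles_h_per_liter):
    return -math.log10(moles_h_per_liter)

print(round(ph_from_molarity(1e-12), 9))   # 12.0
print(round(ph_from_molarity(1e-3), 9))    # 3.0 — the text's two examples
print(round(ph_from_molarity(10 ** -5.3), 9))  # 5.3 — fractional pH values work too
```

Note how a nine-decade change in ion concentration (10^-12 versus 10^-3) collapses to a 9-unit change on the pH scale, which is why the seemingly modest 0-14 scale spans such an enormous range of chemical conditions.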
At a pH of 7 (neutral), the electrodes will produce 0 volts between them. At a low pH (acid) a voltage will be developed of one polarity, and at a high pH (caustic) a voltage will be developed of the opposite polarity. An unfortunate design constraint of pH electrodes is that one of them (called the measurement electrode) must be constructed of special glass to create the ion-selective barrier needed to screen out hydrogen ions from all the other ions floating around in the solution. This glass is chemically doped with lithium ions, which is what makes it react electrochemically to hydrogen ions. Of course, glass is not exactly what you would call a “conductor;” rather, it is an extremely good insulator. This presents a major problem if our intent is to measure voltage between the two electrodes. The circuit path from one electrode contact, through the glass barrier, through the solution, to the other electrode, and back through the other electrode’s contact, is one of extremely high resistance. The other electrode (called the reference electrode) contains a neutral (7 pH) buffer solution (usually potassium chloride) allowed to exchange ions with the process solution through a porous separator, forming a relatively low resistance connection to the test liquid. At first, one might be inclined to ask: why not just dip a metal wire into the solution to get an electrical connection to the liquid? The reason this will not work is because metals tend to be highly reactive in ionic solutions and can produce a significant voltage across the interface of metal-to-liquid contact. The use of a wet chemical interface with the measured solution is necessary to avoid creating such a voltage, which of course would be falsely interpreted by any measuring device as being indicative of pH. Here is an illustration of the measurement electrode’s construction.
Note the thin, lithium-doped glass membrane across which the pH voltage is generated: Here is an illustration of the reference electrode’s construction. The porous junction shown at the bottom of the electrode is where the potassium chloride buffer and process liquid interface with each other: The measurement electrode’s purpose is to generate the voltage used to measure the solution’s pH. This voltage appears across the thickness of the glass, placing the silver wire on one side of the voltage and the liquid solution on the other. The reference electrode’s purpose is to provide the stable, zero-voltage connection to the liquid solution so that a complete circuit can be made to measure the glass electrode’s voltage. While the reference electrode’s connection to the test liquid may only be a few kilo-ohms, the glass electrode’s resistance may range from ten to nine hundred mega-ohms, depending on electrode design! Being that any current in this circuit must travel through both electrodes’ resistances (and the resistance presented by the test liquid itself), these resistances are in series with each other and therefore add to make an even greater total. An ordinary analog or even digital voltmeter has much too low of an internal resistance to measure voltage in such a high-resistance circuit. The equivalent circuit diagram of a typical pH probe circuit illustrates the problem: Even a very small circuit current traveling through the high resistances of each component in the circuit (especially the measurement electrode’s glass membrane), will produce relatively substantial voltage drops across those resistances, seriously reducing the voltage seen by the meter. Making matters worse is the fact that the voltage differential generated by the measurement electrode is very small, in the millivolt range (ideally 59.16 millivolts per pH unit at room temperature). The meter used for this task must be very sensitive and have an extremely high input resistance. 
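To see why an ordinary voltmeter fails here, the probe circuit can be modeled as a voltage source behind a large series resistance, with the meter’s input resistance completing a voltage divider. The electrode voltage and circuit resistance below are assumed values; the text gives only the orders of magnitude involved.

```python
# The probe circuit modeled as a source behind a large series resistance,
# loaded by the meter's input resistance (a simple voltage divider).
# The electrode voltage and circuit resistance are assumed values;
# the text gives only orders of magnitude.
def meter_reading(v_source, r_circuit, r_meter):
    return v_source * r_meter / (r_circuit + r_meter)

v_probe = 0.05916 * (7 - 4)     # ≈ 177 mV for a pH-4 solution (ideal slope)
r_circuit = 400e6               # glass electrode dominates (assumed 400 MΩ)

dmm = meter_reading(v_probe, r_circuit, 10e6)     # ordinary 10 MΩ meter
amp = meter_reading(v_probe, r_circuit, 1e12)     # high-impedance amplifier
print(round(dmm * 1000, 1), round(amp * 1000, 1)) # meter sees ~2 % vs ~100 %
```

The ordinary meter’s 10 MΩ input is swamped by the hundreds of megohms in series with it, so nearly all of the electrode voltage drops inside the circuit instead of across the meter.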
The most common solution to this measurement problem is to use an amplified meter with an extremely high internal resistance to measure the electrode voltage, so as to draw as little current through the circuit as possible. With modern semiconductor components, a voltmeter with an input resistance of up to 10^17 Ω can be built with little difficulty. Another approach, seldom seen in contemporary use, is to use a potentiometric “null-balance” voltage measurement setup to measure this voltage without drawing any current from the circuit under test. If a technician desired to check the voltage output between a pair of pH electrodes, this would probably be the most practical means of doing so using only standard benchtop metering equipment: As usual, the precision voltage supply would be adjusted by the technician until the null detector registered zero, then the voltmeter connected in parallel with the supply would be viewed to obtain a voltage reading. With the detector “nulled” (registering exactly zero), there should be zero current in the pH electrode circuit, and therefore no voltage dropped across the resistances of either electrode, giving the real electrode voltage at the voltmeter terminals. Wiring requirements for pH electrodes tend to be even more severe than thermocouple wiring, demanding very clean connections and short distances of wire (10 yards or less, even with gold-plated contacts and shielded cable) for accurate and reliable measurement. As with thermocouples, however, the disadvantages of electrode pH measurement are offset by the advantages: good accuracy and relative technical simplicity. Few instrumentation technologies inspire the awe and mystique commanded by pH measurement, because it is so widely misunderstood and difficult to troubleshoot.
Without elaborating on the exact chemistry of pH measurement, a few words of wisdom can be given here about pH measurement systems:
• All pH electrodes have a finite life, and that lifespan depends greatly on the type and severity of service. In some applications, a pH electrode life of one month may be considered long, and in other applications the same electrode(s) may be expected to last for over a year.
• Because the glass (measurement) electrode is responsible for generating the pH-proportional voltage, it is the one to be considered suspect if the measurement system fails to generate sufficient voltage change for a given change in pH (approximately 59 millivolts per pH unit), or fails to respond quickly enough to a fast change in test liquid pH.
• If a pH measurement system “drifts,” creating offset errors, the problem likely lies with the reference electrode, which is supposed to provide a zero-voltage connection with the measured solution.
• Because pH measurement is a logarithmic representation of ion concentration, there is an incredible range of process conditions represented in the seemingly simple 0-14 pH scale. Also, due to the nonlinear nature of the logarithmic scale, a change of 1 pH at the top end (say, from 12 to 13 pH) does not represent the same quantity of chemical activity change as a change of 1 pH at the bottom end (say, from 2 to 3 pH). Control system engineers and technicians must be aware of this dynamic if there is to be any hope of controlling process pH at a stable value.
• The following conditions are hazardous to measurement (glass) electrodes: high temperatures, extreme pH levels (either acidic or alkaline), high ionic concentration in the liquid, abrasion, hydrofluoric acid in the liquid (HF acid dissolves glass!), and any kind of material coating on the surface of the glass.
• Temperature changes in the measured liquid affect both the response of the measurement electrode to a given pH level (ideally at 59 mV per pH unit), and the actual pH of the liquid. Temperature measurement devices can be inserted into the liquid, and the signals from those devices used to compensate for the effect of temperature on pH measurement, but this will only compensate for the measurement electrode’s mV/pH response, not the actual pH change of the process liquid!
Advances are still being made in the field of pH measurement, some of which hold great promise for overcoming traditional limitations of pH electrodes. One such technology uses a device called a field-effect transistor to electrostatically measure the voltage produced by an ion-permeable membrane rather than measure the voltage with an actual voltmeter circuit. While this technology harbors limitations of its own, it is at least a pioneering concept, and may prove more practical at a later date.
Review
• pH is a representation of hydrogen ion activity in a liquid. It is the negative logarithm of the amount of hydrogen ions (in moles) per liter of liquid. Thus: 10^-11 moles of hydrogen ions in 1 liter of liquid = 11 pH. 10^-5.3 moles of hydrogen ions in 1 liter of liquid = 5.3 pH.
• The basic pH scale extends from 0 (strong acid) to 7 (neutral, pure water) to 14 (strong caustic). Chemical solutions with pH levels below zero and above 14 are possible, but rare.
• pH can be measured by measuring the voltage produced between two special electrodes immersed in the liquid solution.
• One electrode, made of a special glass, is called the measurement electrode. Its job is to generate a small voltage proportional to pH (ideally 59.16 mV per pH unit).
• The other electrode (called the reference electrode) uses a porous junction between the measured liquid and a stable, neutral pH buffer solution (usually potassium chloride) to create a zero-voltage electrical connection to the liquid.
This provides a point of continuity for a complete circuit so that the voltage produced across the thickness of the glass in the measurement electrode can be measured by an external voltmeter. • The extremely high resistance of the measurement electrode’s glass membrane mandates the use of a voltmeter with extremely high internal resistance, or a null-balance voltmeter, to measure the voltage.
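The ideal 59.16 mV-per-pH figure quoted above is the Nernstian slope at 25 °C, and it grows in proportion to absolute temperature, which is what electrode temperature compensation corrects for. A sketch of the idea; the zero-volts-at-pH-7 and polarity conventions are assumptions for illustration only.

```python
# Sketch of temperature-compensated pH from electrode voltage.
# Convention assumed for illustration: 0 V at pH 7, positive voltage
# for acidic solutions. Only the 59.16 mV/pH figure comes from the text.
import math

R, F = 8.314, 96485.0          # gas constant (J/mol·K), Faraday constant (C/mol)

def slope_v_per_ph(temp_c):
    # Nernstian slope: ln(10) * R * T / F volts per pH unit
    return math.log(10) * R * (temp_c + 273.15) / F

def ph_from_voltage(v_electrode, temp_c):
    return 7.0 - v_electrode / slope_v_per_ph(temp_c)

print(round(slope_v_per_ph(25.0) * 1000, 2))   # 59.16 mV per pH unit at 25 °C
```

As the text cautions, this corrects only the electrode’s mV/pH response; it cannot account for the actual pH of the process liquid changing with temperature.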
What is a Strain Gauge?
If a strip of conductive metal is stretched, it becomes thinner and longer, increasing its electrical resistance; if it is compressed (without buckling), it broadens and shortens, decreasing its resistance. A device that exploits this effect to measure mechanical strain is called a strain gauge. Strain gauges are frequently used in mechanical engineering research and development to measure the stresses generated by machinery. Aircraft component testing is one area of application, with tiny strain-gauge strips glued to structural members, linkages, and any other critical component of an airframe to measure stress. Most strain gauges are smaller than a postage stamp, and they look something like this: A strain gauge’s conductors are very thin: if made of round wire, about 1/1000 inch in diameter. Alternatively, strain gauge conductors may be thin strips of metallic film deposited on a nonconducting substrate material called the carrier. The latter form of strain gauge is represented in the previous illustration. The name “bonded gauge” is given to strain gauges that are glued to a larger structure under stress (called the test specimen). The task of bonding strain gauges to test specimens may appear to be very simple, but it is not. “Gauging” is a craft in its own right, absolutely essential for obtaining accurate, stable strain measurements. It is also possible to use an unmounted gauge wire stretched between two mechanical points to measure tension, but this technique has its limitations.
Strain Gauge Resistance
Typical strain gauge resistances range from 30 Ω to 3 kΩ (unstressed). This resistance may change only a fraction of a percent for the full force range of the gauge, given the limitations imposed by the elastic limits of the gauge material and of the test specimen. Forces great enough to induce greater resistance changes would permanently deform the test specimen and/or the gauge conductors themselves, thus ruining the gauge as a measurement device. Thus, in order to use the strain gauge as a practical instrument, we must measure extremely small changes in resistance with high accuracy. Such demanding precision calls for a bridge measurement circuit.
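The “fraction of a percent” claim can be made concrete with the standard gauge-factor relation ΔR/R = GF × ε. The gauge factor (about 2 for metal-foil gauges) and the strain level below are illustrative assumptions, not figures from the text.

```python
# Fractional resistance change via the gauge-factor relation dR/R = GF * strain.
# The gauge factor (~2 for metal foil) and the strain level are assumed
# illustration values.
gauge_factor = 2.0
r_unstrained = 120.0      # ohms, a common nominal gauge resistance
strain = 500e-6           # 500 microstrain, well within elastic limits

delta_r = gauge_factor * strain * r_unstrained
percent = 100.0 * delta_r / r_unstrained
print(round(delta_r, 3), round(percent, 3))   # ≈ 0.12 Ω, i.e. 0.1 %
```

A change of roughly a tenth of an ohm out of 120 Ω is far too small to resolve with an ordinary ohmmeter, which is exactly why the bridge circuit of the next section is needed.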
Bridge Measurement Circuit
Unlike the Wheatstone bridge shown in the last chapter using a null-balance detector and a human operator to maintain a state of balance, a strain gauge bridge circuit indicates measured strain by the degree of imbalance, and uses a precision voltmeter in the center of the bridge to provide an accurate measurement of that imbalance: Typically, the rheostat arm of the bridge (R2 in the diagram) is set at a value equal to the strain gauge resistance with no force applied. The two ratio arms of the bridge (R1 and R3) are set equal to each other. Thus, with no force applied to the strain gauge, the bridge will be symmetrically balanced and the voltmeter will indicate zero volts, representing zero force on the strain gauge. As the strain gauge is either compressed or tensed, its resistance will decrease or increase, respectively, thus unbalancing the bridge and producing an indication at the voltmeter. This arrangement, with a single element of the bridge changing resistance in response to the measured variable (mechanical force), is known as a quarter-bridge circuit. As the distance between the strain gauge and the three other resistances in the bridge circuit may be substantial, wire resistance has a significant impact on the operation of the circuit. To illustrate the effects of wire resistance, I’ll show the same schematic diagram, but add two resistor symbols in series with the strain gauge to represent the wires:
Wire Resistances
The strain gauge’s resistance (Rgauge) is not the only resistance being measured: the wire resistances Rwire1 and Rwire2, being in series with Rgauge, also contribute to the resistance of the lower half of the rheostat arm of the bridge, and consequently contribute to the voltmeter’s indication. This, of course, will be falsely interpreted by the meter as physical strain on the gauge.
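The effect of lead-wire resistance on the quarter-bridge can be sketched as the difference of two voltage dividers feeding the voltmeter, with the bridge arranged as described above (R1 and R3 as ratio arms, the gauge below R2 in the rheostat arm). All resistance values here are hypothetical.

```python
# Quarter-bridge output as the difference of two voltage dividers.
# Topology per the description above: ratio arms R1 and R3 on one side,
# R2 above the gauge on the other. All values hypothetical.
def bridge_out(v_ex, r1, r2, r3, gauge_arm):
    return v_ex * (gauge_arm / (r2 + gauge_arm) - r3 / (r1 + r3))

v_ex = 10.0
r1 = r2 = r3 = 120.0
r_gauge = 120.0                  # unstrained gauge resistance
r_wire = 0.5                     # each lead wire, ohms

balanced = bridge_out(v_ex, r1, r2, r3, r_gauge)
with_wires = bridge_out(v_ex, r1, r2, r3, r_gauge + 2 * r_wire)
print(balanced, round(with_wires * 1000, 1))  # wires alone unbalance the bridge
```

With no wire resistance the output is exactly zero; adding just half an ohm per lead produces tens of millivolts of false offset, which the meter cannot distinguish from genuine strain.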
While this effect cannot be completely eliminated in this configuration, it can be minimized with the addition of a third wire, connecting the right side of the voltmeter directly to the upper wire of the strain gauge: Because the third wire carries practically no current (due to the voltmeter’s extremely high internal resistance), its resistance will not drop any substantial amount of voltage. Notice how the resistance of the top wire (Rwire1) has been “bypassed” now that the voltmeter connects directly to the top terminal of the strain gauge, leaving only the lower wire’s resistance (Rwire2) to contribute any stray resistance in series with the gauge. Not a perfect solution, of course, but twice as good as the last circuit! There is a way, however, to reduce wire resistance error far beyond the method just described, and also help mitigate another kind of measurement error due to temperature.
Resistance Change and Temperature
An unfortunate characteristic of strain gauges is that of resistance change with changes in temperature. This is a property common to all conductors, some more than others. Thus, our quarter-bridge circuit as shown (either with two or with three wires connecting the gauge to the bridge) works as a thermometer just as well as it does a strain indicator. If all we want to do is measure strain, this is not good. We can transcend this problem, however, by using a “dummy” strain gauge in place of R2, so that both elements of the rheostat arm will change resistance in the same proportion when temperature changes, thus canceling the effects of temperature change: Resistors R1 and R3 are of equal resistance value, and the strain gauges are identical to one another. With no applied force, the bridge should be in a perfectly balanced condition and the voltmeter should register 0 volts. Both gauges are bonded to the same test specimen, but only one is placed in a position and orientation so as to be exposed to physical strain (the active gauge).
The other gauge is isolated from all mechanical stress, and acts merely as a temperature compensation device (the “dummy” gauge). If the temperature changes, both gauge resistances will change by the same percentage, and the bridge’s state of balance will remain unaffected. Only a differential resistance (difference of resistance between the two strain gauges) produced by physical force on the test specimen can alter the balance of the bridge. Wire resistance doesn’t impact the accuracy of the circuit as much as before, because the wires connecting both strain gauges to the bridge are approximately equal length. Therefore, the upper and lower sections of the bridge’s rheostat arm contain approximately the same amount of stray resistance, and their effects tend to cancel:
Quarter-Bridge and Half-Bridge Circuits
Even though there are now two strain gauges in the bridge circuit, only one is responsive to mechanical strain, and thus we would still refer to this arrangement as a quarter-bridge. However, if we were to take the upper strain gauge and position it so that it is exposed to the opposite force as the lower gauge (i.e. when the upper gauge is compressed, the lower gauge will be stretched, and vice versa), we will have both gauges responding to strain, and the bridge will be more responsive to applied force. This utilization is known as a half-bridge. Since both strain gauges will either increase or decrease resistance by the same proportion in response to changes in temperature, the effects of temperature change remain canceled and the circuit will suffer minimal temperature-induced measurement error: An example of how a pair of strain gauges may be bonded to a test specimen so as to yield this effect is illustrated here: With no force applied to the test specimen, both strain gauges have equal resistance and the bridge circuit is balanced.
However, when a downward force is applied to the free end of the specimen, it will bend downward, stretching gauge #1 and compressing gauge #2 at the same time:
Full-Bridge Circuits
In applications where such complementary pairs of strain gauges can be bonded to the test specimen, it may be advantageous to make all four elements of the bridge “active” for even greater sensitivity. This is called a full-bridge circuit: Both half-bridge and full-bridge configurations grant greater sensitivity over the quarter-bridge circuit, but often it is not possible to bond complementary pairs of strain gauges to the test specimen. Thus, the quarter-bridge circuit is frequently used in strain measurement systems. When possible, the full-bridge configuration is the best to use. This is true not only because it is more sensitive than the others, but because it is linear while the others are not. Quarter-bridge and half-bridge circuits provide an output (imbalance) signal that is only approximately proportional to applied strain gauge force. Linearity, or proportionality, of these bridge circuits is best when the amount of resistance change due to applied force is very small compared to the nominal resistance of the gauge(s). With a full-bridge, however, the output voltage is directly proportional to applied force, with no approximation (provided that the change in resistance caused by the applied force is equal for all four strain gauges!). Unlike the Wheatstone and Kelvin bridges, which provide measurement at a condition of perfect balance and therefore function irrespective of source voltage, the amount of source (or “excitation”) voltage matters in an unbalanced bridge like this. Therefore, strain gauge bridges are rated in millivolts of imbalance produced per volt of excitation, per unit measure of force. A typical example for a strain gauge of the type used for measuring force in industrial environments is 15 mV/V at 1000 pounds.
That is, at exactly 1000 pounds applied force (either compressive or tensile), the bridge will be unbalanced by 15 millivolts for every volt of excitation voltage. Again, such a figure is precise if the bridge circuit is full-active (four active strain gauges, one in each arm of the bridge), but only approximate for half-bridge and quarter-bridge arrangements. Strain gauges may be purchased as complete units, with both strain gauge elements and bridge resistors in one housing, sealed and encapsulated for protection from the elements, and equipped with mechanical fastening points for attachment to a machine or structure. Such a package is typically called a load cell. Like many of the other topics addressed in this chapter, strain gauge systems can become quite complex, and a full dissertation on strain gauges would be beyond the scope of this book.
Review
• A strain gauge is a thin strip of metal designed to measure mechanical load by changing resistance when stressed (stretched or compressed within its elastic limit).
• Strain gauge resistance changes are typically measured in a bridge circuit, to allow for precise measurement of the small resistance changes, and to provide compensation for resistance variations due to temperature.
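The 15 mV/V-at-1000-pound rating discussed above converts to an expected output voltage like this. Only that rating comes from the text; the excitation voltage and applied forces are assumed, and ideal linearity is taken (strictly true only for the full-bridge, as noted).

```python
# Converting a mV/V load-cell rating into expected bridge output.
# The 15 mV/V at 1000 lb figure is from the text; excitation voltage
# and applied forces are assumed, and ideal (full-bridge) linearity
# is taken for the sketch.
rating_mv_per_v = 15.0
full_scale_lb = 1000.0

def output_mv(excitation_v, force_lb):
    return rating_mv_per_v * excitation_v * (force_lb / full_scale_lb)

print(output_mv(10.0, 1000.0))   # 150.0 mV at full load with 10 V excitation
print(output_mv(10.0, 250.0))    # 37.5 mV at quarter load
```

Because the output scales with excitation, a stable, well-regulated excitation supply matters just as much as the bridge itself.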
• 10.1: What is Network Analysis? Generally speaking, network analysis is any structured technique used to mathematically analyze a circuit (a “network” of interconnected components). Quite often the technician or engineer will encounter circuits containing multiple sources of power or component configurations which defy simplification by series/parallel analysis techniques. In those cases, he or she will be forced to use other means. This chapter presents a few techniques useful in analyzing such complex circuits.
• 10.2: Branch Current Method The first and most straightforward network analysis technique is called the Branch Current Method. In this method, we assume directions of currents in a network, then write equations describing their relationships to each other through Kirchhoff’s and Ohm’s Laws. Once we have one equation for every unknown current, we can solve the simultaneous equations and determine all currents, and therefore all voltage drops in the network.
• 10.3: Mesh Current Method and Analysis The Mesh Current Method, also known as the Loop Current Method, is quite similar to the Branch Current method in that it uses simultaneous equations, Kirchhoff’s Voltage Law, and Ohm’s Law to determine unknown currents in a network. It differs from the Branch Current method in that it does not use Kirchhoff’s Current Law, and it is usually able to solve a circuit with fewer unknown variables and fewer simultaneous equations, which is especially nice if you’re forced to solve without a calculator.
• 10.4: Node Voltage Method The node voltage method of analysis solves for unknown voltages at circuit nodes in terms of a system of KCL equations. This analysis looks strange because it involves replacing voltage sources with equivalent current sources. Also, resistor values in ohms are replaced by equivalent conductances in siemens, G = 1/R. The siemens (S) is the unit of conductance, having replaced the obsolete mho unit; in any event, S = Ω^-1 = mho.
• 10.5: Introduction to Network Theorems
• 10.6: Millman’s Theorem In Millman’s Theorem, the circuit is re-drawn as a parallel network of branches, each branch containing a resistor or series battery/resistor combination. Millman’s Theorem is applicable only to those circuits which can be re-drawn accordingly.
• 10.7: Superposition Theorem Superposition theorem is one of those strokes of genius that takes a complex subject and simplifies it in a way that makes perfect sense. A theorem like Millman’s certainly works well, but it is not quite obvious why it works so well. Superposition, on the other hand, is obvious.
• 10.8: Thevenin’s Theorem Thevenin’s Theorem states that it is possible to simplify any linear circuit, no matter how complex, to an equivalent circuit with just a single voltage source and series resistance connected to a load. The qualification of “linear” is identical to that found in the Superposition Theorem, where all the underlying equations must be linear (no exponents or roots). If we’re dealing with passive components (such as resistors, and later, inductors and capacitors), this is true. However, there are some components which are nonlinear, and the theorem does not directly apply to them.
• 10.9: Norton’s Theorem
• 10.10: Thevenin-Norton Equivalencies Since Thevenin’s and Norton’s Theorems are two equally valid methods of reducing a complex network down to something simpler to analyze, there must be some way to convert a Thevenin equivalent circuit to a Norton equivalent circuit, and vice versa (just what you were dying to know, right?). Well, the procedure is very simple.
• 10.11: Millman’s Theorem Revisited
• 10.12: Maximum Power Transfer Theorem The Maximum Power Transfer Theorem is not so much a means of analysis as it is an aid to system design. Simply stated, the maximum amount of power will be dissipated by a load resistance when that load resistance is equal to the Thevenin/Norton resistance of the network supplying the power.
If the load resistance is lower or higher than the Thevenin/Norton resistance of the source network, its dissipated power will be less than the maximum.
• 10.13: Δ-Y and Y-Δ Conversions
10: DC Network Analysis
To illustrate how even a simple circuit can defy analysis by breakdown into series and parallel portions, start with this series-parallel circuit: To analyze the above circuit, one would first find the equivalent of R2 and R3 in parallel, then add R1 in series to arrive at a total resistance. Then, taking the voltage of battery B1 with that total circuit resistance, the total current could be calculated through the use of Ohm’s Law (I=E/R), then that current figure used to calculate voltage drops in the circuit. All in all, a fairly simple procedure. However, the addition of just one more battery could change all of that: Resistors R2 and R3 are no longer in parallel with each other, because B2 has been inserted into R3’s branch of the circuit. Upon closer inspection, it appears there are no two resistors in this circuit directly in series or parallel with each other. This is the crux of our problem: in series-parallel analysis, we started off by identifying sets of resistors that were directly in series or parallel with each other, reducing them to single equivalent resistances. If there are no resistors in a simple series or parallel configuration with each other, then what can we do? It should be clear that this seemingly simple circuit, with only three resistors, is impossible to reduce as a combination of simple series and simple parallel sections: it is something different altogether. However, this is not the only type of circuit defying series/parallel analysis: Here we have a bridge circuit, and for the sake of example we will suppose that it is not balanced (ratio R1/R4 not equal to ratio R2/R5). If it were balanced, there would be zero current through R3, and it could be approached as a series/parallel combination circuit (R1—R4 // R2—R5).
However, any current through R3 makes a series/parallel analysis impossible. R1 is not in series with R4 because there’s another path for electrons to flow through R3. Neither is R2 in series with R5 for the same reason. Likewise, R1 is not in parallel with R2 because R3 is separating their bottom leads. Neither is R4 in parallel with R5. Aaarrggghhhh! Although it might not be apparent at this point, the heart of the problem is the existence of multiple unknown quantities. At least in a series/parallel combination circuit, there was a way to find total resistance and total voltage, leaving total current as a single unknown value to calculate (and then that current was used to satisfy previously unknown variables in the reduction process until the entire circuit could be analyzed). With these problems, more than one parameter (variable) is unknown at the most basic level of circuit simplification. With the two-battery circuit, there is no way to arrive at a value for “total resistance,” because there are two sources of power to provide voltage and current (we would need two “total” resistances in order to proceed with any Ohm’s Law calculations). With the unbalanced bridge circuit, there is such a thing as total resistance across the one battery (paving the way for a calculation of total current), but that total current immediately splits up into unknown proportions at each end of the bridge, so no further Ohm’s Law calculations for voltage (E=IR) can be carried out. So what can we do when we’re faced with multiple unknowns in a circuit? The answer is initially found in a mathematical process known as simultaneous equations or systems of equations, whereby multiple unknown variables are solved by relating them to each other in multiple equations.
In a scenario with only one unknown (such as every Ohm’s Law equation we’ve dealt with thus far), there only needs to be a single equation to solve for the single unknown: However, when we’re solving for multiple unknown values, we need to have the same number of equations as we have unknowns in order to reach a solution. There are several methods of solving simultaneous equations, all rather intimidating and all too complex for explanation in this chapter. However, many scientific and programmable calculators are able to solve for simultaneous unknowns, so it is recommended to use such a calculator when first learning how to analyze these circuits. This is not as scary as it may seem at first. Trust me! Later on we’ll see that some clever people have found tricks to avoid having to use simultaneous equations on these types of circuits. We call these tricks network theorems, and we will explore a few later in this chapter. Review • Some circuit configurations (“networks”) cannot be solved by reduction according to series/parallel circuit rules, due to multiple unknown values. • Mathematical techniques to solve for multiple unknowns (called “simultaneous equations” or “systems”) can be applied to basic Laws of circuits to solve networks.
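The idea that two unknowns demand two independent equations can be sketched with a toy example. The numbers here are arbitrary, chosen only to illustrate the substitution method a calculator performs internally:

```python
# Two unknowns require two independent equations. These values are
# illustrative only -- they do not come from any circuit in this chapter:
#   x + y = 10
#   2x - y = 5
# Substitution: from the first equation, y = 10 - x. Putting that into
# the second gives 2x - (10 - x) = 5, so 3x = 15.
x = (5 + 10) / 3
y = 10 - x
print(x, y)   # 5.0 5.0

# One equation alone could not have pinned down x and y: any pair
# satisfying x + y = 10 would do. Check that both equations hold:
assert x + y == 10
assert 2 * x - y == 5
```

Scientific and programmable calculators with a simultaneous-equation solver carry out this same kind of elimination internally.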
10.02: Branch Current Method
Let’s use this circuit to illustrate the method: The first step is to choose a node (junction of wires) in the circuit to use as a point of reference for our unknown currents. I’ll choose the node joining the right of R1, the top of R2, and the left of R3. At this node, guess which directions the three wires’ currents take, labeling the three currents as I1, I2, and I3, respectively. Bear in mind that these directions of current are speculative at this point. Fortunately, if it turns out that any of our guesses were wrong, we will know when we mathematically solve for the currents (any “wrong” current directions will show up as negative numbers in our solution). Kirchhoff’s Current Law (KCL) tells us that the algebraic sum of currents entering and exiting a node must equal zero, so we can relate these three currents (I1, I2, and I3) to each other in a single equation. For the sake of convention, I’ll denote any current entering the node as positive in sign, and any current exiting the node as negative in sign: The next step is to label all voltage drop polarities across resistors according to the assumed directions of the currents. Remember that the “upstream” end of a resistor will always be negative, and the “downstream” end of a resistor positive with respect to each other, since electrons are negatively charged: The battery polarities, of course, remain as they were according to their symbology (short end negative, long end positive). It is OK if the polarity of a resistor’s voltage drop doesn’t match with the polarity of the nearest battery, so long as the resistor voltage polarity is correctly based on the assumed direction of current through it. In some cases we may discover that current will be forced backwards through a battery, causing this very effect. The important thing to remember here is to base all your resistor polarities and subsequent calculations on the directions of current(s) initially assumed. 
As stated earlier, if your assumption happens to be incorrect, it will be apparent once the equations have been solved (by means of a negative solution). The magnitude of the solution, however, will still be correct. Kirchhoff’s Voltage Law (KVL) tells us that the algebraic sum of all voltages in a loop must equal zero, so we can create more equations with current terms (I1, I2, and I3) for our simultaneous equations. To obtain a KVL equation, we must tally voltage drops in a loop of the circuit, as though we were measuring with a real voltmeter. I’ll choose to trace the left loop of this circuit first, starting from the upper-left corner and moving counter-clockwise (the choice of starting points and directions is arbitrary). The result will look like this: Having completed our trace of the left loop, we add these voltage indications together for a sum of zero: Of course, we don’t yet know what the voltage is across R1 or R2, so we can’t insert those values into the equation as numerical figures at this point. However, we do know that all three voltages must algebraically add to zero, so the equation is true. We can go a step further and express the unknown voltages as the product of the corresponding unknown currents (I1 and I2) and their respective resistors, following Ohm’s Law (E=IR), as well as eliminate the 0 term: Since we know what the values of all the resistors are in ohms, we can just substitute those figures into the equation to simplify things a bit: You might be wondering why we went through all the trouble of manipulating this equation from its initial form (-28 + ER2 + ER1). After all, the last two terms are still unknown, so what advantage is there to expressing them in terms of unknown voltages or as unknown currents (multiplied by resistances)? The purpose in doing this is to get the KVL equation expressed using the same unknown variables as the KCL equation, for this is a necessary requirement for any simultaneous equation solution method. 
To solve for three unknown currents (I1, I2, and I3), we must have three equations relating these three currents (not voltages!) together. Applying the same steps to the right loop of the circuit (starting at the chosen node and moving counter-clockwise), we get another KVL equation: Knowing now that the voltage across each resistor can be and should be expressed as the product of the corresponding current and the (known) resistance of each resistor, we can re-write the equation as such: Now we have a mathematical system of three equations (one KCL equation and two KVL equations) and three unknowns: For some methods of solution (especially any method involving a calculator), it is helpful to express each unknown term in each equation, with any constant value to the right of the equal sign, and with any “unity” terms expressed with an explicit coefficient of 1. Re-writing the equations again, we have: Using whatever solution techniques are available to us, we should arrive at a solution for the three unknown current values: So, I1 is 5 amps, I2 is 4 amps, and I3 is a negative 1 amp. But what does “negative” current mean? In this case, it means that our assumed direction for I3 was opposite of its real direction. Going back to our original circuit, we can re-draw the current arrow for I3 (and re-draw the polarity of R3‘s voltage drop to match): Notice how current is being pushed backwards through battery 2 (electrons flowing “up”) due to the higher voltage of battery 1 (whose current is pointed “down” as it normally would)! Despite the fact that battery B2‘s polarity is trying to push electrons down in that branch of the circuit, electrons are being forced backwards through it due to the superior voltage of battery B1. Does this mean that the stronger battery will always “win” and the weaker battery always get current forced through it backwards? No! It actually depends on both the batteries’ relative voltages and the resistor values in the circuit. 
The only sure way to determine what’s going on is to take the time to mathematically analyze the network. Now that we know the magnitude of all currents in this circuit, we can calculate voltage drops across all resistors with Ohm’s Law (E=IR): Let us now analyze this network using SPICE to verify our voltage figures. We could analyze current as well with SPICE, but since that requires the insertion of extra components into the circuit, and because we know that if the voltages are all the same and all the resistances are the same, the currents must all be the same, I’ll opt for the less complex analysis. Here’s a re-drawing of our circuit, complete with node numbers for SPICE to reference: Sure enough, the voltage figures all turn out to be the same: 20 volts across R1 (nodes 1 and 2), 8 volts across R2 (nodes 2 and 0), and 1 volt across R3 (nodes 2 and 3). Take note of the signs of all these voltage figures: they’re all positive values! SPICE bases its polarities on the order in which nodes are listed, the first node being positive and the second node negative. For example, a figure of positive (+) 20 volts between nodes 1 and 2 means that node 1 is positive with respect to node 2. If the figure had come out negative in the SPICE analysis, we would have known that our actual polarity was “backwards” (node 1 negative with respect to node 2). Checking the node orders in the SPICE listing, we can see that the polarities all match what we determined through the Branch Current method of analysis. Review • Steps to follow for the “Branch Current” method of analysis: (1) Choose a node and assume directions of currents. (2) Write a KCL equation relating currents at the node. (3) Label resistor voltage drop polarities based on assumed currents. (4) Write KVL equations for each loop of the circuit, substituting the product IR for E in each resistor term of the equations. (5) Solve for unknown branch currents (simultaneous equations). 
(6) If any solution is negative, then the assumed direction of current for that solution is wrong! (7) Solve for voltage drops across all resistors (E=IR).
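As a sketch of what a calculator does internally, the three branch-current equations can be solved with a short Gaussian-elimination routine. The component values below (B1 = 28 V, B2 = 7 V, R1 = 4 Ω, R2 = 2 Ω, R3 = 1 Ω) are inferred from the worked solution in this section, and the equations are reconstructed from them:

```python
def gauss_solve(A, b):
    """Solve A*x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        # Pivot: bring the row with the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# One KCL equation and two KVL equations in the three branch currents:
A = [[1, -1, 1],    #  I1 -   I2 +   I3 = 0   (currents at the node)
     [4,  2, 0],    # 4*I1 + 2*I2        = 28  (left loop KVL)
     [0,  2, 1]]    #        2*I2 + 1*I3 = 7   (right loop KVL)
b = [0, 28, 7]

I1, I2, I3 = gauss_solve(A, b)
print(I1, I2, I3)   # 5.0 4.0 -1.0 (the negative I3 flags a wrong assumed direction)
print(4 * I1, 2 * I2, 1 * abs(I3))   # voltage drops E=IR: 20.0 8.0 1.0
```

The solution reproduces the currents and voltage drops stated above, including the negative sign on I3 that signals a reversed assumed direction.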
10.03: Mesh Current Method and Analysis
Mesh Current, Conventional Method Let’s see how this method works on the same example problem: Identifying Loops in a Circuit The first step in the Mesh Current method is to identify “loops” within the circuit encompassing all components. In our example circuit, the loop formed by B1, R1, and R2 will be the first while the loop formed by B2, R2, and R3 will be the second. The strangest part of the Mesh Current method is envisioning circulating currents in each of the loops. In fact, this method gets its name from the idea of these currents meshing together between loops like sets of spinning gears: The choice of each current’s direction is entirely arbitrary, just as in the Branch Current method, but the resulting equations are easier to solve if the currents are going the same direction through intersecting components (note how currents I1 and I2 are both going “up” through resistor R2, where they “mesh,” or intersect). If the assumed direction of a mesh current is wrong, the answer for that current will have a negative value. Label the Voltage Drop Polarities The next step is to label all voltage drop polarities across resistors according to the assumed directions of the mesh currents. Remember that the “upstream” end of a resistor will always be negative, and the “downstream” end of a resistor positive with respect to each other, since electrons are negatively charged. The battery polarities, of course, are dictated by their symbol orientations in the diagram, and may or may not “agree” with the resistor polarities (assumed current directions): Using Kirchhoff’s Voltage Law, we can now step around each of these loops, generating equations representative of the component voltage drops and polarities. As with the Branch Current method, we will denote a resistor’s voltage drop as the product of the resistance (in ohms) and its respective mesh current (that quantity being unknown at this point). 
Where two currents mesh together, we will write that term in the equation with resistor current being the sum of the two meshing currents. Tracing the Left Loop of the Circuit with Equations Tracing the left loop of the circuit, starting from the upper-left corner and moving counter-clockwise (the choice of starting points and directions is ultimately irrelevant), counting polarity as if we had a voltmeter in hand, red lead on the point ahead and black lead on the point behind, we get this equation: Notice that the middle term of the equation uses the sum of mesh currents I1 and I2 as the current through resistor R2. This is because mesh currents I1 and I2 are going the same direction through R2, and thus complement each other. Distributing the coefficient of 2 to the I1 and I2 terms, and then combining I1 terms in the equation, we can simplify as such: At this time we have one equation with two unknowns. To be able to solve for two unknown mesh currents, we must have two equations. If we trace the other loop of the circuit, we can obtain another KVL equation and have enough data to solve for the two currents. Creature of habit that I am, I’ll start at the upper-left hand corner of the right loop and trace counter-clockwise: Simplifying the equation as before, we end up with: Now, with two equations, we can use one of several methods to mathematically solve for the unknown currents I1 and I2: Knowing that these solutions are values for mesh currents, not branch currents, we must go back to our diagram to see how they fit together to give currents through all components: The solution of -1 amp for I2 means that our initially assumed direction of current was incorrect. In actuality, I2 is flowing in a counter-clockwise direction at a value of (positive) 1 amp: This change of current direction from what was first assumed will alter the polarity of the voltage drops across R2 and R3 due to current I2. 
From here, we can say that the current through R1 is 5 amps, with the voltage drop across R1 being the product of current and resistance (E=IR), 20 volts (positive on the left and negative on the right). Also, we can safely say that the current through R3 is 1 amp, with a voltage drop of 1 volt (E=IR), positive on the left and negative on the right. But what is happening at R2? Mesh current I1 is going “up” through R2, while mesh current I2 is going “down” through R2. To determine the actual current through R2, we must see how mesh currents I1 and I2 interact (in this case they’re in opposition), and algebraically add them to arrive at a final value. Since I1 is going “up” at 5 amps, and I2 is going “down” at 1 amp, the real current through R2 must be a value of 4 amps, going “up:” A current of 4 amps through R2‘s resistance of 2 Ω gives us a voltage drop of 8 volts (E=IR), positive on the top and negative on the bottom. Advantage of Mesh Current Analysis The primary advantage of Mesh Current analysis is that it generally allows for the solution of a large network with fewer unknown values and fewer simultaneous equations. Our example problem took three equations to solve the Branch Current method and only two equations using the Mesh Current method. This advantage is much greater as networks increase in complexity: To solve this network using Branch Currents, we’d have to establish five variables to account for each and every unique current in the circuit (I1 through I5). This would require five equations for solution, in the form of two KCL equations and three KVL equations (two equations for KCL at the nodes, and three equations for KVL in each loop): I suppose if you have nothing better to do with your time than to solve for five unknown variables with five equations, you might not mind using the Branch Current method of analysis for this circuit. 
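The two-loop worked example above can be checked numerically. Reconstructing the simplified loop equations from the component values implied by the worked solution (6I1 + 2I2 = 28 for the left loop, 2I1 + 3I2 = 7 for the right), a few lines of code confirm the mesh currents and the branch currents derived from them:

```python
# Cramer's rule on the two reconstructed mesh equations:
#   6*I1 + 2*I2 = 28   (left loop)
#   2*I1 + 3*I2 = 7    (right loop)
det = 6 * 3 - 2 * 2                  # determinant of the coefficient matrix
I1 = (28 * 3 - 7 * 2) / det
I2 = (6 * 7 - 2 * 28) / det
print(I1, I2)        # 5.0 -1.0 (the negative sign flags a wrong assumed direction)

# Branch currents follow by superposing the mesh currents:
print(I1)            # through R1: 5 A
print(I1 + I2)       # through R2: 4 A (both meshes were assumed "up" through R2)
print(abs(I2))       # through R3: 1 A, opposite the assumed direction
```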
For those of us who have better things to do with our time, the Mesh Current method is a whole lot easier, requiring only three unknowns and three equations to solve: Fewer equations to work with is a decided advantage, especially when performing simultaneous equation solution by hand (without a calculator). Another type of circuit that lends itself well to Mesh Current is the unbalanced Wheatstone Bridge. Take this circuit, for example: Since the ratios of R1/R4 and R2/R5 are unequal, we know that there will be voltage across resistor R3, and some amount of current through it. As discussed at the beginning of this chapter, this type of circuit is irreducible by normal series-parallel analysis, and may only be analyzed by some other method. We could apply the Branch Current method to this circuit, but it would require six currents (I1 through I6), leading to a very large set of simultaneous equations to solve. Using the Mesh Current method, though, we may solve for all currents and voltages with far fewer variables. The first step in the Mesh Current method is to draw just enough mesh currents to account for all components in the circuit. Looking at our bridge circuit, it should be obvious where to place two of these currents: The directions of these mesh currents, of course, are arbitrary. However, two mesh currents are not enough in this circuit, because neither I1 nor I2 goes through the battery. So, we must add a third mesh current, I3: Here, I have chosen I3 to loop from the bottom side of the battery, through R4, through R1, and back to the top side of the battery. This is not the only path I could have chosen for I3, but it seems the simplest. Now, we must label the resistor voltage drop polarities, following each of the assumed currents' directions: Notice something very important here: at resistor R4, the polarities for the respective mesh currents do not agree. This is because those mesh currents (I2 and I3) are going through R4 in different directions. 
This does not preclude the use of the Mesh Current method of analysis, but it does complicate it a bit. Though later, we will show how to avoid the R4 current clash. (See Example below) Generating a KVL equation for the top loop of the bridge, starting from the top node and tracing in a clockwise direction: In this equation, we represent the common directions of currents by their sums through common resistors. For example, resistor R3, with a value of 100 Ω, has its voltage drop represented in the above KVL equation by the expression 100(I1 + I2), since both currents I1 and I2 go through R3 from right to left. The same may be said for resistor R1, with its voltage drop expression shown as 150(I1 + I3), since both I1 and I3 go from bottom to top through that resistor, and thus work together to generate its voltage drop. Generating a KVL equation for the bottom loop of the bridge will not be so easy, since we have two currents going against each other through resistor R4. Here is how I do it (starting at the right-hand node, and tracing counter-clockwise): Note how the second term in the equation’s original form has resistor R4‘s value of 300 Ω multiplied by the difference between I2 and I3 (I2 - I3). This is how we represent the combined effect of two mesh currents going in opposite directions through the same component. Choosing the appropriate mathematical signs is very important here: 300(I2 - I3) does not mean the same thing as 300(I3 - I2). I chose to write 300(I2 - I3) because I was thinking first of I2‘s effect (creating a positive voltage drop, measuring with an imaginary voltmeter across R4, red lead on the bottom and black lead on the top), and secondarily of I3‘s effect (creating a negative voltage drop, red lead on the bottom and black lead on the top). 
If I had thought in terms of I3's effect first and I2's effect secondarily, holding my imaginary voltmeter leads in the same positions (red on bottom and black on top), the expression would have been -300(I3 - I2). Note that this expression is mathematically equivalent to the first one: +300(I2 - I3). Well, that takes care of two equations, but I still need a third equation to complete my simultaneous equation set of three variables, three equations. This third equation must also include the battery's voltage, which up to this point does not appear in either of the two previous KVL equations. To generate this equation, I will trace a loop again with my imaginary voltmeter starting from the battery's bottom (negative) terminal, stepping clockwise (again, the direction in which I step is arbitrary, and does not need to be the same as the direction of the mesh current in that loop): Solving for I1, I2, and I3 using whatever simultaneous equation method we prefer: Example: Use Octave to find the solution for I1, I2, and I3 from the above simplified form of equations. Solution: In Octave, an open source Matlab® clone, enter the coefficients into the A matrix between square brackets with column elements comma separated, and rows semicolon separated. Enter the voltages into the column vector: b. The unknown currents I1, I2, and I3 are calculated by the command: x=A\b. These are contained within the x column vector. The negative value arrived at for I1 tells us that the assumed direction for that mesh current was incorrect. Thus, the actual current values through each resistor are as such: Calculating voltage drops across each resistor: A SPICE simulation confirms the accuracy of our voltage calculations: Example: (a) Find a new path for current I3 that does not produce a conflicting polarity on any resistor compared to I1 or I2. R4 was the offending component. (b) Find values for I1, I2, and I3. (c) Find the five resistor currents and compare to the previous values. 
Solution: (a) Route I3 through R5, R3 and R1 as shown: Note that the conflicting polarity on R4 has been removed. Moreover, none of the other resistors have conflicting polarities. (b) Octave, an open source (free) Matlab® clone, yields a mesh current vector at "x": Not all currents I1, I2, and I3 are the same (I2) as the previous bridge because of different loop paths. However, the resistor currents compare to the previous values: Since the resistor currents are the same as the previous values, the resistor voltages will be identical and need not be calculated again. Review • Steps to follow for the "Mesh Current" method of analysis: (1) Draw mesh currents in loops of circuit, enough to account for all components. (2) Label resistor voltage drop polarities based on assumed directions of mesh currents. (3) Write KVL equations for each loop of the circuit, substituting the product IR for E in each resistor term of the equation. Where two mesh currents intersect through a component, express the current as the algebraic sum of those two mesh currents (i.e. I1 + I2) if the currents go in the same direction through that component. If not, express the current as the difference (i.e. I1 - I2). (4) Solve for unknown mesh currents (simultaneous equations). (5) If any solution is negative, then the assumed current direction is wrong! (6) Algebraically add mesh currents to find current in components sharing multiple mesh currents. (7) Solve for voltage drops across all resistors (E=IR). Mesh Current by Inspection We take a second look at the "mesh current method" with all the currents running counterclockwise (ccw). The motivation is to simplify the writing of mesh equations by ignoring the resistor voltage drop polarity. Though, we must pay attention to the polarity of voltage sources with respect to assumed current direction. The sign of the resistor voltage drops will follow a fixed pattern. 
If we write a set of conventional mesh-current equations for the circuit below, where we do pay attention to the signs of the voltage drop across the resistors, we may rearrange the coefficients into a fixed pattern: Once rearranged, we may write equations by inspection. The signs of the coefficients follow a fixed pattern in the pair above, or the set of three in the rules below. Mesh Current Rules: • This method assumes electron flow (not conventional current flow). • Replace any current source in parallel with a resistor with an equivalent voltage source in series with an equivalent resistance. • Ignoring current direction or voltage polarity on resistors, draw counterclockwise current loops traversing all components. Avoid nested loops. • Write voltage-law equations in terms of unknown currents: I1, I2, and I3. Equation 1, coefficient 1; equation 2, coefficient 2; and equation 3, coefficient 3 are the positive sums of resistors around the respective loops. • All other coefficients are negative, representative of the resistance common to a pair of loops. Equation 1, coefficient 2 is the resistor common to loops 1 and 2; coefficient 3 is the resistor common to loops 1 and 3. Repeat for other equations and coefficients. • The right-hand side of the equations is equal to any electron-flow voltage source. A voltage rise with respect to the counterclockwise assumed current is positive, and 0 for no voltage source. • Solve equations for mesh currents: I1, I2, and I3. • Solve for currents through individual resistors with KCL. • Solve for voltages with Ohm's Law and KVL. While the above rules are specific to a three-mesh circuit, they may be extended to smaller or larger meshes. The figure below illustrates the application of the rules. The three currents are all drawn in the same direction, counterclockwise. One KVL equation is written for each of the three loops. Note that there is no polarity drawn on the resistors. 
We do not need it to determine the signs of the coefficients. Though we do need to pay attention to the polarity of the voltage source with respect to current direction. The I3 counterclockwise current traverses the 24V source from (+) to (-). This is a voltage rise for electron current flow. Therefore, the third equation right-hand side is +24V. In Octave, enter the coefficients into the A matrix with column elements comma separated, and rows semicolon separated. Enter the voltages into the column vector b. Solve for the unknown currents: I1, I2, and I3 with the command: x=A\b. These currents are contained within the x column vector. The positive values indicate that the three mesh currents all flow in the assumed counterclockwise direction. The mesh currents match the previous solution by a different mesh current method. The calculation of resistor voltages and currents will be identical to the previous solution. No need to repeat here. Note that electrical engineering texts are based on conventional current flow. The loop-current, mesh-current method in those texts will run the assumed mesh currents clockwise. The conventional current flows out the (+) terminal of the battery through the circuit, returning to the (-) terminal. A conventional current voltage rise corresponds to tracing the assumed current from (-) to (+) through any voltage sources. One more example of a previous circuit follows. The resistance around loop 1 is 6 Ω, around loop 2: 3 Ω. The resistance common to both loops is 2 Ω. Note the coefficients of I1 and I2 in the pair of equations. Tracing the assumed counterclockwise loop 1 current through B1 from (+) to (-) corresponds to an electron current flow voltage rise. Thus, the sign of the 28 V is positive. The loop 2 counterclockwise assumed current traces (-) to (+) through B2, a voltage drop. Thus, the sign of B2 is negative, -7 in the 2nd mesh equation. Once again, there are no polarity markings on the resistors. 
Nor do they figure into the equations. The currents I1 = 5 A, and I2 = 1 A are both positive. They both flow in the direction of the counterclockwise loops. This compares with previous results. Summary: • The modified mesh-current method avoids having to determine the signs of the equation coefficients by drawing all mesh currents counterclockwise for electron current flow. • However, we do need to determine the sign of any voltage sources in the loop. The voltage source is positive if the assumed ccw current flows with the battery (source). The sign is negative if the assumed ccw current flows against the battery. • See rules above for details.
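The two-loop example above translates directly into code by the inspection rules: the diagonal entries are the loop self-resistances (6 Ω and 3 Ω), the off-diagonal entries are the negated common resistance (2 Ω), and the right-hand side holds the source voltages (+28 V rise, -7 V drop):

```python
# Mesh-by-inspection equations for the two-loop example:
#   [ 6  -2 ] [I1]   [ 28 ]
#   [-2   3 ] [I2] = [ -7 ]
A = [[6, -2],
     [-2, 3]]
b = [28, -7]

# Solve the 2x2 system directly via its determinant:
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
I1 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
I2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
print(I1, I2)   # 5.0 1.0 -- both positive, so both ccw assumptions were right
```

This mirrors the Octave x=A\b solution in the text, and reproduces the I1 = 5 A, I2 = 1 A result.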
10.04: Node Voltage Method
Method for Node Voltage Calculation We start with a circuit having conventional voltage sources. A common node E0 is chosen as a reference point. The node voltages E1 and E2 are calculated with respect to this point. A voltage source in series with a resistance must be replaced by an equivalent current source in parallel with the resistance. We will write KCL equations for each node. The right hand side of the equation is the value of the current source feeding the node. Replacing voltage sources and associated series resistors with equivalent current sources and parallel resistors yields the modified circuit. Substitute resistor conductances in siemens for resistance in ohms. The parallel conductances (resistors) may be combined by addition of the conductances. Though, we will not redraw the circuit. The circuit is ready for application of the node voltage method. Deriving a general node voltage method, we write a pair of KCL equations in terms of unknown node voltages V1 and V2 this one time. We do this to illustrate a pattern for writing equations by inspection. The coefficients of the last pair of equations above have been rearranged to show a pattern. The sum of conductances connected to the first node is the positive coefficient of the first voltage in equation (1). The sum of conductances connected to the second node is the positive coefficient of the second voltage in equation (2). The other coefficients are negative, representing conductances between nodes. For both equations, the right hand side is equal to the respective current source connected to the node. This pattern allows us to quickly write the equations by inspection. This leads to a set of rules for the node voltage method of analysis. Node Voltage Rules: • Convert voltage sources in series with a resistor to an equivalent current source with the resistor in parallel. • Change resistor values to conductances. • Select a reference node (E0). • Assign unknown voltages E1, E2, ... 
EN to remaining nodes. • Write a KCL equation for each node 1, 2, ... N. The positive coefficient of the first voltage in the first equation is the sum of conductances connected to the node. The coefficient for the second voltage in the second equation is the sum of conductances connected to that node. Repeat for coefficient of third voltage, third equation, and other equations. These coefficients fall on a diagonal. • All other coefficients for all equations are negative, representing conductances between nodes. The first equation, second coefficient is the conductance from node 1 to node 2, the third coefficient is the conductance from node 1 to node 3. Fill in negative coefficients for other equations. • The right hand side of the equations is the current source connected to the respective nodes. • Solve system of equations for unknown node voltages. Example: Set up the equations and solve for the node voltages using the numerical values in the above figure. Solution: The solution of two equations can be performed with a calculator, or with Octave (not shown). The solution is verified with SPICE based on the original schematic diagram with voltage sources. Though, the circuit with the current sources could have been simulated. One more example. This one has three nodes. We do not list the conductances on the schematic diagram. However, G1 = 1/R1, etc. There are three nodes to write equations for by inspection. Note that the coefficients are positive for equation (1) E1, equation (2) E2, and equation (3) E3. These are the sums of all conductances connected to the nodes. All other coefficients are negative, representing a conductance between nodes. The right hand side of the equations is the associated current source, 0.136092 A for the only current source at node 1. The other equations are zero on the right hand side for lack of current sources. We are too lazy to calculate the conductances for the resistors on the diagram. 
Thus, the subscripted G's are the coefficients. We are so lazy that we enter reciprocal resistances and sums of reciprocal resistances into the Octave "A" matrix, letting Octave compute the matrix of conductances after "A=". The initial entry line was so long that it was split into three rows. This is different from previous examples. The entered "A" matrix is delineated by starting and ending square brackets. Column elements are space separated. Rows are "new line" separated. Commas and semicolons are not needed as separators. Though, the current vector at "b" is semicolon separated to yield a column vector of currents. Note that the "A" matrix diagonal coefficients are positive, and that all other coefficients are negative. The solution as a voltage vector is at "x". E1 = 24.000 V, E2 = 17.655 V, E3 = 19.310 V. These three voltages compare to the previous mesh current and SPICE solutions to the unbalanced bridge problem. This is no coincidence, for the 0.13609 A current source was purposely chosen to yield the 24 V used as a voltage source in that problem. Summary • Given a network of conductances and current sources, the node voltage method of circuit analysis solves for unknown node voltages from KCL equations. • See rules above for details in writing the equations by inspection.
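To make the inspection pattern concrete, here is a small two-node network with hypothetical values invented purely for illustration: a 1 A current source feeding node 1, G1 = 0.5 S from node 1 to the reference node, G2 = 0.25 S between nodes 1 and 2, and G3 = 0.2 S from node 2 to the reference node.

```python
from fractions import Fraction as F

# Hypothetical component values, chosen only for this illustration:
G1, G2, G3 = F(1, 2), F(1, 4), F(1, 5)   # conductances in siemens
Is = F(1)                                # 1 A source into node 1

# Equations by inspection (diagonal positive, off-diagonal negative):
#   (G1 + G2)*E1 -        G2*E2 = Is     node 1
#        -G2*E1 + (G2 + G3)*E2 = 0      node 2
# From the node-2 equation, E2 = G2/(G2 + G3) * E1. Substituting:
E1 = Is / (G1 + G2 - G2 * G2 / (G2 + G3))
E2 = G2 / (G2 + G3) * E1
print(E1, E2)   # 18/11 10/11 (volts)

# Sanity check: KCL at node 1 -- source current equals current leaving.
assert G1 * E1 + G2 * (E1 - E2) == Is
```

Using exact fractions keeps the arithmetic transparent; a calculator or Octave's x=A\b would give the same voltages in decimal form.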
Anyone who’s studied geometry should be familiar with the concept of a theorem: a relatively simple rule used to solve a problem, derived from a more intensive analysis using fundamental rules of mathematics. At least hypothetically, any problem in math can be solved just by using the simple rules of arithmetic (in fact, this is how modern digital computers carry out the most complex mathematical calculations: by repeating many cycles of additions and subtractions!), but human beings aren’t as consistent or as fast as a digital computer. We need “shortcut” methods in order to avoid procedural errors. In electric network analysis, the fundamental rules are Ohm’s Law and Kirchhoff’s Laws. While these humble laws may be applied to analyze just about any circuit configuration (even if we have to resort to complex algebra to handle multiple unknowns), there are some “shortcut” methods of analysis to make the math easier for the average human. As with any theorem of geometry or algebra, these network theorems are derived from fundamental rules. In this chapter, I’m not going to delve into the formal proofs of any of these theorems. If you doubt their validity, you can always empirically test them by setting up example circuits and calculating values using the “old” (simultaneous equation) methods versus the “new” theorems, to see if the answers coincide. They always should! 10.06: Millman’s Theorem In Millman’s Theorem, the circuit is re-drawn as a parallel network of branches, each branch containing a resistor or series battery/resistor combination. Millman’s Theorem is applicable only to those circuits which can be re-drawn accordingly. Here again is our example circuit used for the last two analysis methods: And here is that same circuit, re-drawn for the sake of applying Millman’s Theorem: By considering the supply voltage within each branch and the resistance within each branch, Millman’s Theorem will tell us the voltage across all branches. 
Please note that I’ve labeled the battery in the rightmost branch as “B3” to clearly denote it as being in the third branch, even though there is no “B2” in the circuit! Millman’s Theorem is nothing more than a long equation, applied to any circuit drawn as a set of parallel-connected branches, each branch with its own voltage source and series resistance: Substituting actual voltage and resistance figures from our example circuit for the variable terms of this equation, we get the following expression: The final answer of 8 volts is the voltage seen across all parallel branches, like this: The polarities of all voltages in Millman’s Theorem are referenced to the same point. In the example circuit above, I used the bottom wire of the parallel circuit as my reference point, and so the voltages within each branch (28 for the R1 branch, 0 for the R2 branch, and 7 for the R3 branch) were inserted into the equation as positive numbers. Likewise, when the answer came out to 8 volts (positive), this meant that the top wire of the circuit was positive with respect to the bottom wire (the original point of reference). If both batteries had been connected backwards (negative ends up and positive ends down), the voltage for branch 1 would have been entered into the equation as -28 volts, the voltage for branch 3 as -7 volts, and the resulting answer of -8 volts would have told us that the top wire was negative with respect to the bottom wire (our initial point of reference). 
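The Millman equation can be sketched in a few lines of Python, using the branch values stated in the text (28 V with R1 = 4 Ω, the source-less R2 = 2 Ω branch entered as 0 V, and 7 V with R3 = 1 Ω):

```python
# Millman's Theorem: V = (sum of E/R per branch) / (sum of 1/R per branch).
branches = [(28.0, 4.0),   # B1 = 28 V in series with R1 = 4 ohms
            (0.0,  2.0),   # R2 = 2 ohms, no source in this branch
            (7.0,  1.0)]   # B3 = 7 V in series with R3 = 1 ohm

numerator   = sum(E / R for E, R in branches)   # branch E/R (current) terms
denominator = sum(1 / R for E, R in branches)   # branch conductances
V_millman = numerator / denominator
print(V_millman)                                # 8.0 volts across all branches
```

Reversing a battery simply means entering its branch voltage as a negative number, exactly as described above.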
To solve for resistor voltage drops, the Millman voltage (across the parallel network) must be compared against the voltage source within each branch, using the principle of voltages adding in series to determine the magnitude and polarity of voltage across each resistor: To solve for branch currents, each resistor voltage drop can be divided by its respective resistance (I=E/R): The direction of current through each resistor is determined by the polarity across each resistor, not by the polarity across each battery, as current can be forced backwards through a battery, as is the case with B3 in the example circuit. This is important to keep in mind, since Millman’s Theorem doesn’t provide as direct an indication of “wrong” current direction as does the Branch Current or Mesh Current methods. You must pay close attention to the polarities of resistor voltage drops as given by Kirchhoff’s Voltage Law, determining direction of currents from that. Millman’s Theorem is very convenient for determining the voltage across a set of parallel branches, where there are enough voltage sources present to preclude solution via regular series-parallel reduction method. It also is easy in the sense that it doesn’t require the use of simultaneous equations. However, it is limited in that it only applies to circuits which can be re-drawn to fit this form. It cannot be used, for example, to solve an unbalanced bridge circuit. And, even in cases where Millman’s Theorem can be applied, the solution of individual resistor voltage drops can be a bit daunting to some, the Millman’s Theorem equation only providing a single figure for branch voltage. As you will see, each network analysis method has its own advantages and disadvantages. Each method is a tool, and there is no tool that is perfect for all jobs. The skilled technician, however, carries these methods in his or her mind like a mechanic carries a set of tools in his or her tool box. 
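Continuing the Python sketch, the resistor drops and branch currents follow from the 8 V Millman figure. Here a positive current means the branch pushes current into the top wire and a negative current means the branch draws current from it; B3's negative result is the "backwards" current the text describes.

```python
V = 8.0   # Millman voltage across the parallel network (computed above)
branches = {"R1": (28.0, 4.0), "R2": (0.0, 2.0), "R3": (7.0, 1.0)}

currents = {}
for name, (E_src, R) in branches.items():
    V_R = E_src - V        # series voltages add: resistor drop is the
                           # difference between branch source and V
    I = V_R / R            # branch current, I = E/R
    currents[name] = I
    print(name, V_R, I)    # R1: 20 V, 5 A;  R2: -8 V, -4 A;  R3: -1 V, -1 A

# KCL check: the branch currents into the top wire must balance.
assert abs(sum(currents.values())) < 1e-9
```

The sign of each current, not the battery polarity, tells you its direction — which is exactly the Kirchhoff's Voltage Law bookkeeping the paragraph above warns about.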
The more tools you have equipped yourself with, the better prepared you will be for any eventuality. Review • Millman’s Theorem treats circuits as a parallel set of series-component branches. • All voltages entered and solved for in Millman’s Theorem are polarity-referenced at the same point in the circuit (typically the bottom wire of the parallel network).
Series/Parallel Analysis The strategy used in the Superposition Theorem is to eliminate all but one source of power within a network at a time, using series/parallel analysis to determine voltage drops (and/or currents) within the modified network for each power source separately. Then, once voltage drops and/or currents have been determined for each power source working separately, the values are all “superimposed” on top of each other (added algebraically) to find the actual voltage drops/currents with all sources active. Let’s look at our example circuit again and apply Superposition Theorem to it: Since we have two sources of power in this circuit, we will have to calculate two sets of values for voltage drops and/or currents, one for the circuit with only the 28-volt battery in effect . . . and one for the circuit with only the 7-volt battery in effect: When re-drawing the circuit for series/parallel analysis with one source, all other voltage sources are replaced by wires (shorts), and all current sources with open circuits (breaks). Since we only have voltage sources (batteries) in our example circuit, we will replace every inactive source during analysis with a wire. Analyzing the circuit with only the 28-volt battery, we obtain the following values for voltage and current: Analyzing the circuit with only the 7-volt battery, we obtain another set of values for voltage and current: When superimposing these values of voltage and current, we have to be very careful to consider polarity (voltage drop) and direction (electron flow), as the values have to be added algebraically. Applying these superimposed voltage figures to the circuit, the end result looks something like this: Currents add up algebraically as well and can either be superimposed as done with the resistor voltage drops or simply calculated from the final voltage drops and respective resistances (I=E/R). Either way, the answers will be the same. 
Here I will show the superposition method applied to current: Once again applying these superimposed figures to our circuit: Prerequisites for the Superposition Theorem Quite simple and elegant, don’t you think? It must be noted, though, that the Superposition Theorem works only for circuits that are reducible to series/parallel combinations for each of the power sources at a time (thus, this theorem is useless for analyzing an unbalanced bridge circuit), and it only works where the underlying equations are linear (no mathematical powers or roots). The requisite of linearity means that Superposition Theorem is only applicable for determining voltage and current, not power!!! Power dissipations, being nonlinear functions, do not algebraically add to an accurate total when only one source is considered at a time. The need for linearity also means this Theorem cannot be applied in circuits where the resistance of a component changes with voltage or current. Hence, networks containing components like lamps (incandescent or gas-discharge) or varistors could not be analyzed. Another prerequisite for Superposition Theorem is that all components must be “bilateral,” meaning that they behave the same with electrons flowing in either direction through them. Resistors have no polarity-specific behavior, and so the circuits we’ve been studying so far all meet this criterion. The Superposition Theorem finds use in the study of alternating current (AC) circuits, and semiconductor (amplifier) circuits, where AC is often mixed (superimposed) with DC. Because AC voltage and current equations (Ohm’s Law) are linear just like DC, we can use Superposition to analyze the circuit with just the DC power source, then just the AC power source, combining the results to tell what will happen with both AC and DC sources in effect. For now, though, Superposition will suffice as a break from having to do simultaneous equations to analyze a circuit. 
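The two passes of the superposition analysis can be sketched in Python for the running example circuit (R1 = 4 Ω, R2 = 2 Ω, R3 = 1 Ω, with the 28 V battery in the R1 branch and the 7 V battery in the R3 branch), tracking the voltage across R2:

```python
# Superposition: analyze one source at a time, then add algebraically.
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

R1, R2, R3 = 4.0, 2.0, 1.0

# Pass 1: 28 V source alone (7 V battery replaced with a wire).
# R2 and R3 are in parallel, in series with R1: a voltage divider.
V_a = 28.0 * parallel(R2, R3) / (R1 + parallel(R2, R3))   # 4 V across R2

# Pass 2: 7 V source alone (28 V battery replaced with a wire).
# Now R1 and R2 are in parallel, in series with R3.
V_b = 7.0 * parallel(R1, R2) / (R3 + parallel(R1, R2))    # 4 V across R2

# Both passes drop voltage across R2 with the same polarity, so the
# superimposed (actual) figures are the algebraic sum:
V_R2 = V_a + V_b          # 8 V
I_R2 = V_R2 / R2          # 4 A
print(V_R2, I_R2)
```

Note the 8 V and 4 A results agree with the Millman's Theorem analysis of the same circuit.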
Review • The Superposition Theorem states that a circuit can be analyzed with only one source of power at a time, the corresponding component voltages and currents algebraically added to find out what they’ll do with all power sources in effect. • To negate all but one power source for analysis, replace any source of voltage (batteries) with a wire; replace any current source with an open (break).
Thevenin’s Theorem is especially useful in analyzing power systems and other circuits where one particular resistor in the circuit (called the “load” resistor) is subject to change, and re-calculation of the circuit is necessary with each trial value of load resistance, to determine voltage across it and current through it. Let’s take another look at our example circuit: Let’s suppose that we decide to designate R2 as the “load” resistor in this circuit. We already have four methods of analysis at our disposal (Branch Current, Mesh Current, Millman’s Theorem, and Superposition Theorem) to use in determining voltage across R2 and current through R2, but each of these methods is time-consuming. Imagine repeating any of these methods over and over again to find what would happen if the load resistance changed (changing load resistance is very common in power systems, as multiple loads get switched on and off as needed, the total resistance of their parallel connections changing depending on how many are connected at a time). This could potentially involve a lot of work! Thevenin’s Theorem makes this easy by temporarily removing the load resistance from the original circuit and reducing what’s left to an equivalent circuit composed of a single voltage source and series resistance. The load resistance can then be re-connected to this “Thevenin equivalent circuit” and calculations carried out as if the whole network were nothing but a simple series circuit: after Thevenin conversion . . . The “Thevenin Equivalent Circuit” is the electrical equivalent of B1, R1, R3, and B2 as seen from the two points where our load resistor (R2) connects. The Thevenin equivalent circuit, if correctly derived, will behave exactly the same as the original circuit formed by B1, R1, R3, and B2. In other words, the load resistor (R2) voltage and current should be exactly the same for the same value of load resistance in the two circuits. 
The load resistor R2 cannot “tell the difference” between the original network of B1, R1, R3, and B2, and the Thevenin equivalent circuit of EThevenin, and RThevenin, provided that the values for EThevenin and RThevenin have been calculated correctly. The advantage in performing the “Thevenin conversion” to the simpler circuit, of course, is that it makes load voltage and load current so much easier to solve than in the original network. Calculating the equivalent Thevenin source voltage and series resistance is actually quite easy. First, the chosen load resistor is removed from the original circuit, replaced with a break (open circuit): Next, the voltage between the two points where the load resistor used to be attached is determined. Use whatever analysis methods are at your disposal to do this. In this case, the original circuit with the load resistor removed is nothing more than a simple series circuit with opposing batteries, and so we can determine the voltage across the open load terminals by applying the rules of series circuits, Ohm’s Law, and Kirchhoff’s Voltage Law: The voltage between the two load connection points can be figured from one of the battery voltages and one of the resistor voltage drops, and comes out to 11.2 volts. This is our “Thevenin voltage” (EThevenin) in the equivalent circuit: To find the Thevenin series resistance for our equivalent circuit, we need to take the original circuit (with the load resistor still removed), remove the power sources (in the same style as we did with the Superposition Theorem: voltage sources replaced with wires and current sources replaced with breaks), and figure the resistance from one load terminal to the other: With the removal of the two batteries, the total resistance measured at this location is equal to R1 and R3 in parallel: 0.8 Ω. 
This is our “Thevenin resistance” (RThevenin) for the equivalent circuit: With the load resistor (2 Ω) attached between the connection points, we can determine voltage across it and current through it as though the whole network were nothing more than a simple series circuit: Notice that the voltage and current figures for R2 (8 volts, 4 amps) are identical to those found using other methods of analysis. Also notice that the voltage and current figures for the Thevenin series resistance and the Thevenin source (total) do not apply to any component in the original, complex circuit. Thevenin’s Theorem is only useful for determining what happens to a single resistor in a network: the load. The advantage, of course, is that you can quickly determine what would happen to that single resistor if it were of a value other than 2 Ω without having to go through a lot of analysis again. Just plug in that other value for the load resistor into the Thevenin equivalent circuit and a little bit of series circuit calculation will give you the result. Review • Thevenin’s Theorem is a way to reduce a network to an equivalent circuit composed of a single voltage source, series resistance, and series load. • Steps to follow for Thevenin’s Theorem: (1) Find the Thevenin source voltage by removing the load resistor from the original circuit and calculating voltage across the open connection points where the load resistor used to be. (2) Find the Thevenin resistance by removing all power sources in the original circuit (voltage sources shorted and current sources open) and calculating total resistance between the open connection points. (3) Draw the Thevenin equivalent circuit, with the Thevenin voltage source in series with the Thevenin resistance. The load resistor re-attaches between the two open points of the equivalent circuit. (4) Analyze voltage and current for the load resistor following the rules for series circuits.
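The four review steps can be sketched in Python for the example circuit (B1 = 28 V with R1 = 4 Ω, B2 = 7 V with R3 = 1 Ω, load R2 = 2 Ω):

```python
# Thevenin's Theorem, step by step, for the example circuit.
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

B1, R1 = 28.0, 4.0
B2, R3 = 7.0, 1.0

# Step 1: remove the load.  What remains is a series loop of opposing
# batteries, so the loop current is the net voltage over total resistance,
# and the open-terminal (Thevenin) voltage follows from KVL.
I_loop = (B1 - B2) / (R1 + R3)      # 4.2 A around the loop
E_thevenin = B2 + I_loop * R3       # 11.2 V at the open load terminals

# Step 2: short the voltage sources; R1 and R3 appear in parallel
# as seen from the load terminals.
R_thevenin = parallel(R1, R3)       # 0.8 ohms

# Steps 3-4: reattach the load and analyze a simple series circuit.
R_load = 2.0
I_load = E_thevenin / (R_thevenin + R_load)   # 4 A
V_load = I_load * R_load                      # 8 V
print(E_thevenin, R_thevenin, V_load, I_load)
```

Trying a different load is now a one-line change to `R_load`, which is exactly the labor-saving point of the theorem.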
What is Norton’s Theorem? Norton’s Theorem states that it is possible to simplify any linear circuit, no matter how complex, to an equivalent circuit with just a single current source and parallel resistance connected to a load. Just as with Thevenin’s Theorem, the qualification of “linear” is identical to that found in the Superposition Theorem: all underlying equations must be linear (no exponents or roots). Simplifying Linear Circuits Contrasting our original example circuit against the Norton equivalent, it looks something like this: after Norton conversion . . . Remember that a current source is a component whose job is to provide a constant amount of current, outputting as much or as little voltage as necessary to maintain that constant current. Thevenin’s Theorem vs. Norton’s Theorem As with Thevenin’s Theorem, everything in the original circuit except the load resistance has been reduced to an equivalent circuit that is simpler to analyze. Also similar to Thevenin’s Theorem are the steps used in Norton’s Theorem to calculate the Norton source current (INorton) and Norton resistance (RNorton). As before, the first step is to identify the load resistance and remove it from the original circuit: Then, to find the Norton current (for the current source in the Norton equivalent circuit), place a direct wire (short) connection between the load points and determine the resultant current. Note that this step is exactly opposite the respective step in Thevenin’s Theorem, where we replaced the load resistor with a break (open circuit): With zero voltage dropped between the load resistor connection points, the current through R1 is strictly a function of B1‘s voltage and R1‘s resistance: 7 amps (I=E/R). Likewise, the current through R3 is now strictly a function of B2‘s voltage and R3‘s resistance: 7 amps (I=E/R). The total current through the short between the load connection points is the sum of these two currents: 7 amps + 7 amps = 14 amps. 
This figure of 14 amps becomes the Norton source current (INorton) in our equivalent circuit: Remember, the arrow notation for a current source points in the direction opposite that of electron flow. Again, apologies for the confusion. For better or for worse, this is standard electronic symbol notation. Blame Mr. Franklin again! To calculate the Norton resistance (RNorton), we do the exact same thing as we did for calculating Thevenin resistance (RThevenin): take the original circuit (with the load resistor still removed), remove the power sources (in the same style as we did with the Superposition Theorem: voltage sources replaced with wires and current sources replaced with breaks), and figure total resistance from one load connection point to the other: Now our Norton equivalent circuit looks like this: If we re-connect our original load resistance of 2 Ω, we can analyze the Norton circuit as a simple parallel arrangement: As with the Thevenin equivalent circuit, the only useful information from this analysis is the voltage and current values for R2; the rest of the information is irrelevant to the original circuit. However, the same advantages seen with Thevenin’s Theorem apply to Norton’s as well: if we wish to analyze load resistor voltage and current over several different values of load resistance, we can use the Norton equivalent circuit again and again, applying nothing more complex than simple parallel circuit analysis to determine what’s happening with each trial load. Review • Norton’s Theorem is a way to reduce a network to an equivalent circuit composed of a single current source, parallel resistance, and parallel load. • Steps to follow for Norton’s Theorem: (1) Find the Norton source current by removing the load resistor from the original circuit and calculating current through a short (wire) jumping across the open connection points where the load resistor used to be. 
(2) Find the Norton resistance by removing all power sources in the original circuit (voltage sources shorted and current sources open) and calculating total resistance between the open connection points. (3) Draw the Norton equivalent circuit, with the Norton current source in parallel with the Norton resistance. The load resistor re-attaches between the two open points of the equivalent circuit. (4) Analyze voltage and current for the load resistor following the rules for parallel circuits.
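The Norton steps mirror the Thevenin sketch, with a shorted load instead of an open one and a parallel analysis at the end. Using the same example values (B1 = 28 V, R1 = 4 Ω, B2 = 7 V, R3 = 1 Ω, load 2 Ω):

```python
# Norton's Theorem, step by step, for the example circuit.
def parallel(a, b):
    """Equivalent resistance of two resistors in parallel."""
    return a * b / (a + b)

B1, R1 = 28.0, 4.0
B2, R3 = 7.0, 1.0

# Step 1: short the load terminals.  With zero volts across the short,
# each branch delivers I = E/R into it, and the currents add.
I_norton = B1 / R1 + B2 / R3        # 7 A + 7 A = 14 A

# Step 2: identical to the Thevenin resistance step (sources removed).
R_norton = parallel(R1, R3)         # 0.8 ohms

# Steps 3-4: reattach the load and analyze a simple parallel circuit.
R_load = 2.0
V_load = I_norton * parallel(R_norton, R_load)   # 8 V across the load
I_load = V_load / R_load                         # 4 A through the load
print(I_norton, R_norton, V_load, I_load)
```

As expected, the load sees the same 8 volts and 4 amps here as in the Thevenin equivalent.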
You may have noticed that the procedure for calculating Thevenin resistance is identical to the procedure for calculating Norton resistance: remove all power sources and determine resistance between the open load connection points. As such, Thevenin and Norton resistances for the same original network must be equal. Using the example circuits from the last two sections, we can see that the two resistances are indeed equal: Considering the fact that both Thevenin and Norton equivalent circuits are intended to behave the same as the original network in supplying voltage and current to the load resistor (as seen from the perspective of the load connection points), these two equivalent circuits, having been derived from the same original network, should behave identically. This means that both Thevenin and Norton equivalent circuits should produce the same voltage across the load terminals with no load resistor attached. With the Thevenin equivalent, the open-circuited voltage would be equal to the Thevenin source voltage (no circuit current present to drop voltage across the series resistor), which is 11.2 volts in this case. With the Norton equivalent circuit, all 14 amps from the Norton current source would have to flow through the 0.8 Ω Norton resistance, producing the exact same voltage, 11.2 volts (E=IR). Thus, we can say that the Thevenin voltage is equal to the Norton current times the Norton resistance: So, if we wanted to convert a Norton equivalent circuit to a Thevenin equivalent circuit, we could use the same resistance and calculate the Thevenin voltage with Ohm’s Law. Conversely, both Thevenin and Norton equivalent circuits should generate the same amount of current through a short circuit across the load terminals. With the Norton equivalent, the short-circuit current would be exactly equal to the Norton source current, which is 14 amps in this case. 
With the Thevenin equivalent, all 11.2 volts would be applied across the 0.8 Ω Thevenin resistance, producing the exact same current through the short, 14 amps (I=E/R). Thus, we can say that the Norton current is equal to the Thevenin voltage divided by the Thevenin resistance: This equivalence between Thevenin and Norton circuits can be a useful tool in itself, as we shall see in the next section. Review • Thevenin and Norton resistances are equal. • Thevenin voltage is equal to Norton current times Norton resistance. • Norton current is equal to Thevenin voltage divided by Thevenin resistance. 10.11: Millman’s Theorem Revisited You may have wondered where we got that strange equation for the determination of “Millman Voltage” across parallel branches of a circuit where each branch contains a series resistance and voltage source: Parts of this equation should look familiar from equations we’ve seen before. For instance, the denominator of the large fraction looks conspicuously like the denominator of our parallel resistance equation. And, of course, the E/R terms in the numerator of the large fraction should give figures for current, Ohm’s Law being what it is (I=E/R). Now that we’ve covered Thevenin and Norton source equivalencies, we have the tools necessary to understand Millman’s equation. What Millman’s equation is actually doing is treating each branch (with its series voltage source and resistance) as a Thevenin equivalent circuit and then converting each one into equivalent Norton circuits. Thus, in the circuit above, battery B1 and resistor R1 are seen as a Thevenin source to be converted into a Norton source of 7 amps (28 volts / 4 Ω) in parallel with a 4 Ω resistor. The rightmost branch will be converted into a 7 amp current source (7 volts / 1 Ω) and 1 Ω resistor in parallel. 
The center branch, containing no voltage source at all, will be converted into a Norton source of 0 amps in parallel with a 2 Ω resistor: Since current sources directly add their respective currents in parallel, the total circuit current will be 7 + 0 + 7, or 14 amps. This addition of Norton source currents is what’s being represented in the numerator of the Millman equation: All the Norton resistances are in parallel with each other as well in the equivalent circuit, so they diminish to create a total resistance. This diminishing of source resistances is what’s being represented in the denominator of the Millman’s equation: In this case, the resistance total will be equal to 571.43 milliohms (571.43 mΩ). We can re-draw our equivalent circuit now as one with a single Norton current source and Norton resistance: Ohm’s Law can tell us the voltage across these two components now (E=IR): Let’s summarize what we know about the circuit thus far. We know that the total current in this circuit is given by the sum of all the branch voltages divided by their respective resistances. We also know that the total resistance is found by taking the reciprocal of all the branch resistance reciprocals. Furthermore, we should be well aware of the fact that total voltage across all the branches can be found by multiplying total current by total resistance (E=IR). All we need to do is put together the two equations we had earlier for total circuit current and total resistance, multiplying them to find total voltage: The Millman’s equation is nothing more than a Thevenin-to-Norton conversion matched together with the parallel resistance formula to find total voltage across all the branches of the circuit. So, hopefully some of the mystery is gone now!
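This Thevenin-to-Norton reading of Millman's equation can be sketched in Python: convert each branch to a Norton source, add the parallel currents, combine the parallel resistances by reciprocals, then apply E=IR.

```python
# Millman's equation reconstructed as a Thevenin-to-Norton conversion.
branches = [(28.0, 4.0),   # B1/R1 branch -> Norton source of 28/4 = 7 A
            (0.0,  2.0),   # center branch -> Norton source of 0 A
            (7.0,  1.0)]   # B3/R3 branch -> Norton source of 7/1 = 7 A

# Norton currents in parallel add directly (the equation's numerator).
I_total = sum(E / R for E, R in branches)        # 7 + 0 + 7 = 14 A

# Norton resistances in parallel combine by reciprocals (the denominator).
R_total = 1 / sum(1 / R for E, R in branches)    # about 571.43 milliohms

# Ohm's Law gives the voltage across the whole parallel set.
V_total = I_total * R_total                      # 8 V, the Millman voltage
print(I_total, R_total, V_total)
```

Multiplying the total current by the total resistance reproduces the Millman equation exactly, which is the whole point of the section: the "mystery" formula is just a source conversion plus the parallel resistance formula.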