7.02: Three-phase Power Systems
What Is a Split-Phase Power System?

Split-phase power systems achieve their high conductor efficiency and low safety risk by splitting up the total voltage into lesser parts and powering multiple loads at those lesser voltages, while drawing currents at levels typical of a full-voltage system. This technique, by the way, works just as well for DC power systems as it does for single-phase AC systems. Such systems are usually referred to as three-wire systems rather than split-phase, because “phase” is a concept restricted to AC.

But we know from our experience with vectors and complex numbers that AC voltages don’t always add up as we think they would if they are out of phase with each other. This principle, applied to power systems, can be put to use to make power systems with even greater conductor efficiency and lower shock hazard than split-phase.

Examples

Suppose that we had two sources of AC voltage connected in series, just like the split-phase system we saw before, except that each voltage source was 120° out of phase with the other: (Figure below)

Pair of 120 Vac sources phased 120°, similar to split-phase.

Since each voltage source is 120 volts, and each load resistor is connected directly in parallel with its respective source, the voltage across each load must be 120 volts as well. Given load currents of 83.33 amps, each load must still be dissipating 10 kilowatts of power. However, voltage between the two “hot” wires is not 240 volts (120 ∠ 0° − 120 ∠ 180°) because the phase difference between the two sources is not 180°. Instead, the voltage is:

E(total) = (120 V ∠ 0°) − (120 V ∠ 120°) = 207.85 V ∠ -30°

Nominally, we say that the voltage between “hot” conductors is 208 volts (rounding up), and thus the power system voltage is designated as 120/208.

If we calculate the current through the “neutral” conductor, we find that it is not zero, even with balanced load resistances. Kirchhoff’s Current Law tells us that the currents entering and exiting the node between the two loads must sum to zero: (Figure below)

I(neutral) = −I(load#1) − I(load#2) = −(83.33 A ∠ 0°) − (83.33 A ∠ 120°) = 83.33 A ∠ 240°

Neutral wire carries a current in the case of a pair of 120° phased sources.

So, we find that the “neutral” wire is carrying a full 83.33 amps, just like each “hot” wire. Note that we are still conveying 20 kW of total power to the two loads, with each load’s “hot” wire carrying 83.33 amps as before. With the same amount of current through each “hot” wire, we must use the same gauge copper conductors, so we haven’t reduced system cost over the split-phase 120/240 system. However, we have realized a gain in safety, because the overall voltage between the two “hot” conductors is 32 volts lower than it was in the split-phase system (208 volts instead of 240 volts).

The fact that the neutral wire is carrying 83.33 amps of current raises an interesting possibility: since it’s carrying current anyway, why not use that third wire as another “hot” conductor, powering another load resistor with a third 120 volt source having a phase angle of 240°? That way, we could transmit more power (another 10 kW) without having to add any more conductors. Let’s see how this might look: (Figure below)

With a third load phased 120° to the other two, the currents are the same as for two loads.

A full mathematical analysis of all the voltages and currents in this circuit would necessitate the use of a network theorem, the easiest being the Superposition Theorem.
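The original “120/208 polyphase power system” SPICE listing referenced in this section is not reproduced in the text; the sketch below shows how such a netlist might look. The node numbering and the 1.44 Ω load resistances (120 V / 83.33 A ≈ 1.44 Ω) are my assumptions:

120/208 polyphase power system
* three 120 V sources, 120 degrees apart, sharing neutral node 0
v1 1 0 ac 120 0
v2 2 0 ac 120 120
v3 3 0 ac 120 240
* three balanced loads (120 V / 83.33 A = 1.44 ohms), joined at node 4
r1 1 4 1.44
r2 2 4 1.44
r3 3 4 1.44
.ac lin 1 60 60
* load (phase) voltages: expect 120 V each
.print ac v(1,4) v(2,4) v(3,4)
* "hot"-to-"hot" (line) voltages: expect approximately 208 V each
.print ac v(1,2) v(2,3) v(3,1)
.end

Run through any SPICE-compatible simulator, the first .print line should report 120 volts across each load and the second approximately 207.8 volts between any two “hot” conductors.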
I’ll spare you the long, drawn-out calculations, because you should be able to intuitively understand that the three voltage sources at three different phase angles will deliver 120 volts each to a balanced triad of load resistors. For proof of this, we can use SPICE to do the math for us: (Figure below, SPICE listing: “120/208 polyphase power system”, reconstructed above)

SPICE circuit: Three 3-Φ loads phased at 120°.

Sure enough, we get 120 volts across each load resistor, with (approximately) 208 volts between any two “hot” conductors, and conductor currents equal to 83.33 amps. (Figure below) At that current and voltage, each load will be dissipating 10 kW of power.

Notice that this circuit has no “neutral” conductor to ensure stable voltage to all loads if one should open. What we have here is a situation similar to our split-phase power circuit with no “neutral” conductor: if one load should happen to fail open, the voltage drops across the remaining load(s) will change. To ensure load voltage stability in the event of another load opening, we need a neutral wire to connect the source node and load node together:

SPICE circuit annotated with simulation results: Three 3-Φ loads phased at 120°.

So long as the loads remain balanced (equal resistance, equal currents), the neutral wire will not have to carry any current at all. It is there just in case one or more load resistors should fail open (or be shut off through a disconnecting switch).

This circuit we’ve been analyzing with three voltage sources is called a polyphase circuit. The prefix “poly” simply means “more than one,” as in “polytheism” (belief in more than one deity), “polygon” (a geometrical shape made of multiple line segments: for example, pentagon and hexagon), and “polyatomic” (a substance composed of multiple types of atoms). Since the voltage sources are all at different phase angles (in this case, three different phase angles), this is a “polyphase” circuit. More specifically, it is a three-phase circuit, the kind used predominantly in large power distribution systems.

Let’s survey the advantages of a three-phase power system over a single-phase system of equivalent load voltage and power capacity. A single-phase system with three loads connected directly in parallel would have a very high total current (83.33 × 3, or 250 amps): (Figure below)

For comparison, three 10 kW loads on a 120 Vac system draw 250 A.

This would necessitate 3/0 gauge copper wire (very large!), at about 510 pounds per thousand feet, and with a considerable price tag attached. If the distance from source to load was 1000 feet, we would need over a half-ton of copper wire to do the job.

On the other hand, we could build a split-phase system with two 15 kW, 120 volt loads: (Figure below)

A split-phase system draws half the current (125 A at 240 Vac) compared to the 120 Vac system (250 A).

Our current is half of what it was with the simple parallel circuit, which is a great improvement. We could get away with using number 2 gauge copper wire at a total mass of about 600 pounds, figuring about 200 pounds per thousand feet with three runs of 1000 feet each between source and loads. However, we also have to consider the increased safety hazard of having 240 volts present in the system, even though each load only receives 120 volts. Overall, there is greater potential for a dangerous electric shock to occur.

When we contrast these two examples against our three-phase system (Figure above), the advantages are quite clear.
First, the conductor currents are quite a bit less (83.33 amps versus 125 or 250 amps), permitting the use of much thinner and lighter wire. We can use number 4 gauge wire at about 125 pounds per thousand feet, which will total 500 pounds (four runs of 1000 feet each) for our example circuit. This represents significant cost savings over the split-phase system, with the additional benefit that the maximum voltage in the system is lower (208 versus 240).

One question remains to be answered: how in the world do we get three AC voltage sources whose phase angles are exactly 120° apart? Obviously we can’t center-tap a transformer or alternator winding like we did in the split-phase system, since that can only give us voltage waveforms that are either in phase or 180° out of phase. Perhaps we could figure out some way to use capacitors and inductors to create phase shifts of 120°, but then those phase shifts would depend on the phase angles of our load impedances as well (substituting a capacitive or inductive load for a resistive load would change everything!).

The best way to get the phase shifts we’re looking for is to generate them at the source: construct the AC generator (alternator) providing the power in such a way that the rotating magnetic field passes by three sets of wire windings, each set spaced 120° apart around the circumference of the machine, as in Figure below.

(a) Single-phase alternator, (b) Three-phase alternator.

Together, the six “pole” windings of a three-phase alternator are connected to comprise three winding pairs, each pair producing AC voltage with a phase angle 120° shifted from either of the other two winding pairs. The interconnections between pairs of windings (as shown for the single-phase alternator: the jumper wire between windings 1a and 1b) have been omitted from the three-phase alternator drawing for simplicity.

In our example circuit, we showed the three voltage sources connected together in a “Y” configuration (sometimes called the “star” configuration), with one lead of each source tied to a common point (the node where we attached the “neutral” conductor). The common way to depict this connection scheme is to draw the windings in the shape of a “Y” like Figure below.

Alternator “Y” configuration.

The “Y” configuration is not the only option open to us, but it is probably the easiest to understand at first. More to come on this subject later in the chapter.

Review

• A single-phase power system is one where there is only one AC voltage source (one source voltage waveform).
• A split-phase power system is one where there are two voltage sources, 180° phase-shifted from each other, powering two series-connected loads. The advantage of this is the ability to have lower conductor currents while maintaining low load voltages for safety reasons.
• A polyphase power system uses multiple voltage sources at different phase angles from each other (many “phases” of voltage waveforms at work). A polyphase power system can deliver more power at less voltage with smaller-gauge conductors than single- or split-phase systems.
• The phase-shifted voltage sources necessary for a polyphase power system are created in alternators with multiple sets of wire windings. These winding sets are spaced around the circumference of the rotor’s rotation at the desired angle(s).
7.03: Phase Rotation
Let’s take the three-phase alternator design laid out earlier (Figure below) and watch what happens as the magnet rotates.

Three-phase alternator

The phase angle shift of 120° is a function of the actual rotational angle shift of the three pairs of windings (Figure below). If the magnet is rotating clockwise, winding 3 will generate its peak instantaneous voltage exactly 120° (of alternator shaft rotation) after winding 2, which hits its peak 120° after winding 1. The magnet passes by each pole pair at different positions in the rotational movement of the shaft. Where we decide to place the windings will dictate the amount of phase shift between the windings’ AC voltage waveforms. If we make winding 1 our “reference” voltage source for phase angle (0°), then winding 2 will have a phase angle of -120° (120° lagging, or 240° leading) and winding 3 an angle of -240° (or 120° leading).

This sequence of phase shifts has a definite order. For clockwise rotation of the shaft, the order is 1-2-3 (winding 1 peaks first, then winding 2, then winding 3). This order keeps repeating itself as long as we continue to rotate the alternator’s shaft. (Figure below)

Clockwise rotation phase sequence: 1-2-3.

However, if we reverse the rotation of the alternator’s shaft (turn it counter-clockwise), the magnet will pass by the pole pairs in the opposite sequence. Instead of 1-2-3, we’ll have 3-2-1. Now, winding 2’s waveform will be leading 120° ahead of 1 instead of lagging, and 3 will be another 120° ahead of 2. (Figure below)

Counterclockwise rotation phase sequence: 3-2-1.

The order of voltage waveform sequences in a polyphase system is called phase rotation or phase sequence. If we’re using a polyphase voltage source to power resistive loads, phase rotation will make no difference at all. Whether 1-2-3 or 3-2-1, the voltage and current magnitudes will all be the same. There are some applications of three-phase power, as we will see shortly, that depend on having phase rotation be one way or the other. Since voltmeters and ammeters would be useless in telling us what the phase rotation of an operating power system is, we need to have some other kind of instrument capable of doing the job.

One ingenious circuit design uses a capacitor to introduce a phase shift between voltage and current, which is then used to detect the sequence by way of comparison between the brightness of two indicator lamps in Figure below.

Phase sequence detector compares brightness of two lamps.

The two lamps are of equal filament resistance and wattage. The capacitor is sized to have approximately the same amount of reactance at system frequency as each lamp’s resistance. If the capacitor were to be replaced by a resistor of equal value to the lamps’ resistance, the two lamps would glow at equal brightness, the circuit being balanced. However, the capacitor introduces a phase shift between voltage and current in the third leg of the circuit equal to 90°. This phase shift, greater than 0° but less than 120°, skews the voltage and current values across the two lamps according to their phase shifts relative to phase 3. The following SPICE analysis demonstrates what will happen: (Figure below, “phase rotation detector—sequence = v1-v2-v3”)

SPICE circuit for phase sequence detector.
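The original netlist is not reproduced in this text. The reconstruction below follows the circuit description above; the 2650 Ω lamp resistances and 1 µF capacitor are my assumptions, chosen so that XC = 1/(2πfC) ≈ 2.65 kΩ at 60 Hz, approximately equal to the lamp resistance:

phase rotation detector -- sequence = v1-v2-v3
* three-phase source, 120 volts per phase
v1 1 0 ac 120 0
v2 2 0 ac 120 120
v3 3 0 ac 120 240
* two lamps modeled as resistors; the capacitor's reactance
* at 60 Hz roughly matches the 2650 ohm lamp resistance
r1 1 4 2650
r2 2 4 2650
c1 3 4 1u
.ac lin 1 60 60
* lamp voltages: expect unequal readings, revealing the sequence
.print ac v(1,4) v(2,4)
.end

Exchanging the phase angles of v2 and v3 (120 and 240) in this netlist simulates the reversed sequence, “phase rotation detector—sequence = v3-v2-v1,” referenced in the results that follow.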
The resulting phase shift from the capacitor causes the voltage across the phase 1 lamp (between nodes 1 and 4) to fall to 48.1 volts and the voltage across the phase 2 lamp (between nodes 2 and 4) to rise to 179.5 volts, making the first lamp dim and the second lamp bright. Just the opposite will happen if the phase sequence is reversed (“phase rotation detector—sequence = v3-v2-v1”): the first lamp receives 179.5 volts while the second receives only 48.1 volts.

We’ve investigated how phase rotation is produced (the order in which pole pairs get passed by the alternator’s rotating magnet) and how it can be changed by reversing the alternator’s shaft rotation. However, reversal of the alternator’s shaft rotation is not usually an option open to an end-user of electrical power supplied by a nationwide grid (“the” alternator actually being the combined total of all alternators in all power plants feeding the grid). There is a much easier way to reverse phase sequence than reversing alternator rotation: just exchange any two of the three “hot” wires going to a three-phase load.

This trick makes more sense if we take another look at a running phase sequence of a three-phase voltage source:

1-2-3-1-2-3-1-2-3-1-2-3 . . .

What is commonly designated as a “1-2-3” phase rotation could just as well be called “2-3-1” or “3-1-2,” going from left to right in the number string above. Likewise, the opposite rotation (3-2-1) could just as easily be called “2-1-3” or “1-3-2.”

Starting out with a phase rotation of 3-2-1, we can try all the possibilities for swapping any two of the wires at a time and see what happens to the resulting sequence in Figure below.

All possibilities of swapping any two wires.

No matter which pair of “hot” wires out of the three we choose to swap, the phase rotation ends up being reversed (3-2-1 gets changed to 2-3-1, 3-1-2, or 1-2-3, all equivalent to the opposite rotation).

Review

• Phase rotation, or phase sequence, is the order in which the voltage waveforms of a polyphase AC source reach their respective peaks. For a three-phase system, there are only two possible phase sequences: 1-2-3 and 3-2-1, corresponding to the two possible directions of alternator rotation.
• Phase rotation has no impact on resistive loads, but it will have an impact on unbalanced reactive loads, as shown in the operation of a phase rotation detector circuit.
• Phase rotation can be reversed by swapping any two of the three “hot” leads supplying three-phase power to a three-phase load.
7.04: Polyphase Motor Design
Perhaps the most important benefit of polyphase AC power over single-phase is the design and operation of AC motors. As we studied in the first chapter of this book, some types of AC motors are virtually identical in construction to their alternator (generator) counterparts, consisting of stationary wire windings and a rotating magnet assembly. (Other AC motor designs are not quite this simple, but we will leave those details to another lesson).

Clockwise AC motor operation.

If the rotating magnet is able to keep up with the frequency of the alternating current energizing the electromagnet windings (coils), it will continue to be pulled around clockwise. (Figure above) However, clockwise is not the only valid direction for this motor’s shaft to spin. It could just as easily be powered in a counter-clockwise direction by the same AC voltage waveform, as in Figure below.

Counterclockwise AC motor operation.

Notice that with the exact same sequence of polarity cycles (voltage, current, and magnetic poles produced by the coils), the magnetic rotor can spin in either direction. This is a common trait of all single-phase AC “induction” and “synchronous” motors: they have no normal or “correct” direction of rotation. The natural question should arise at this point: how can the motor get started in the intended direction if it can run either way just as well? The answer is that these motors need a little help getting started. Once helped to spin in a particular direction, they will continue to spin that way as long as AC power is maintained to the windings.

Where that “help” comes from for a single-phase AC motor to get going in one direction can vary. Usually, it comes from an additional set of windings positioned differently from the main set, and energized with an AC voltage that is out of phase with the main power. (Figure below)

Unidirectional-starting AC two-phase motor.

These supplementary coils are typically connected in series with a capacitor to introduce a phase shift in current between the two sets of windings. (Figure below)

Capacitor phase shift adds second phase.

That phase shift creates magnetic fields from coils 2a and 2b that are equally out of step with the fields from coils 1a and 1b. The result is a set of magnetic fields with a definite phase rotation. It is this phase rotation that pulls the rotating magnet around in a definite direction.

Polyphase AC motors require no such trickery to spin in a definite direction. Because their supply voltage waveforms already have a definite rotation sequence, so do the respective magnetic fields generated by the motor’s stationary windings. In fact, the combination of all three phase winding sets working together creates what is often called a rotating magnetic field. It was this concept of a rotating magnetic field that inspired Nikola Tesla to design the world’s first polyphase electrical systems (simply to make simpler, more efficient motors). The line current and safety advantages of polyphase power over single-phase power were discovered later.

What can be a confusing concept is made much clearer through analogy. Have you ever seen a row of blinking light bulbs such as the kind used in Christmas decorations? Some strings appear to “move” in a definite direction as the bulbs alternately glow and darken in sequence. Other strings just blink on and off with no apparent motion. What makes the difference between the two types of bulb strings? Answer: phase shift!
Examine a string of lights where every other bulb is lit at any given time, as in Figure below.

Phase sequence 1-2-1-2: lamps appear to move.

When all of the “1” bulbs are lit, the “2” bulbs are dark, and vice versa. With this blinking sequence, there is no definite “motion” to the bulbs’ light. Your eyes could follow a “motion” from left to right just as easily as from right to left. Technically, the “1” and “2” bulb blinking sequences are 180° out of phase (exactly opposite each other). This is analogous to the single-phase AC motor, which can run just as easily in either direction, but which cannot start on its own because its magnetic field alternation lacks a definite “rotation.”

Now let’s examine a string of lights where there are three sets of bulbs to be sequenced instead of just two, and these three sets are equally out of phase with each other, in Figure below.

Phase sequence: 1-2-3: bulbs appear to move left to right.

If the lighting sequence is 1-2-3 (the sequence shown in Figure above), the bulbs will appear to “move” from left to right. Now imagine this blinking string of bulbs arranged into a circle as in Figure below.

Circular arrangement; bulbs appear to rotate clockwise.

Now the lights in Figure above appear to be “moving” in a clockwise direction because they are arranged around a circle instead of a straight line. It should come as no surprise that the appearance of motion will reverse if the phase sequence of the bulbs is reversed. The blinking pattern will either appear to move clockwise or counter-clockwise depending on the phase sequence. This is analogous to a three-phase AC motor with three sets of windings energized by voltage sources of three different phase shifts in Figure below.

Three-phase AC motor: A phase sequence of 1-2-3 spins the magnet clockwise, 3-2-1 spins the magnet counterclockwise.

With phase shifts of less than 180° we get true rotation of the magnetic field. With single-phase motors, the rotating magnetic field necessary for self-starting must be created by way of capacitive phase shift. With polyphase motors, the necessary phase shifts are there already. Plus, the direction of shaft rotation for polyphase motors is very easily reversed: just swap any two “hot” wires going to the motor, and it will run in the opposite direction!

Review

• AC “induction” and “synchronous” motors work by having a rotating magnet follow the alternating magnetic fields produced by stationary wire windings.
• Single-phase AC motors of this type need help to get started spinning in a particular direction.
• By introducing a phase shift of less than 180° to the magnetic fields in such a motor, a definite direction of shaft rotation can be established.
• Single-phase induction motors often use an auxiliary winding connected in series with a capacitor to create the necessary phase shift.
• Polyphase motors don’t need such measures; their direction of rotation is fixed by the phase sequence of the voltage they’re powered by.
• Swapping any two “hot” wires on a polyphase AC motor will reverse its phase sequence, thus reversing its shaft rotation.
7.05: Three-phase Y and Delta Configurations
Initially, we explored the idea of three-phase power systems by connecting three voltage sources together in what is commonly known as the “Y” (or “star”) configuration. This configuration of voltage sources is characterized by a common connection point joining one side of each source. (Figure below)

Three-phase “Y” connection has three voltage sources connected to a common point.

If we draw a circuit showing each voltage source to be a coil of wire (alternator or transformer winding) and do some slight rearranging, the “Y” configuration becomes more obvious in Figure below.

Three-phase, four-wire “Y” connection uses a “common” fourth wire.

The three conductors leading away from the voltage sources (windings) toward a load are typically called lines, while the windings themselves are typically called phases. In a Y-connected system, there may or may not (Figure below) be a neutral wire attached at the junction point in the middle, although it certainly helps alleviate potential problems should one element of a three-phase load fail open, as discussed earlier.

Three-phase, three-wire “Y” connection does not use the neutral wire.

When we measure voltage and current in three-phase systems, we need to be specific as to where we’re measuring. Line voltage refers to the amount of voltage measured between any two line conductors in a balanced three-phase system. With the above circuit, the line voltage is roughly 208 volts. Phase voltage refers to the voltage measured across any one component (source winding or load impedance) in a balanced three-phase source or load. For the circuit shown above, the phase voltage is 120 volts. The terms line current and phase current follow the same logic: the former referring to current through any one line conductor, and the latter to current through any one component.

Y-connected sources and loads always have line voltages greater than phase voltages, and line currents equal to phase currents. If the Y-connected source or load is balanced, the line voltage will be equal to the phase voltage times the square root of 3:

E(line) = √3 × E(phase)
I(line) = I(phase)

However, the “Y” configuration is not the only valid one for connecting three-phase voltage source or load elements together. Another configuration is known as the “Delta,” for its geometric resemblance to the Greek letter of the same name (Δ). Take close notice of the polarity for each winding in Figure below.

Three-phase, three-wire Δ connection has no common.

At first glance it seems as though three voltage sources like this would create a short-circuit, electrons flowing around the triangle with nothing but the internal impedance of the windings to hold them back. Due to the phase angles of these three voltage sources, however, this is not the case. One quick check of this is to use Kirchhoff’s Voltage Law to see if the three voltages around the loop add up to zero. If they do, then there will be no voltage available to push current around and around that loop, and consequently, there will be no circulating current. Starting with the top winding and progressing counter-clockwise, our KVL expression looks something like this:

(120 V ∠ 0°) + (120 V ∠ 240°) + (120 V ∠ 120°) = 0

Indeed, if we add these three vector quantities together, they do add up to zero. Another way to verify the fact that these three voltage sources can be connected together in a loop without resulting in circulating currents is to open up the loop at one junction point and calculate the voltage across the break: (Figure below)

Voltage across open Δ should be zero.
Starting with the right winding (120 V ∠ 120°) and progressing counter-clockwise, our KVL equation looks like this:

E(break) = (120 V ∠ 120°) + (120 V ∠ 0°) + (120 V ∠ 240°) = 0

Sure enough, there will be zero voltage across the break, telling us that no current will circulate within the triangular loop of windings when that connection is made complete.

Having established that a Δ-connected three-phase voltage source will not burn itself to a crisp due to circulating currents, we turn to its practical use as a source of power in three-phase circuits. Because each pair of line conductors is connected directly across a single winding in a Δ circuit, the line voltage will be equal to the phase voltage. Conversely, because each line conductor attaches at a node between two windings, the line current will be the vector sum of the two joining phase currents. Not surprisingly, the resulting equations for a Δ configuration are as follows:

E(line) = E(phase)
I(line) = √3 × I(phase)

Let’s see how this works in an example circuit: (Figure below)

The load on the Δ source is wired in a Δ.

With each load resistance receiving 120 volts from its respective phase winding at the source, the current in each phase of this circuit will be 83.33 amps:

I(phase) = P / E(phase) = 10 kW / 120 V = 83.33 A
I(line) = √3 × I(phase) = √3 × 83.33 A = 144.34 A

So each line current in this three-phase power system is equal to 144.34 amps, which is substantially more than the line currents in the Y-connected system we looked at earlier. One might wonder if we’ve lost all the advantages of three-phase power here, given the fact that we have such greater conductor currents, necessitating thicker, more costly wire. The answer is no. Although this circuit would require three number 1 gauge copper conductors (at 1000 feet of distance between source and load, this equates to a little over 750 pounds of copper for the whole system), it is still less than the 1000+ pounds of copper required for a single-phase system delivering the same power (30 kW) at the same voltage (120 volts conductor-to-conductor).

One distinct advantage of a Δ-connected system is its lack of a neutral wire. With a Y-connected system, a neutral wire was needed in case one of the phase loads were to fail open (or be turned off), in order to keep the phase voltages at the load from changing. This is not necessary (or even possible!) in a Δ-connected circuit. With each load phase element directly connected across a respective source phase winding, the phase voltage will be constant regardless of open failures in the load elements.

Perhaps the greatest advantage of the Δ-connected source is its fault tolerance. It is possible for one of the windings in a Δ-connected three-phase source to fail open (Figure below) without affecting load voltage or current!

Even with a source winding failure, the line voltage is still 120 V, and load phase voltage is still 120 V. The only difference is extra current in the remaining functional source windings.

The only consequence of a source winding failing open for a Δ-connected source is increased phase current in the remaining windings. Compare this fault tolerance with a Y-connected system suffering an open source winding in Figure below.

Open “Y” source winding halves the voltage on two loads of a Δ-connected load.

With a Δ-connected load, two of the resistances suffer reduced voltage while one remains at the original line voltage, 208. A Y-connected load suffers an even worse fate (Figure below) with the same winding failure in a Y-connected source.

Open source winding of a “Y-Y” system halves the voltage on two loads, and loses one load entirely.
In this case, two load resistances suffer reduced voltage while the third loses supply voltage completely! For this reason, Δ-connected sources are preferred for reliability. However, if dual voltages are needed (e.g. 120/208) or preferred for lower line currents, Y-connected systems are the configuration of choice.

Review

• The conductors connected to the three points of a three-phase source or load are called lines.
• The three components comprising a three-phase source or load are called phases.
• Line voltage is the voltage measured between any two lines in a three-phase circuit.
• Phase voltage is the voltage measured across a single component in a three-phase source or load.
• Line current is the current through any one line between a three-phase source and load.
• Phase current is the current through any one component comprising a three-phase source or load.
• In balanced “Y” circuits, line voltage is equal to phase voltage times the square root of 3, while line current is equal to phase current.
• In balanced Δ circuits, line voltage is equal to phase voltage, while line current is equal to phase current times the square root of 3.
• Δ-connected three-phase voltage sources give greater reliability in the event of winding failure than Y-connected sources. However, Y-connected sources can deliver the same amount of power with less line current than Δ-connected sources.
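As a quick numerical check of these Δ relationships, the following short simulation is my own construction (it does not appear in the original text). It drives a 1.44 Ω Δ-connected load from a 120 volt Δ-connected source; the tiny 1 µΩ resistors break up what would otherwise be an illegal loop of ideal voltage sources, and the zero-volt sources exist only to give SPICE places to measure line current:

delta source - delta load check
* delta-connected 120 V source (corners at nodes 0, 2, 3)
v1 2 11 ac 120 0
r1 11 0 1u
v2 3 12 ac 120 240
r2 12 2 1u
v3 0 13 ac 120 120
r3 13 3 1u
* zero-volt sources serve as ammeters in the three lines
vam1 0 6 ac 0
vam2 2 7 ac 0
vam3 3 8 ac 0
* delta-connected load, 1.44 ohms per phase (10 kW at 120 V)
rload1 6 7 1.44
rload2 7 8 1.44
rload3 8 6 1.44
.ac lin 1 60 60
* load phase voltages: expect 120 V each
.print ac v(6,7) v(7,8) v(8,6)
* line currents: expect sqrt(3) x 83.33, about 144.3 A
* (current-through-source syntax may vary slightly between SPICE flavors)
.print ac i(vam1) i(vam2) i(vam3)
* source phase (winding) currents: expect 83.33 A each
.print ac i(v1) i(v2) i(v3)
.end

The expected readings follow directly from the equations above: E(line) = E(phase) = 120 V, I(phase) = 83.33 A, and I(line) = √3 × 83.33 A ≈ 144.3 A.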
7.06: Three-phase Transformer Circuits
Since three-phase is used so often for power distribution systems, it makes sense that we would need three-phase transformers to be able to step voltages up or down. This is only partially true, as regular single-phase transformers can be ganged together to transform power between two three-phase systems in a variety of configurations, eliminating the requirement for a special three-phase transformer. However, special three-phase transformers are built for those tasks and are able to perform with less material requirement, less size, and less weight than their modular counterparts.

Three-Phase Transformer Windings and Connections

A three-phase transformer is made of three sets of primary and secondary windings, each set wound around one leg of an iron core assembly. Essentially it looks like three single-phase transformers sharing a joined core, as in Figure below.

Three-phase transformer core has three sets of windings.

Those sets of primary and secondary windings will be connected in either Δ or Y configurations to form a complete unit. The various combinations of ways that these windings can be connected together will be the focus of this section. Whether the winding sets share a common core assembly or each winding pair is a separate transformer, the winding connection options are the same (Primary - Secondary):

• Y - Y
• Y - Δ
• Δ - Y
• Δ - Δ

The reasons for choosing a Y or Δ configuration for transformer winding connections are the same as for any other three-phase application: Y connections provide the opportunity for multiple voltages, while Δ connections enjoy a higher level of reliability (if one winding fails open, the other two can still maintain full line voltages to the load).

Probably the most important aspect of connecting three sets of primary and secondary windings together to form a three-phase transformer bank is paying attention to proper winding phasing (the dots used to denote “polarity” of windings). Remember the proper phase relationships between the phase windings of Δ and Y: (Figure below)

(Y) The center point of the “Y” must tie either all the “-” or all the “+” winding points together.
(Δ) The winding polarities must stack together in a complementary manner (+ to -).

Getting this phasing correct when the windings aren’t shown in regular Y or Δ configuration can be tricky. Let me illustrate, starting with Figure below.

Inputs A1, A2, A3 may be wired either “Δ” or “Y”, as may outputs B1, B2, B3.

Phase Wiring for “Y-Y” Transformer

Three individual transformers are to be connected together to transform power from one three-phase system to another. First, I’ll show the wiring connections for a Y-Y configuration: Figure below

Phase wiring for “Y-Y” transformer.

Note in Figure above how all the winding ends marked with dots are connected to their respective phases A, B, and C, while the non-dot ends are connected together to form the centers of each “Y”. Having both primary and secondary winding sets connected in “Y” formations allows for the use of neutral conductors (N1 and N2) in each power system.

Phase Wiring for “Y-Δ” Transformer

Now, we’ll take a look at a Y-Δ configuration: (Figure below)

Phase wiring for “Y-Δ” transformer.

Note how the secondary windings (bottom set, Figure above) are connected in a chain, the “dot” side of one winding connected to the “non-dot” side of the next, forming the Δ loop. At every connection point between pairs of windings, a connection is made to a line of the second power system (A, B, and C).
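To make these voltage relationships concrete, here is a brief worked example of my own (the numbers are not from the original text), assuming 1:1 turns ratios and the 120/208 volt Y system developed earlier in this chapter:

For a Y-Y bank: each primary winding sees the phase voltage, 208 V / √3 = 120 V, so each secondary winding produces 120 V, and the secondary line voltage is √3 × 120 V = 208 V, with neutrals available on both sides.

For a Y-Δ bank with the same 1:1 windings: each secondary winding still produces 120 V, but because the secondary is a Δ, its line voltage equals its winding voltage, 120 V. In other words, a Y-Δ bank steps the line voltage down by a factor of √3 before any turns ratio is even considered.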
Phase Wiring for “Δ-Y” Transformer

Now, let’s examine a Δ-Y system in Figure below.

Phase wiring for “Δ-Y” transformer.

Such a configuration (Figure above) would allow for the provision of multiple voltages (line-to-line or line-to-neutral) in the second power system, from a source power system having no neutral.

Phase Wiring for “Δ-Δ” Transformer

And finally, we turn to the Δ-Δ configuration: (Figure below)

Phase wiring for “Δ-Δ” transformer.

When there is no need for a neutral conductor in the secondary power system, Δ-Δ connection schemes (Figure above) are preferred because of the inherent reliability of the Δ configuration.

Phase Wiring for “V” or “open-Δ” Transformer

Considering that a Δ configuration can operate satisfactorily missing one winding, some power system designers choose to create a three-phase transformer bank with only two transformers, representing a Δ-Δ configuration with a missing winding in both the primary and secondary sides: (Figure below)

“V” or “open-Δ” provides 3-φ power with only two transformers.

This configuration is called “V” or “Open-Δ.” Of course, each of the two transformers has to be oversized to handle the same amount of power as three in a standard Δ configuration, but the overall size, weight, and cost advantages are often worth it. Bear in mind, however, that with one winding set missing from the Δ shape, this system no longer provides the fault tolerance of a normal Δ-Δ system. If one of the two transformers were to fail, the load voltage and current would definitely be affected.

The following photograph (Figure below) shows a bank of step-up transformers at the Grand Coulee hydroelectric dam in Washington state. Several transformers (green in color) may be seen from this vantage point, and they are grouped in threes: three transformers per hydroelectric generator, wired together in some form of three-phase configuration. The photograph doesn’t reveal the primary winding connections, but it appears the secondaries are connected in a Y configuration, given that there is only one large high-voltage insulator protruding from each transformer. This suggests the other side of each transformer’s secondary winding is at or near ground potential, which could only be true in a Y system. The building to the left is the powerhouse, where the generators and turbines are housed. On the right, the sloping concrete wall is the downstream face of the dam.
7.07: Harmonics in Polyphase Power Systems
In the chapter on mixed-frequency signals, we explored the concept of harmonics in AC systems: frequencies that are integer multiples of the fundamental source frequency. With AC power systems, where the source voltage waveform coming from an AC generator (alternator) is supposed to be a single-frequency sine wave, undistorted, there should be no harmonic content . . . ideally. This would be true were it not for nonlinear components. Nonlinear components draw current disproportionately with respect to the source voltage, causing non-sinusoidal current waveforms. Examples of nonlinear components include gas-discharge lamps, semiconductor power-control devices (diodes, transistors, SCRs, TRIACs), transformers (primary winding magnetization current is usually non-sinusoidal due to the B/H saturation curve of the core), and electric motors (again, when magnetic fields within the motor’s core operate near saturation levels). Even incandescent lamps generate slightly nonsinusoidal currents, as the filament resistance changes throughout the cycle due to rapid fluctuations in temperature.

As we learned in the mixed-frequency chapter, any distortion of an otherwise sine-wave shaped waveform constitutes the presence of harmonic frequencies. When the nonsinusoidal waveform in question is symmetrical above and below its average centerline, the harmonic frequencies will be odd integer multiples of the fundamental source frequency only, with no even integer multiples. (Figure below) Most nonlinear loads produce current waveforms like this, and so even-numbered harmonics (2nd, 4th, 6th, 8th, 10th, 12th, etc.) are absent or only minimally present in most AC power systems.

Examples of symmetrical waveforms—odd harmonics only.

Examples of nonsymmetrical waveforms with even harmonics present are shown for reference in Figure below.

Examples of nonsymmetrical waveforms—even harmonics present.

Even though half of the possible harmonic frequencies are eliminated by the typically symmetrical distortion of nonlinear loads, the odd harmonics can still cause problems. Some of these problems are general to all power systems, single-phase or otherwise. Transformer overheating due to eddy current losses, for example, can occur in any AC power system where there is significant harmonic content. However, there are some problems caused by harmonic currents that are specific to polyphase power systems, and it is these problems to which this section is specifically devoted.

It is helpful to be able to simulate nonlinear loads in SPICE so as to avoid a lot of complex mathematics and obtain a more intuitive understanding of harmonic effects. First, we’ll begin our simulation with a very simple AC circuit: a single sine-wave voltage source with a purely linear load and all associated resistances: (Figure below)

SPICE circuit with single sine-wave source.

The Rsource and Rline resistances in this circuit do more than just mimic the real world: they also provide convenient shunt resistances for measuring currents in the SPICE simulation: by reading voltage across a 1 Ω resistance, you obtain a direct indication of current through it, since E = IR. A SPICE simulation of this circuit (SPICE listing: “linear load simulation”) with Fourier analysis on the voltage measured across Rline should show us the harmonic content of this circuit’s line current. Being completely linear in nature, we should expect no harmonics other than the 1st (fundamental) of 60 Hz, assuming a 60 Hz source.
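The “linear load simulation” listing is not reproduced in this text. The following reconstruction follows the circuit description above; the transient analysis step and stop times are my assumptions:

linear load simulation
vsource 1 0 sin(0 120 60 0 0)
rsource 1 2 1
rline 2 3 1
rload 3 0 1k
* transient analysis spanning many 60 Hz cycles
.options itl5=0
.tran 0.5m 30m 0 1u
* the .plot is included to satisfy a quirk of SPICE's Fourier
* transform function, as explained in the text below
.plot tran v(2,3)
* Fourier analysis of line current, sensed as voltage across rline
.four 60 v(2,3)
.end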
See SPICE output “Fourier components of transient response v(2,3)” and Figure below.

Frequency domain plot of single frequency component. See SPICE listing: “linear load simulation”.

A .plot command appears in the SPICE netlist, and normally this would result in a sine-wave graph output. In this case, however, I’ve purposely omitted the waveform display for brevity’s sake—the .plot command is in the netlist simply to satisfy a quirk of SPICE’s Fourier transform function. No discrete Fourier transform is perfect, and so we see very small harmonic currents indicated (in the pico-amp range!) for all frequencies up to the 9th harmonic (in the table), which is as far as SPICE goes in performing Fourier analysis. We show 0.1198 amps (1.198E-01) for the “Fourier component” of the 1st harmonic, or the fundamental frequency, which is our expected load current: about 120 mA, given a source voltage of 120 volts and a load resistance of 1 kΩ.

Next, I’d like to simulate a nonlinear load so as to generate harmonic currents. This can be done in two fundamentally different ways. One way is to design a load using nonlinear components such as diodes or other semiconductor devices which are easy to simulate with SPICE. Another is to add some AC current sources in parallel with the load resistor. The latter method is often preferred by engineers for simulating harmonics, since current sources of known value lend themselves better to mathematical network analysis than components with highly complex response characteristics. Since we’re letting SPICE do all the math work, the complexity of a semiconductor component would cause no trouble for us, but since current sources can be fine-tuned to produce any arbitrary amount of current (a convenient feature), I’ll choose the latter approach, shown in Figure below and SPICE listing: “Nonlinear load simulation”.

SPICE circuit: 60 Hz source with 3rd harmonic added.

In this circuit, we have a current source of 50 mA magnitude and a frequency of 180 Hz, which is three times the source frequency of 60 Hz. Connected in parallel with the 1 kΩ load resistor, its current will add with the resistor’s to make a nonsinusoidal total line current. I’ll show the waveform plot in Figure below just so you can see the effects of this 3rd-harmonic current on the total current, which would ordinarily be a plain sine wave.

SPICE time-domain plot showing sum of 60 Hz source and 3rd harmonic of 180 Hz.

SPICE Fourier plot showing 60 Hz source and 3rd harmonic of 180 Hz.

In the Fourier analysis (see Figure above and “Fourier components of transient response v(2,3)”), the mixed frequencies are unmixed and presented separately. Here we see the same 0.1198 amps of 60 Hz (fundamental) current as we did in the first simulation, but appearing in the 3rd harmonic row we see 49.9 mA: our 50 mA, 180 Hz current source at work. Why don’t we see the entire 50 mA through the line? Because that current source is connected across the 1 kΩ load resistor, so some of its current is shunted through the load and never goes through the line back to the source. It’s an inevitable consequence of this type of simulation, where one part of the load is “normal” (a resistor) and the other part is imitated by a current source.

If we were to add more current sources to the “load,” we would see further distortion of the line current waveform from the ideal sine-wave shape, and each of those harmonic currents would appear in the Fourier analysis breakdown. See Figure below and SPICE listing: “Nonlinear load simulation”.
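That expanded simulation can be reconstructed as follows (a sketch under the same assumptions as before; the 50 mA source values follow the text, and removing the 5th, 7th, and 9th sources recovers the single 180 Hz version discussed above):

Nonlinear load simulation
vsource 1 0 sin(0 120 60 0 0)
rsource 1 2 1
rline 2 3 1
rload 3 0 1k
* 50 mA harmonic current sources in parallel with the load:
* 3rd (180 Hz), 5th (300 Hz), 7th (420 Hz), 9th (540 Hz)
i3har 3 0 sin(0 50m 180 0 0)
i5har 3 0 sin(0 50m 300 0 0)
i7har 3 0 sin(0 50m 420 0 0)
i9har 3 0 sin(0 50m 540 0 0)
.options itl5=0
.tran 0.5m 30m 0 1u
.plot tran v(2,3)
.four 60 v(2,3)
.end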
Nonlinear load: 1st, 3rd, 5th, 7th, and 9th harmonics present.

Fourier analysis: “Fourier components of transient response v(2,3)”.

As you can see from the Fourier analysis (Figure above), every harmonic current source is equally represented in the line current, at 49.9 mA each. So far, this is just a single-phase power system simulation. Things get more interesting when we make it a three-phase simulation. Two Fourier analyses will be performed: one for the voltage across a line resistor, and one for the voltage across the neutral resistor. As before, reading voltages across fixed resistances of 1 Ω each gives direct indications of current through those resistors. See Figure below and SPICE listing “Y-Y source/load 4-wire system with harmonics” (a reconstruction of this listing appears at the end of this passage).

SPICE circuit: analysis of “line current” and “neutral current”, Y-Y source/load 4-wire system with harmonics.

Fourier analysis of line current:

Fourier analysis of line current in balanced Y-Y system

Fourier analysis of neutral current:

Fourier analysis of neutral current shows harmonics other than none! Compare to line current in Figure above.

This is a balanced Y-Y power system, each phase identical to the single-phase AC system simulated earlier. Consequently, it should come as no surprise that the Fourier analysis for line current in one phase of the 3-phase system is nearly identical to the Fourier analysis for line current in the single-phase system: a fundamental (60 Hz) line current of 0.1198 amps, and odd harmonic currents of approximately 50 mA each. See Figure above and Fourier analysis: “Fourier components of transient response v(2,8)”.

What should be surprising here is the analysis for the neutral conductor’s current, as determined by the voltage drop across the Rneutral resistor between SPICE nodes 0 and 7. (Figure above) In a balanced 3-phase Y load, we would expect the neutral current to be zero. The phase currents—each of which by itself would go through the neutral wire back to the supplying phase on the source Y—should cancel each other with regard to the neutral conductor, because they’re all the same magnitude and all shifted 120° apart. In a system with no harmonic currents, this is what happens, leaving zero current through the neutral conductor.

However, we cannot say the same for harmonic currents in the same system. Note that the fundamental frequency (60 Hz, or the 1st harmonic) current is virtually absent from the neutral conductor. Our Fourier analysis shows only 0.4337 µA of 1st harmonic when reading voltage across Rneutral. The same may be said about the 5th and 7th harmonics, both of those currents having negligible magnitude. In contrast, the 3rd and 9th harmonics are strongly represented within the neutral conductor, with 149.3 mA (1.493E-01 volts across 1 Ω) each! This is very nearly 150 mA, or three times the current sources’ values, individually. With three sources per harmonic frequency in the load, it appears our 3rd and 9th harmonic currents in each phase are adding to form the neutral current. See Fourier analysis: “Fourier components of transient response v(0,7)”.

This is exactly what’s happening, though it might not be apparent why this is so. The key to understanding this is made clear in a time-domain graph of phase currents. Examine this plot of balanced phase currents over time, with a phase sequence of 1-2-3. (Figure below)

Phase sequence 1-2-3-1-2-3-1-2-3 of equally spaced waves.
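Here is the promised reconstruction of the Y-Y simulation. It is a sketch, not the original listing, but it is written to agree with the node pairs quoted in the text (nodes 2 and 8 for line current, 0 and 7 for neutral current) and with the time-delay technique described later in this chapter, where 5.55555 ms and 11.1111 ms delays stand in for 120° and 240° phase shifts at 60 Hz:

Y-Y source/load 4-wire system with harmonics
* phase 1 of source: 120 V at 0 degrees
vsource1 1 0 sin(0 120 60 0 0)
rsource1 1 2 1
* phase 2 of source: 120 V delayed 1/3 cycle (120 degrees)
vsource2 3 0 sin(0 120 60 5.55555m 0)
rsource2 3 4 1
* phase 3 of source: 120 V delayed 2/3 cycle (240 degrees)
vsource3 5 0 sin(0 120 60 11.1111m 0)
rsource3 5 6 1
* line and neutral wire resistances, 1 ohm each
rline1 2 8 1
rline2 4 9 1
rline3 6 10 1
rneutral 0 7 1
* phase 1 of load: 1 kohm resistor plus harmonic current sources
rload1 8 7 1k
i3har1 8 7 sin(0 50m 180 0 0)
i5har1 8 7 sin(0 50m 300 0 0)
i7har1 8 7 sin(0 50m 420 0 0)
i9har1 8 7 sin(0 50m 540 0 0)
* phase 2 of load
rload2 9 7 1k
i3har2 9 7 sin(0 50m 180 5.55555m 0)
i5har2 9 7 sin(0 50m 300 5.55555m 0)
i7har2 9 7 sin(0 50m 420 5.55555m 0)
i9har2 9 7 sin(0 50m 540 5.55555m 0)
* phase 3 of load
rload3 10 7 1k
i3har3 10 7 sin(0 50m 180 11.1111m 0)
i5har3 10 7 sin(0 50m 300 11.1111m 0)
i7har3 10 7 sin(0 50m 420 11.1111m 0)
i9har3 10 7 sin(0 50m 540 11.1111m 0)
* analysis starts after all delayed sources have engaged
.options itl5=0
.tran 0.5m 100m 16m 1u
.plot tran v(2,8) v(0,7)
.four 60 v(2,8) v(0,7)
.end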
Returning to the plot of phase currents: with the three fundamental waveforms equally shifted across the time axis of the graph, it is easy to see how they would cancel each other to give a resultant current of zero in the neutral conductor. Let’s consider, though, what a 3rd harmonic waveform for phase 1 would look like superimposed on the graph, in Figure below.

Third harmonic waveform for phase-1 superimposed on three-phase fundamental waveforms.

Observe how this harmonic waveform has the same phase relationship to the 2nd and 3rd fundamental waveforms as it does with the 1st: in each positive half-cycle of any of the fundamental waveforms, you will find exactly two positive half-cycles and one negative half-cycle of the harmonic waveform. What this means is that the 3rd-harmonic waveforms of three 120° phase-shifted fundamental-frequency waveforms are actually in phase with each other. The phase shift figure of 120° generally assumed in three-phase AC systems applies only to the fundamental frequencies, not to their harmonic multiples!

If we were to plot all three 3rd-harmonic waveforms on the same graph, we would see them precisely overlap and appear as a single, unified waveform (shown in bold in Figure below).

Third harmonics for phases 1, 2, 3 all coincide when superimposed on the fundamental three-phase waveforms.

For the more mathematically inclined, this principle may be expressed symbolically. Suppose that A represents one waveform and B another, both at the same frequency, but shifted 120° from each other in terms of phase. Let’s call the 3rd harmonic of each waveform A’ and B’, respectively. The phase shift between A’ and B’ is not 120° (that is the phase shift between A and B), but 3 times that, because the A’ and B’ waveforms alternate three times as fast as A and B. The shift between waveforms is only accurately expressed in terms of phase angle when the same angular velocity is assumed. When relating waveforms of different frequency, the most accurate way to represent phase shift is in terms of time; and the time-shift between A’ and B’ is equivalent to 120° at a frequency three times lower, or 360° at the frequency of A’ and B’. A phase shift of 360° is the same as a phase shift of 0°, which is to say, no phase shift at all. Thus, A’ and B’ must be in phase with each other:

Phase shift between A’ and B’ = 3 × 120° = 360° = 0°

This characteristic of the 3rd harmonic in a three-phase system also holds true for any integer multiples of the 3rd harmonic. So, not only are the 3rd harmonic waveforms of each fundamental waveform in phase with each other, but so are the 6th harmonics, the 9th harmonics, the 12th harmonics, the 15th harmonics, the 18th harmonics, the 21st harmonics, and so on. Since only odd harmonics appear in systems where waveform distortion is symmetrical about the centerline—and most nonlinear loads create symmetrical distortion—even-numbered multiples of the 3rd harmonic (6th, 12th, 18th, etc.) are generally not significant, leaving only the odd-numbered multiples (3rd, 9th, 15th, 21st, etc.) to significantly contribute to neutral currents.

In polyphase power systems with some number of phases other than three, this effect occurs with harmonics of the same multiple. For instance, the harmonic currents that add in the neutral conductor of a star-connected 4-phase system, where the phase shift between fundamental waveforms is 90°, would be the 4th, 8th, 12th, 16th, 20th, and so on.

Due to their abundance and significance in three-phase power systems, the 3rd harmonic and its multiples have their own special name: triplen harmonics.
All triplen harmonics add with each other in the neutral conductor of a 4-wire Y-connected load. In power systems containing substantial nonlinear loading, the triplen harmonic currents may be of great enough magnitude to cause neutral conductors to overheat. This is very problematic, as other safety concerns prohibit neutral conductors from having overcurrent protection, and thus there is no provision for automatic interruption of these high currents.

The following illustration shows how triplen harmonic currents created at the load add within the neutral conductor. The symbol “ω” is used to represent angular velocity and is mathematically equivalent to 2πf. So, “ω” represents the fundamental frequency, “3ω” represents the 3rd harmonic, “5ω” represents the 5th harmonic, and so on: (Figure below)

“Y-Y” source/load: triplen harmonic currents add in the neutral conductor.

In an effort to mitigate these additive triplen currents, one might be tempted to remove the neutral wire entirely. If there is no neutral wire in which triplen currents can flow together, then they won’t, right? Unfortunately, doing so just causes a different problem: the load’s “Y” center-point will no longer be at the same potential as the source’s, meaning that each phase of the load will receive a different voltage than what is produced by the source. We’ll re-run the last SPICE simulation without the 1 Ω Rneutral resistor and see what happens:

Fourier analysis of line current:

Fourier analysis of voltage between the two “Y” center-points:

Fourier analysis of load phase voltage:

Strange things are happening, indeed. First, we see that the triplen harmonic currents (3rd and 9th) all but disappear in the lines connecting load to source. The 5th and 7th harmonic currents are present at their normal levels (approximately 50 mA), but the 3rd and 9th harmonic currents are of negligible magnitude. Second, we see that there is substantial harmonic voltage between the two “Y” center-points, between which the neutral conductor used to connect. According to SPICE, there is 50 volts of both 3rd and 9th harmonic frequency between these two points, which is definitely not normal in a linear (no harmonics), balanced Y system. Finally, the voltage as measured across one of the load’s phases (between nodes 8 and 7 in the SPICE analysis) likewise shows strong triplen harmonic voltages of 50 volts each.

Figure below is a graphical summary of the aforementioned effects.

Three-wire “Y-Y” (no neutral) system: Triplen voltages appear between “Y” centers. Triplen voltages appear across load phases. Non-triplen currents appear in line conductors.

In summary, removal of the neutral conductor leads to a “hot” center-point on the load “Y”, and also to harmonic load phase voltages of equal magnitude, all comprised of triplen frequencies. In the previous simulation, where we had a 4-wire, Y-connected system, the undesirable effect from harmonics was excessive neutral current, but at least each phase of the load received voltage nearly free of harmonics.

Since removing the neutral wire didn’t seem to work in eliminating the problems caused by harmonics, perhaps switching to a Δ configuration will. Let’s try a Δ source instead of a Y, keeping the load in its present Y configuration, and see what happens. The measured parameters will be line current (voltage across Rline, nodes 0 and 8), load phase voltage (nodes 8 and 7), and source phase current (voltage across Rsource, nodes 1 and 2).
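The original “Delta-Y source/load with harmonics” listing is not reproduced in this text. The sketch below is my reconstruction: it is written to agree with the node pairs quoted above (0 and 8 for Rline, 8 and 7 for a load phase, 1 and 2 for Rsource) and with the time-delay figures explained in the note that follows, but the exact node layout of the original may differ:

Delta-Y source/load with harmonics
* delta-connected source: three 207.846 V windings in a loop,
* each with a 1 ohm internal resistance; corners at nodes 0, 2, 4
vsource1 0 1 sin(0 207.846 60 0 0)
rsource1 1 2 1
vsource2 2 3 sin(0 207.846 60 5.55555m 0)
rsource2 3 4 1
vsource3 4 5 sin(0 207.846 60 11.1111m 0)
rsource3 5 0 1
* line resistances from the delta corners to the load
rline1 0 8 1
rline2 2 9 1
rline3 4 10 1
* Y-connected load, center node 7
rload1 8 7 1k
rload2 9 7 1k
rload3 10 7 1k
* harmonic current sources across each load phase; the delays
* correspond to -30, 90 and 210 degrees at 60 Hz (see note below)
i3har1 8 7 sin(0 50m 180 15.2777m 0)
i5har1 8 7 sin(0 50m 300 15.2777m 0)
i7har1 8 7 sin(0 50m 420 15.2777m 0)
i9har1 8 7 sin(0 50m 540 15.2777m 0)
i3har2 9 7 sin(0 50m 180 4.16666m 0)
i5har2 9 7 sin(0 50m 300 4.16666m 0)
i7har2 9 7 sin(0 50m 420 4.16666m 0)
i9har2 9 7 sin(0 50m 540 4.16666m 0)
i3har3 10 7 sin(0 50m 180 9.72222m 0)
i5har3 10 7 sin(0 50m 300 9.72222m 0)
i7har3 10 7 sin(0 50m 420 9.72222m 0)
i9har3 10 7 sin(0 50m 540 9.72222m 0)
* hold off analysis until 16 ms, after all delayed sources engage
.options itl5=0
.tran 0.5m 100m 16m 1u
.plot tran v(0,8) v(8,7) v(1,2)
.four 60 v(0,8) v(8,7) v(1,2)
.end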
Delta-Y source/load with harmonics. (Figure below)

Note: the following paragraph is for those curious readers who follow every detail of my SPICE netlists. If you just want to find out what happens in the circuit, skip this paragraph!

When simulating circuits having AC sources of differing frequency and differing phase, the only way to do it in SPICE is to set up the sources with a delay time or phase offset specified in seconds. Thus, the 0° source has these five specifying figures: “(0 207.846 60 0 0)”, which means 0 volts DC offset, 207.846 volts peak amplitude (120 times the square root of three, to ensure the load phase voltages remain at 120 volts each), 60 Hz, 0 time delay, and 0 damping factor. The 120° phase-shifted source has these figures: “(0 207.846 60 5.55555m 0)”, all the same as the first except for the time delay factor of 5.55555 milliseconds, or 1/3 of the full period of 16.6667 milliseconds for a 60 Hz waveform. The 240° source must be time-delayed twice that amount, equivalent to a fraction of 240/360 of 16.6667 milliseconds, or 11.1111 milliseconds.

This is for the Δ-connected source. The Y-connected load, on the other hand, requires a different set of time-delay figures for its harmonic current sources, because the phase voltages in a Y load are not in phase with the phase voltages of a Δ source. If Δ source voltages VAC, VBA, and VCB are referenced at 0°, 120°, and 240°, respectively, then “Y” load voltages VA, VB, and VC will have phase angles of -30°, 90°, and 210°, respectively. This is an intrinsic property of all Δ-Y circuits and not a quirk of SPICE. Therefore, when I specified the delay times for the harmonic sources, I had to set them at 15.2777 milliseconds (-30°, or +330°), 4.16666 milliseconds (90°), and 9.72222 milliseconds (210°).

One final note: when delaying AC sources in SPICE, they don’t “turn on” until their delay time has elapsed, which means any mathematical analysis up to that point in time will be in error. Consequently, I set the .tran transient analysis line to hold off analysis until 16 milliseconds after start, which gives all sources in the netlist time to engage before any analysis takes place.

The result of this analysis is almost as disappointing as the last. (Figure below) Line currents remain unchanged (the only substantial harmonic content being the 5th and 7th harmonics), and load phase voltages remain unchanged as well, with a full 50 volts of triplen harmonic (3rd and 9th) frequencies across each load component. Source phase current is a fraction of the line current, which should come as no surprise. Both 5th and 7th harmonics are represented there, with negligible triplen harmonics:

Fourier analysis of line current:

Fourier analysis of load phase voltage:

Fourier analysis of source phase current:

“Δ-Y” source/load: Triplen voltages appear across load phases. Non-triplen currents appear in line conductors and in source phase windings.

Really, the only advantage of the Δ-Y configuration from the standpoint of harmonics is that there is no longer a center-point at the load posing a shock hazard. Otherwise, the load components receive the same harmonically-rich voltages and the lines see the same currents as in a three-wire Y system.

If we were to reconfigure the system into a Δ-Δ arrangement (Figure below), that should guarantee that each load component receives non-harmonic voltage, since each load phase would be directly connected in parallel with each source phase.
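Such a Δ-Δ netlist might be reconstructed as follows. Again this is a sketch: the node pairs 0-6 (Rline) and 2-1 (Rsource) match the Fourier outputs quoted in the text, while the load node numbering is my assumption. The source windings drop back to 120 volts, since in a Δ-Δ system each load phase parallels a source winding directly:

Delta-Delta source/load with harmonics
* delta-connected 120 V source; corners at nodes 0, 2, 4
vsource1 0 1 sin(0 120 60 0 0)
rsource1 1 2 1
vsource2 2 3 sin(0 120 60 5.55555m 0)
rsource2 3 4 1
vsource3 4 5 sin(0 120 60 11.1111m 0)
rsource3 5 0 1
* lines from the source corners to the load corners
rline1 0 6 1
rline2 2 7 1
rline3 4 8 1
* delta-connected load, one phase per pair of corners
rload1 6 7 1k
rload2 7 8 1k
rload3 8 6 1k
* harmonic current sources in parallel with each load phase,
* time-aligned with the source winding each load phase parallels
i3har1 6 7 sin(0 50m 180 0 0)
i5har1 6 7 sin(0 50m 300 0 0)
i7har1 6 7 sin(0 50m 420 0 0)
i9har1 6 7 sin(0 50m 540 0 0)
i3har2 7 8 sin(0 50m 180 5.55555m 0)
i5har2 7 8 sin(0 50m 300 5.55555m 0)
i7har2 7 8 sin(0 50m 420 5.55555m 0)
i9har2 7 8 sin(0 50m 540 5.55555m 0)
i3har3 8 6 sin(0 50m 180 11.1111m 0)
i5har3 8 6 sin(0 50m 300 11.1111m 0)
i7har3 8 6 sin(0 50m 420 11.1111m 0)
i9har3 8 6 sin(0 50m 540 11.1111m 0)
.options itl5=0
.tran 0.5m 100m 16m 1u
.plot tran v(0,6) v(7,6) v(2,1)
* line current, load phase voltage (node pair assumed), source phase current
.four 60 v(0,6) v(7,6) v(2,1)
.end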
The complete lack of any neutral wires or “center points” in a Δ-Δ system prevents strange voltages or additive currents from occurring. It would seem to be the ideal solution. Let’s simulate and observe, analyzing line current, load phase voltage, and source phase current. See SPICE listing: “Delta-Delta source/load with harmonics”, “Fourier analysis: Fourier components of transient response v(0,6)”, and “Fourier components of transient response v(2,1)”. Delta-Delta source/load with harmonics. Fourier analysis of line current: Fourier analysis of load phase voltage: Fourier analysis of source phase current: As predicted earlier, the load phase voltage is almost a pure sine-wave, with negligible harmonic content, thanks to the direct connection with the source phases in a Δ-Δ system. But what happened to the triplen harmonics? The 3rd and 9th harmonic frequencies don’t appear in any substantial amount in the line current, nor in the load phase voltage, nor in the source phase current! We know that triplen currents exist, because the 3rd and 9th harmonic current sources are intentionally placed in the phases of the load, but where did those currents go? Remember that the triplen harmonics of 120o phase-shifted fundamental frequencies are in phase with each other. Note the directions that the arrows of the current sources within the load phases are pointing, and think about what would happen if the 3rd and 9th harmonic sources were DC sources instead. What we would have is current circulating within the loop formed by the Δ-connected phases. This is where the triplen harmonic currents have gone: they stay within the Δ of the load, never reaching the line conductors or the windings of the source. These results are summarized graphically in the Figure below. Δ-Δ source/load: Load phases receive undistorted sinewave voltages. Triplen currents are confined to circulate within load phases. Non-triplen currents appear in line conductors and in source phase windings. This is a major benefit of the Δ-Δ system configuration: triplen harmonic currents remain confined in whatever set of components creates them, and do not “spread” to other parts of the system. Review • Nonlinear components are those that draw a non-sinusoidal (non-sine-wave) current waveform when energized by a sinusoidal (sine-wave) voltage. Since any distortion of an originally pure sine-wave constitutes harmonic frequencies, we can say that nonlinear components generate harmonic currents. • When the sine-wave distortion is symmetrical above and below the average centerline of the waveform, the only harmonics present will be odd-numbered, not even-numbered. • The 3rd harmonic, and integer multiples of it (6th, 9th, 12th, 15th) are known as triplen harmonics. They are in phase with each other, despite the fact that their respective fundamental waveforms are 120o out of phase with each other. • In a 4-wire Y-Y system, triplen harmonic currents add within the neutral conductor. • Triplen harmonic currents in a Δ-connected set of components circulate within the loop formed by the Δ.
7.08: Harmonic Phase Sequences
In the last section, we saw how the 3rd harmonic and all of its integer multiples (collectively called triplen harmonics) generated by 120o phase-shifted fundamental waveforms are actually in phase with each other. In a 60 Hz three-phase power system, where phases A, B, and C are 120o apart, the third-harmonic multiples of those frequencies (180 Hz) fall perfectly into phase with each other.
This can be thought of in graphical terms (Figure below) and/or in mathematical terms. For the 3rd harmonic, phase A contributes sin(3ωt); phase B, shifted 120° behind A, contributes sin(3(ωt − 120°)) = sin(3ωt − 360°) = sin(3ωt); and phase C contributes sin(3(ωt + 120°)) = sin(3ωt + 360°) = sin(3ωt): Harmonic currents of Phases A, B, C all coincide, that is, no rotation. If we extend the mathematical table to include higher odd-numbered harmonics, we will notice an interesting pattern develop with regard to the rotation or sequence of the harmonic frequencies: harmonics such as the 7th, which “rotate” with the same sequence as the fundamental, are called positive sequence; harmonics such as the 5th, which “rotate” in the opposite sequence from the fundamental, are called negative sequence; and triplen harmonics (3rd and 9th shown in this table), which don’t “rotate” at all because they’re in phase with each other, are called zero sequence. This pattern of positive, zero, and negative sequence repeats indefinitely for all odd-numbered harmonics, lending itself to expression in a table like this:

positive sequence (same rotation as the fundamental): 1st, 7th, 13th, 19th, ...
zero sequence (no rotation): 3rd, 9th, 15th, 21st, ...
negative sequence (opposite rotation): 5th, 11th, 17th, 23rd, ...
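A quick way to check the sequence of any harmonic, sketched here with phase B (whose fundamental lags phase A by 120°) as the example: the nth harmonic of phase B is

sin(n(ωt − 120°)) = sin(nωt − n · 120°)

so each step up in harmonic order rotates phase B’s contribution by an extra −120°. For n = 5 the shift is −600°, equivalent to +120°, so phase B’s 5th harmonic leads phase A’s: the sequence is reversed (negative). For n = 7 the shift is −840°, equivalent to −120°, the same order as the fundamental (positive). For n = 9 the shift is −1080°, equivalent to 0°: zero sequence again. The sequence of any harmonic therefore depends only on the remainder of n divided by 3, which is why the positive-zero-negative pattern repeats indefinitely.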
08: Mixed-Frequency AC Signals
In our study of AC circuits thus far, we’ve explored circuits powered by a single-frequency sine voltage waveform. In many applications of electronics, though, single-frequency signals are the exception rather than the rule. Quite often we may encounter circuits where multiple frequencies of voltage coexist simultaneously. Also, circuit waveforms may be something other than sine-wave shaped, in which case we call them non-sinusoidal waveforms. Additionally, we may encounter situations where DC is mixed with AC: where a waveform is superimposed on a steady (DC) signal. The result of such a mix is a signal varying in intensity, but never changing polarity, or changing polarity asymmetrically (spending more time positive than negative, for example). Since DC does not alternate as AC does, its “frequency” is said to be zero, and any signal containing DC along with a signal of varying intensity (AC) may be rightly called a mixed-frequency signal as well. In any of these cases where there is a mix of frequencies in the same circuit, analysis is more complex than what we’ve seen up to this point. Sometimes mixed-frequency voltage and current signals are created accidentally. This may be the result of unintended connections between circuits (called coupling) made possible by stray capacitance and/or inductance between the conductors of those circuits.
A classic example of coupling phenomenon is seen frequently in industry where DC signal wiring is placed in close proximity to AC power wiring. The nearby presence of high AC voltages and currents may cause “foreign” voltages to be impressed upon the length of the signal wiring. Stray capacitance formed by the electrical insulation separating power conductors from signal conductors may cause voltage (with respect to earth ground) from the power conductors to be impressed upon the signal conductors, while stray inductance formed by parallel runs of wire in conduit may cause current from the power conductors to electromagnetically induce voltage along the signal conductors. The result is a mix of DC and AC at the signal load. The following schematic shows how an AC “noise” source may “couple” to a DC circuit through mutual inductance (Mstray) and capacitance (Cstray) along the length of the conductors. (Figure below) Stray inductance and capacitance couple stray AC into desired DC signal. When stray AC voltages from a “noise” source mix with DC signals conducted along signal wiring, the results are usually undesirable. For this reason, power wiring and low-level signal wiring should always be routed through separate, dedicated metal conduit, and signals should be conducted via 2-conductor “twisted pair” cable rather than through a single wire and ground connection: (Figure below) Shielded twisted pair minimizes noise. The grounded cable shield (a wire braid or metal foil wrapped around the two insulated conductors) isolates both conductors from electrostatic (capacitive) coupling by blocking any external electric fields, while the parallel proximity of the two conductors effectively cancels any electromagnetic (mutually inductive) coupling because any induced noise voltage will be approximately equal in magnitude and opposite in phase along both conductors, canceling each other at the receiving end for a net (differential) noise voltage of almost zero. Polarity marks placed near each inductive portion of signal conductor length show how the induced voltages are phased in such a way as to cancel one another. Coupling may also occur between two sets of conductors carrying AC signals, in which case both signals may become “mixed” with each other: (Figure below) Coupling of AC signals between parallel conductors. Coupling is but one example of how signals of different frequencies may become mixed. Whether it be AC mixed with DC, or two AC signals mixing with each other, signal coupling via stray inductance and capacitance is usually accidental and undesired. In other cases, mixed-frequency signals are the result of intentional design or they may be an intrinsic quality of a signal. It is generally quite easy to create mixed-frequency signal sources. Perhaps the easiest way is to simply connect voltage sources in series: (Figure below) Series connection of voltage sources mixes signals. Some computer communications networks operate on the principle of superimposing high-frequency voltage signals along 60 Hz power-line conductors, so as to convey computer data along existing lengths of power cabling. This technique has been used for years in electric power distribution networks to communicate load data along high-voltage power lines. Certainly these are examples of mixed-frequency AC voltages, under conditions that are deliberately established.
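In SPICE, such a series-connected pair of sources takes only a few lines. A minimal sketch, assuming two 5 volt sources at 60 Hz and 90 Hz and an arbitrary 10 kΩ load:

* two series sources mixing 60 Hz and 90 Hz at one load
v1 1 0 sin(0 5 60 0 0)
v2 2 1 sin(0 5 90 0 0)
rload 2 0 10k
.tran 0.5m 60m
.plot tran v(2,0)
.end

The plotted load voltage v(2,0) is simply the sum of the two source waveforms at every instant in time. In some cases, mixed-frequency signals may be produced by a single voltage source.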
Such is the case with microphones, which convert audio-frequency air pressure waves into corresponding voltage waveforms. The particular mix of frequencies in the voltage signal output by the microphone is dependent on the sound being reproduced. If the sound waves consist of a single, pure note or tone, the voltage waveform will likewise be a sine wave at a single frequency. If the sound wave is a chord or other harmony of several notes, the resulting voltage waveform produced by the microphone will consist of those frequencies mixed together. Very few natural sounds consist of single, pure sine wave vibrations but rather are a mix of different frequency vibrations at different amplitudes. Musical chords are produced by blending one frequency with other frequencies of particular fractional multiples of the first. However, investigating a little further, we find that even a single piano note (produced by a struck string) consists of one predominant frequency mixed with several other frequencies, each frequency a whole-number multiple of the first (called harmonics, while the first frequency is called the fundamental). An illustration of these terms is shown in the Table below with a fundamental frequency of 1000 Hz (an arbitrary figure chosen for this example). For a “base” frequency of 1000 Hz:

1st harmonic (the fundamental): 1000 Hz
2nd harmonic: 2000 Hz
3rd harmonic: 3000 Hz
4th harmonic: 4000 Hz
5th harmonic: 5000 Hz
6th harmonic: 6000 Hz
7th harmonic: 7000 Hz

Sometimes the term “overtone” is used to describe a harmonic frequency produced by a musical instrument. The “first” overtone is the first harmonic frequency greater than the fundamental. If we had an instrument producing the entire range of harmonic frequencies shown in the table above, the first overtone would be 2000 Hz (the 2nd harmonic), while the second overtone would be 3000 Hz (the 3rd harmonic), etc. However, this application of the term “overtone” is specific to particular instruments. It so happens that certain instruments are incapable of producing certain types of harmonic frequencies. For example, an instrument made from a tube that is open on one end and closed on the other (such as a bottle, which produces sound when air is blown across the opening) is incapable of producing even-numbered harmonics. Such an instrument set up to produce a fundamental frequency of 1000 Hz would also produce frequencies of 3000 Hz, 5000 Hz, 7000 Hz, etc., but would not produce 2000 Hz, 4000 Hz, 6000 Hz, or any other even-multiple frequencies of the fundamental. As such, we would say that the first overtone (the first frequency greater than the fundamental) in such an instrument would be 3000 Hz (the 3rd harmonic), while the second overtone would be 5000 Hz (the 5th harmonic), and so on. A pure sine wave (single frequency), being entirely devoid of any harmonics, sounds very “flat” and “featureless” to the human ear. Most musical instruments are incapable of producing sounds this simple. What gives each instrument its distinctive tone is the same phenomenon that gives each person a distinctive voice: the unique blending of harmonic waveforms with each fundamental note, described by the physics of motion for each unique object producing the sound. Brass instruments do not possess the same “harmonic content” as woodwind instruments, and neither produce the same harmonic content as stringed instruments. A distinctive blend of frequencies is what gives a musical instrument its characteristic tone. As anyone who has played guitar can tell you, steel strings have a different sound than nylon strings. Also, the tone produced by a guitar string changes depending on where along its length it is plucked.
These differences in tone, as well, are a result of different harmonic content produced by differences in the mechanical vibrations of an instrument’s parts. All these instruments produce harmonic frequencies (whole-number multiples of the fundamental frequency) when a single note is played, but the relative amplitudes of those harmonic frequencies are different for different instruments. In musical terms, the measure of a tone’s harmonic content is called timbre or color. Musical tones become even more complex when the resonating element of an instrument is a two-dimensional surface rather than a one-dimensional string. Instruments based on the vibration of a string (guitar, piano, banjo, lute, dulcimer, etc.) or of a column of air in a tube (trumpet, flute, clarinet, tuba, pipe organ, etc.) tend to produce sounds composed of a single frequency (the “fundamental”) and a mix of harmonics. Instruments based on the vibration of a flat plate (steel drums, and some types of bells), however, produce a much broader range of frequencies, not limited to whole-number multiples of the fundamental. The result is a distinctive tone that some people find acoustically offensive. As you can see, music provides a rich field of study for mixed frequencies and their effects. Later sections of this chapter will refer to musical instruments as sources of waveforms for analysis in more detail. Review • A sinusoidal waveform is one shaped exactly like a sine wave. • A non-sinusoidal waveform can be anything from a distorted sine-wave shape to something completely different like a square wave. • Mixed-frequency waveforms can be accidentally created, purposely created, or simply exist out of necessity. Most musical tones, for instance, are not composed of a single frequency sine-wave, but are rich blends of different frequencies. • When multiple sine waveforms are mixed together (as is often the case in music), the lowest frequency sine-wave is called the fundamental, and the other sine-waves whose frequencies are whole-number multiples of the fundamental wave are called harmonics. • An overtone is a harmonic produced by a particular device. The “first” overtone is the first frequency greater than the fundamental, while the “second” overtone is the next greater frequency produced. Successive overtones may or may not correspond to incremental harmonics, depending on the device producing the mixed frequencies. Some devices and systems do not permit the establishment of certain harmonics, and so their overtones would only include some (not all) harmonic frequencies.
8.02: Square Wave Signals
It has been found that any repeating, non-sinusoidal waveform can be equated to a combination of DC voltage, sine waves, and/or cosine waves (sine waves with a 90 degree phase shift) at various amplitudes and frequencies. This is true no matter how strange or convoluted the waveform in question may be. So long as it repeats itself regularly over time, it is reducible to this series of sinusoidal waves. In particular, it has been found that square waves are mathematically equivalent to the sum of a sine wave at that same frequency, plus an infinite series of odd-multiple frequency sine waves at diminishing amplitude:

v(t) = (4/π) Vpeak [ sin(ωt) + (1/3)sin(3ωt) + (1/5)sin(5ωt) + (1/7)sin(7ωt) + . . . ]

This truth about waveforms at first may seem too strange to believe. However, if a square wave is actually an infinite series of sine wave harmonics added together, it stands to reason that we should be able to prove this by adding together several sine wave harmonics to produce a close approximation of a square wave. This reasoning is not only sound, but easily demonstrated with SPICE. The circuit we’ll be simulating is nothing more than several sine wave AC voltage sources of the proper amplitudes and frequencies connected together in series. We’ll use SPICE to plot the voltage waveforms across successive additions of voltage sources, like this: (Figure below) A square wave is approximated by the sum of harmonics. In this particular SPICE simulation, I’ve summed the 1st, 3rd, 5th, 7th, and 9th harmonic voltage sources in series for a total of five AC voltage sources. The fundamental frequency is 50 Hz and each harmonic is, of course, an integer multiple of that frequency. The amplitude (voltage) figures are not random numbers; rather, they have been arrived at through the equations shown in the frequency series (the fraction 4/π multiplied by 1, 1/3, 1/5, 1/7, etc. for each of the increasing odd harmonics). I’ll narrate the analysis step by step from here, explaining what it is we’re looking at. In this first plot, we see the fundamental-frequency sine-wave of 50 Hz by itself. It is nothing but a pure sine shape, with no additional harmonic content. This is the kind of waveform produced by an ideal AC power source: (Figure below) Pure 50 Hz sinewave. Next, we see what happens when this clean and simple waveform is combined with the third harmonic (three times 50 Hz, or 150 Hz). Suddenly, it doesn’t look like a clean sine wave any more: (Figure below) Sum of 1st (50 Hz) and 3rd (150 Hz) harmonics approximates a 50 Hz square wave. The rise and fall times between positive and negative cycles are much steeper now, and the crests of the wave are closer to becoming flat like a squarewave. Watch what happens as we add the next odd harmonic frequency: (Figure below) Sum of 1st, 3rd and 5th harmonics approximates square wave. The most noticeable change here is how the crests of the wave have flattened even more. There are several more dips and crests at each end of the wave, but those dips and crests are smaller in amplitude than they were before. Watch again as we add the next odd harmonic waveform to the mix: (Figure below) Sum of 1st, 3rd, 5th, and 7th harmonics approximates square wave. Here we can see the wave becoming flatter at each peak. Finally, adding the 9th harmonic, the fifth sine wave voltage source in our circuit, we obtain this result: (Figure below) Sum of 1st, 3rd, 5th, 7th and 9th harmonics approximates square wave.
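A sketch of the netlist behind this kind of simulation; the source amplitudes follow the 4/π series described above (4/π ≈ 1.27324 volts for the fundamental), while the 10 kΩ load resistor is an arbitrary choice to complete the circuit:

* building a 50 Hz square wave from odd harmonics
v1 1 0 sin(0 1.27324 50 0 0)
v3 2 1 sin(0 424.413m 150 0 0)
v5 3 2 sin(0 254.648m 250 0 0)
v7 4 3 sin(0 181.891m 350 0 0)
v9 5 4 sin(0 141.471m 450 0 0)
r1 5 0 10k
.tran 1m 60m
* v(1): fundamental alone; v(2): 1st+3rd; ... v(5): all five
.plot tran v(1,0) v(2,0) v(3,0) v(4,0) v(5,0)
.end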
The end result of adding the first five odd harmonic waveforms together (all at the proper amplitudes, of course) is a close approximation of a square wave. The point in doing this is to illustrate how we can build a square wave up from multiple sine waves at different frequencies, to prove that a pure square wave is actually equivalent to a series of sine waves. When a square wave AC voltage is applied to a circuit with reactive components (capacitors and inductors), those components react as if they were being exposed to several sine wave voltages of different frequencies, which in fact they are. The fact that repeating, non-sinusoidal waves are equivalent to a definite series of additive DC voltage, sine waves, and/or cosine waves is a consequence of how waves work: a fundamental property of all wave-related phenomena, electrical or otherwise. The mathematical process of reducing a non-sinusoidal wave into these constituent frequencies is called Fourier analysis, the details of which are well beyond the scope of this text. However, computer algorithms have been created to perform this analysis at high speeds on real waveforms, and its application in AC power quality and signal analysis is widespread. SPICE has the ability to sample a waveform and reduce it into its constituent sine wave harmonics by way of a Fourier Transform algorithm, outputting the frequency analysis as a table of numbers. Let’s try this on a square wave, which we already know is composed of odd-harmonic sine waves: The pulse option in the netlist line describing voltage source v1 instructs SPICE to simulate a square-shaped “pulse” waveform, in this case one that is symmetrical (equal time for each half-cycle) and has a peak amplitude of 1 volt. First we’ll plot the square wave to be analyzed: (Figure below) Squarewave for SPICE Fourier analysis Next, we’ll print the Fourier analysis generated by SPICE for this square wave: Plot of Fourier analysis results. Here, (Figure above) SPICE has broken the waveform down into a spectrum of sinusoidal frequencies up to the ninth harmonic, plus a small DC voltage labelled DC component. I had to inform SPICE of the fundamental frequency (for a square wave with a 20 millisecond period, this frequency is 50 Hz), so it knew how to classify the harmonics. Note how small the figures are for all the even harmonics (2nd, 4th, 6th, 8th), and how the amplitudes of the odd harmonics diminish (1st is largest, 9th is smallest). This same technique of “Fourier Transformation” is often used in computerized power instrumentation, sampling the AC waveform(s) and determining the harmonic content thereof. A common computer algorithm (sequence of program steps to perform a task) for this is the Fast Fourier Transform or FFT function. You need not be concerned with exactly how these computer routines work, but be aware of their existence and application. This same mathematical technique used in SPICE to analyze the harmonic content of waves can be applied to the technical analysis of music: breaking up any particular sound into its constituent sine-wave frequencies. In fact, you may have already seen a device designed to do just that without realizing what it was! A graphic equalizer is a piece of high-fidelity stereo equipment that controls (and sometimes displays) the nature of music’s harmonic content. 
Equipped with several knobs or slide levers, the equalizer is able to selectively attenuate (reduce) the amplitude of certain frequencies present in music, to “customize” the sound for the listener’s benefit. Typically, there will be a “bar graph” display next to each control lever, displaying the amplitude of each particular frequency. (Figure below) Hi-Fi audio graphic equalizer. A device built strictly to display (not control) the amplitudes of each frequency range for a mixed-frequency signal is typically called a spectrum analyzer. The design of spectrum analyzers may be as simple as a set of “filter” circuits (see the next chapter for details) designed to separate the different frequencies from each other, or as complex as a special-purpose digital computer running an FFT algorithm to mathematically split the signal into its harmonic components. Spectrum analyzers are often designed to analyze extremely high-frequency signals, such as those produced by radio transmitters and computer network hardware. In that form, they often have an appearance like that of an oscilloscope: (Figure below) Spectrum analyzer shows amplitude as a function of frequency. Like an oscilloscope, the spectrum analyzer uses a CRT (or a computer display mimicking a CRT) to display a plot of the signal. Unlike an oscilloscope, this plot is amplitude over frequency rather than amplitude over time. In essence, a frequency analyzer gives the operator a Bode plot of the signal: something an engineer might call a frequency-domain rather than a time-domain analysis. The term “domain” is mathematical: a sophisticated word to describe the horizontal axis of a graph. Thus, an oscilloscope’s plot of amplitude (vertical) over time (horizontal) is a “time-domain” analysis, whereas a spectrum analyzer’s plot of amplitude (vertical) over frequency (horizontal) is a “frequency-domain” analysis. When we use SPICE to plot signal amplitude (either voltage or current amplitude) over a range of frequencies, we are performing frequency-domain analysis. Please take note of how the Fourier analysis from the last SPICE simulation isn’t “perfect.” Ideally, the amplitudes of all the even harmonics should be absolutely zero, and so should the DC component. Again, this is not so much a quirk of SPICE as it is a property of waveforms in general. A waveform of infinite duration (infinite number of cycles) can be analyzed with absolute precision, but the fewer cycles available to the computer for analysis, the less precise the analysis. It is only when we have an equation describing a waveform in its entirety that Fourier analysis can reduce it to a definite series of sinusoidal waveforms. The fewer times that a wave cycles, the less certain its frequency is. Taking this concept to its logical extreme, a short pulse (a waveform that doesn’t even complete a cycle) actually has no frequency, but rather acts as an infinite range of frequencies. This principle is common to all wave-based phenomena, not just AC voltages and currents. Suffice it to say that the number of cycles and the certainty of a waveform’s frequency component(s) are directly related. We could improve the precision of our analysis here by letting the wave oscillate on and on for many cycles, and the result would be a spectrum analysis more consistent with the ideal. In the following analysis, I’ve omitted the waveform plot for brevity’s sake (it’s just a really long square wave): Improved Fourier analysis.
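A sketch of what such an analysis might look like in netlist form; the ±1 volt pulse source and 10 kΩ load match the square wave described earlier, and the transient run is simply extended to many cycles before the .four directive samples it:

* square wave run for many cycles for better Fourier precision
v1 1 0 pulse (1 -1 0 .1m .1m 10m 20m)
r1 1 0 10k
* one full second = 50 cycles of the 50 Hz square wave
.options itl5=0
.tran 1m 1000m
.four 50 v(1,0)
.end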
Notice how this analysis (Figure above) shows less of a DC component voltage and lower amplitudes for each of the even harmonic frequency sine waves, all because we let the computer sample more cycles of the wave. Again, the imprecision of the first analysis is not so much a flaw in SPICE as it is a fundamental property of waves and of signal analysis. Review • Square waves are equivalent to a sine wave at the same (fundamental) frequency added to an infinite series of odd-multiple sine-wave harmonics at decreasing amplitudes. • Computer algorithms exist which are able to sample waveshapes and determine their constituent sinusoidal components. The Fourier Transform algorithm (particularly the Fast Fourier Transform, or FFT) is commonly used in computer circuit simulation programs such as SPICE and in electronic metering equipment for determining power quality.
8.03: Other Waveshapes
As strange as it may seem, any repeating, non-sinusoidal waveform is actually equivalent to a series of sinusoidal waveforms of different amplitudes and frequencies added together. Square waves are a very common and well-understood case, but not the only one. Electronic power control devices such as transistors and silicon-controlled rectifiers (SCRs) often produce voltage and current waveforms that are essentially chopped-up versions of the otherwise “clean” (pure) sine-wave AC from the power supply. These devices have the ability to suddenly change their resistance with the application of a control signal voltage or current, thus “turning on” or “turning off” almost instantaneously, producing current waveforms bearing little resemblance to the source voltage waveform powering the circuit. These current waveforms then produce changes in the voltage waveform to other circuit components, due to voltage drops created by the non-sinusoidal current through circuit impedances. Circuit components that distort the normal sine-wave shape of AC voltage or current are called nonlinear. Nonlinear components such as SCRs find popular use in power electronics due to their ability to regulate large amounts of electrical power without dissipating much heat. While this is an advantage from the perspective of energy efficiency, the waveshape distortions they introduce can cause problems. These non-sinusoidal waveforms, regardless of their actual shape, are equivalent to a series of sinusoidal waveforms of higher (harmonic) frequencies. If not taken into consideration by the circuit designer, these harmonic waveforms created by electronic switching components may cause erratic circuit behavior. It is becoming increasingly common in the electric power industry to observe overheating of transformers and motors due to distortions in the sine-wave shape of the AC power line voltage stemming from “switching” loads such as computers and high-efficiency lights. This is no theoretical exercise: it is very real and potentially very troublesome. In this section, I will investigate a few of the more common waveshapes and show their harmonic components by way of Fourier analysis using SPICE. One very common way harmonics are generated in an AC power system is when AC is converted, or “rectified”, into DC. This is generally done with components called diodes, which only allow the passage of current in one direction. The simplest type of AC/DC rectification is half-wave, where a single diode blocks half of the AC current (over time) from passing through the load. (Figure below) Oddly enough, the conventional diode schematic symbol is drawn such that electrons flow against the direction of the symbol’s arrowhead: Half-wave rectifier. Half-wave rectifier waveforms. V(1)+0.4 shifts the sinewave input V(1) up for clarity. This is not part of the simulation. First, we’ll see how SPICE analyzes the source waveform, a pure sine wave voltage: (Figure below) Fourier analysis of the sine wave input. Notice the extremely small harmonic and DC components of this sinusoidal waveform in the table above; they are too small to show on the harmonic plot. Ideally, there would be nothing but the fundamental frequency showing (being a perfect sine wave), but our Fourier analysis figures aren’t perfect because SPICE doesn’t have the luxury of sampling a waveform of infinite duration.
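A sketch of a half-wave rectifier netlist for reproducing this kind of analysis; the 15 volt source amplitude, the default diode model, and the 10 kΩ load are assumed values, not necessarily those behind the figures:

* half-wave rectifier
vsource 1 0 sin(0 15 60 0 0)
d1 1 2 mod1
rload 2 0 10k
.model mod1 d
.tran .5m 17m
* compare harmonic content of source v(1,0) and rectified output v(2,0)
.four 60 v(1,0) v(2,0)
.end

The same skeleton extends to the full-wave bridge analyzed next by adding three more diodes.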
Next, we’ll compare this with the Fourier analysis of the half-wave “rectified” voltage across the load resistor: (Figure below) Fourier analysis of half-wave output. Notice the relatively large even-multiple harmonics in this analysis. By cutting out half of our AC wave, we’ve introduced the equivalent of several higher-frequency sinusoidal (actually, cosine) waveforms into our circuit from the original, pure sine-wave. Also take note of the large DC component: 4.456 volts. Because our AC voltage waveform has been “rectified” (only allowed to push in one direction across the load rather than back-and-forth), it behaves a lot more like DC. Another method of AC/DC conversion is called full-wave (Figure below), which as you may have guessed utilizes the full cycle of AC power from the source, reversing the polarity of half the AC cycle to get electrons to flow through the load the same direction all the time. I won’t bore you with details of exactly how this is done, but we can examine the waveform (Figure below) and its harmonic analysis through SPICE: (Figure below) Full-wave rectifier circuit. Waveforms for full-wave rectifier. Fourier analysis of full-wave rectifier output. What a difference! According to SPICE’s Fourier transform, we have a 2nd harmonic component to this waveform that’s over 85 times the amplitude of the original AC source frequency! The DC component of this wave shows up as being 8.273 volts (almost twice what it was for the half-wave rectifier circuit) while the second harmonic is almost 6 volts in amplitude. Notice all the other harmonics further on down the table. The odd harmonics are actually stronger at some of the higher frequencies than they are at the lower frequencies, which is interesting. As you can see, what may begin as a neat, simple AC sine-wave may end up as a complex mess of harmonics after passing through just a few electronic components. While the complex mathematics behind all this Fourier transformation is not necessary for the beginning student of electric circuits to understand, it is of the utmost importance to realize the principles at work and to grasp the practical effects that harmonic signals may have on circuits. The practical effects of harmonic frequencies in circuits will be explored in the last section of this chapter, but before we do that we’ll take a closer look at waveforms and their respective harmonics. Review • Any waveform at all, so long as it is repetitive, can be reduced to a series of sinusoidal waveforms added together. Different waveshapes consist of different blends of sine-wave harmonics. • Rectification of AC to DC is a very common source of harmonics within industrial power systems.
8.04: More on Spectrum Analysis
Computerized Fourier analysis, particularly in the form of the FFT algorithm, is a powerful tool for furthering our understanding of waveforms and their related spectral components. This same mathematical routine programmed into the SPICE simulator as the .fourier option is also programmed into a variety of electronic test instruments to perform real-time Fourier analysis on measured signals. This section is devoted to the use of such tools and the analysis of several different waveforms. First we have a simple sine wave at a frequency of 523.25 Hz. This particular frequency value is a “C” pitch on a piano keyboard, one octave above “middle C”. Actually, the signal measured for this demonstration was created by an electronic keyboard set to produce the tone of a panflute, the closest instrument “voice” I could find resembling a perfect sine wave. The plot below was taken from an oscilloscope display, showing signal amplitude (voltage) over time: (Figure below) Oscilloscope display: voltage vs time. Viewed with an oscilloscope, a sine wave looks like a wavy curve traced horizontally on the screen. The horizontal axis of this oscilloscope display is marked with the word “Time” and an arrow pointing in the direction of time’s progression. The curve itself, of course, represents the cyclic increase and decrease of voltage over time. Close observation reveals imperfections in the sine-wave shape. This, unfortunately, is a result of the specific equipment used to analyze the waveform. Characteristics like these due to quirks of the test equipment are technically known as artifacts: phenomena existing solely because of a peculiarity in the equipment used to perform the experiment. If we view this same AC voltage on a spectrum analyzer, the result is quite different: (Figure below) Spectrum analyzer display: voltage vs frequency. As you can see, the horizontal axis of the display is marked with the word “Frequency,” denoting the domain of this measurement. The single peak on the curve represents the predominance of a single frequency within the range of frequencies covered by the width of the display. If the scale of this analyzer instrument were marked with numbers, you would see that this peak occurs at 523.25 Hz. The height of the peak represents the signal amplitude (voltage). If we mix three different sine-wave tones together on the electronic keyboard (C-E-G, a C-major chord) and measure the result, both the oscilloscope display and the spectrum analyzer display reflect this increased complexity: (Figure below) Oscilloscope display: three tones. The oscilloscope display (time-domain) shows a waveform with many more peaks and valleys than before, a direct result of the mixing of these three frequencies. As you will notice, some of these peaks are higher than the peaks of the original single-pitch waveform, while others are lower. This is a result of the three different waveforms alternately reinforcing and canceling each other as their respective phase shifts change in time. Spectrum analyzer display: three tones. The spectrum display (frequency-domain) is much easier to interpret: each pitch is represented by its own peak on the curve. (Figure above) The difference in height between these three peaks is another artifact of the test equipment: a consequence of limitations within the equipment used to generate and analyze these waveforms, and not a necessary characteristic of the musical chord itself.
As was stated before, the device used to generate these waveforms is an electronic keyboard: a musical instrument designed to mimic the tones of many different instruments. The panflute “voice” was chosen for the first demonstrations because it most closely resembled a pure sine wave (a single frequency on the spectrum analyzer display). Other musical instrument “voices” are not as simple as this one, though. In fact, the unique tone produced by any instrument is a function of its waveshape (or spectrum of frequencies). For example, let’s view the signal for a trumpet tone: (Figure below) Oscilloscope display: waveshape of a trumpet tone. The fundamental frequency of this tone is the same as in the first panflute example: 523.25 Hz, one octave above “middle C.” The waveform itself is far from a pure and simple sine-wave form. Knowing that any repeating, non-sinusoidal waveform is equivalent to a series of sinusoidal waveforms at different amplitudes and frequencies, we should expect to see multiple peaks on the spectrum analyzer display: (Figure below) Spectrum of a trumpet tone. Indeed we do! The fundamental frequency component of 523.25 Hz is represented by the left-most peak, with each successive harmonic represented as its own peak along the width of the analyzer screen. The second harmonic is twice the frequency of the fundamental (1046.5 Hz), the third harmonic three times the fundamental (1569.75 Hz), and so on. This display only shows the first six harmonics, but there are many more comprising this complex tone. Trying a different instrument voice (the accordion) on the keyboard, we obtain a similarly complex oscilloscope (time-domain) plot (Figure below) and spectrum analyzer (frequency-domain) display: (Figure below) Oscilloscope display: waveshape of accordion tone. Spectrum of accordion tone. Note the differences in relative harmonic amplitudes (peak heights) on the spectrum displays for trumpet and accordion. Both instrument tones contain harmonics all the way from 1st (fundamental) to 6th (and beyond!), but the proportions aren’t the same. Each instrument has a unique harmonic “signature” to its tone. Bear in mind that all this complexity is in reference to a single note played with these two instrument “voices.” Multiple notes played on an accordion, for example, would create a much more complex mixture of frequencies than what is seen here. The analytical power of the oscilloscope and spectrum analyzer permit us to derive general rules about waveforms and their harmonic spectra from real waveform examples. We already know that any deviation from a pure sine-wave results in the equivalent of a mixture of multiple sine-wave waveforms at different amplitudes and frequencies. However, close observation allows us to be more specific than this. Note, for example, the time- (Figure below) and frequency-domain (Figure below) plots for a waveform approximating a square wave: Oscilloscope time-domain display of a square wave Spectrum (frequency-domain) of a square wave. According to the spectrum analysis, this waveform contains no even harmonics, only odd. Although this display doesn’t show frequencies past the sixth harmonic, the pattern of odd-only harmonics in descending amplitude continues indefinitely. This should come as no surprise, as we’ve already seen with SPICE that a square wave is comprised of an infinitude of odd harmonics. The trumpet and accordion tones, however, contained both even and odd harmonics. This difference in harmonic content is noteworthy. 
Let’s continue our investigation with an analysis of a triangle wave: (Figure below) Oscilloscope time-domain display of a triangle wave. Spectrum of a triangle wave. In this waveform there are practically no even harmonics: (Figure above) the only significant frequency peaks on the spectrum analyzer display belong to odd-numbered multiples of the fundamental frequency. Tiny peaks can be seen for the second, fourth, and sixth harmonics, but this is due to imperfections in this particular triangle waveshape (once again, artifacts of the test equipment used in this analysis). A perfect triangle waveshape produces no even harmonics, just like a perfect square wave. It should be obvious from inspection that the harmonic spectrum of the triangle wave is not identical to the spectrum of the square wave: the respective harmonic peaks are of different heights. However, the two different waveforms are common in their lack of even harmonics. Let’s examine another waveform, this one very similar to the triangle wave, except that its rise-time is not the same as its fall-time. Known as a sawtooth wave, its oscilloscope plot reveals it to be aptly named: (Figure below) Time-domain display of a sawtooth wave. When the spectrum analysis of this waveform is plotted, we see a result that is quite different from that of the regular triangle wave, for this analysis shows the strong presence of even-numbered harmonics (second and fourth): (Figure below) Frequency-domain display of a sawtooth wave. The distinction between a waveform having even harmonics versus no even harmonics resides in the difference between a triangle waveshape and a sawtooth waveshape. That difference is symmetry above and below the horizontal centerline of the wave. A waveform that is symmetrical above and below its centerline (the shapes on both sides mirror each other precisely) will contain no even-numbered harmonics. (Figure below) Waveforms symmetric about their x-axis center line contain only odd harmonics. Square waves, triangle waves, and pure sine waves all exhibit this symmetry, and all are devoid of even harmonics. Waveforms like the trumpet tone, the accordion tone, and the sawtooth wave are unsymmetrical around their centerlines and therefore do contain even harmonics. (Figure below) Asymmetric waveforms contain even harmonics. This principle of centerline symmetry should not be confused with symmetry around the zero line. In the examples shown, the horizontal centerline of the waveform happens to be zero volts on the time-domain graph, but this has nothing to do with harmonic content. This rule of harmonic content (even harmonics only with unsymmetrical waveforms) applies whether or not the waveform is shifted above or below zero volts with a “DC component.” For further clarification, I will show the same sets of waveforms, shifted with DC voltage, and note that their harmonic contents are unchanged. (Figure below) These waveforms are composed exclusively of odd harmonics. Again, the amount of DC voltage present in a waveform has nothing to do with that waveform’s harmonic frequency content. (Figure below) These waveforms contain even harmonics. Why is this harmonic rule-of-thumb an important rule to know? It can help us comprehend the relationship between harmonics in AC circuits and specific circuit components. Since most sources of sine-wave distortion in AC power circuits tend to be symmetrical, even-numbered harmonics are rarely seen in those applications.
This is good to know if you’re a power system designer and are planning ahead for harmonic reduction: you only have to concern yourself with mitigating the odd harmonic frequencies, even harmonics being practically nonexistent. Also, if you happen to measure even harmonics in an AC circuit with a spectrum analyzer or frequency meter, you know that something in that circuit must be unsymmetrically distorting the sine-wave voltage or current, and that clue may be helpful in locating the source of a problem (look for components or conditions more likely to distort one half-cycle of the AC waveform more than the other). Now that we have this rule to guide our interpretation of nonsinusoidal waveforms, it makes more sense that a waveform like that produced by a rectifier circuit should contain such strong even harmonics, there being no symmetry at all above and below center. Review • Waveforms that are symmetrical above and below their horizontal centerlines contain no even-numbered harmonics. • The amount of DC “bias” voltage present (a waveform’s “DC component”) has no impact on that wave’s harmonic frequency content.
8.05: Circuit Effects
The principle of non-sinusoidal, repeating waveforms being equivalent to a series of sine waves at different frequencies is a fundamental property of waves in general and it has great practical import in the study of AC circuits. It means that any time we have a waveform that isn’t perfectly sine-wave-shaped, the circuit in question will react as though it had an array of different frequency voltages imposed on it at once. When an AC circuit is subjected to a source voltage consisting of a mixture of frequencies, the components in that circuit respond to each constituent frequency in a different way. Any reactive component such as a capacitor or an inductor will simultaneously present a unique amount of impedance to each and every frequency present in a circuit. Thankfully, the analysis of such circuits is made relatively easy by applying the Superposition Theorem, regarding the multiple-frequency source as a set of single-frequency voltage sources connected in series, and analyzing the circuit for one source at a time, summing the results at the end to determine the aggregate total: Circuit driven by a combination of frequencies: 60 Hz and 90 Hz. Analyzing the circuit for the 60 Hz source alone: Circuit for solving 60 Hz. Analyzing the circuit for the 90 Hz source alone: Circuit for solving 90 Hz. Superimposing the voltage drops across R and C, we get: Because the two voltages across each component are at different frequencies, we cannot consolidate them into a single voltage figure as we could if we were adding together two voltages of different amplitude and/or phase angle at the same frequency. Complex number notation gives us the ability to represent waveform amplitude (polar magnitude) and phase angle (polar angle), but not frequency. What we can tell from this application of the superposition theorem is that there will be a greater 60 Hz voltage dropped across the capacitor than a 90 Hz voltage. Just the opposite is true for the resistor’s voltage drop. This is worth noting, especially in light of the fact that the two source voltages are equal. It is this kind of unequal circuit response to signals of differing frequency that will be our specific focus in the next chapter. We can also apply the superposition theorem to the analysis of a circuit powered by a non-sinusoidal voltage, such as a square wave. If we know the Fourier series (multiple sine/cosine wave equivalent) of that wave, we can regard it as originating from a series-connected string of multiple sinusoidal voltage sources at the appropriate amplitudes, frequencies, and phase shifts. Needless to say, this can be a laborious task for some waveforms (an accurate square-wave Fourier Series is considered to be expressed out to the ninth harmonic, or five sine waves in all!), but it is possible. I mention this not to scare you, but to inform you of the potential complexity lurking behind seemingly simple waveforms. A real-life circuit will respond just the same to being powered by a square wave as being powered by an infinite series of sine waves of odd-multiple frequencies and diminishing amplitudes. This has been known to translate into unexpected circuit resonances, transformer and inductor core overheating due to eddy currents, electromagnetic noise over broad ranges of the frequency spectrum, and the like. Technicians and engineers need to be made aware of the potential effects of non-sinusoidal waveforms in reactive circuits. Harmonics are known to manifest their effects in the form of electromagnetic radiation as well.
Studies have been performed on the potential hazards of using portable computers aboard passenger aircraft, citing the fact that computers’ high frequency square-wave “clock” voltage signals are capable of generating radio waves that could interfere with the operation of the aircraft’s electronic navigation equipment. It’s bad enough that typical microprocessor clock signal frequencies are within the range of aircraft radio frequency bands, but worse yet is the fact that the harmonic multiples of those fundamental frequencies span an even larger range, due to the fact that clock signal voltages are square-wave in shape and not sine-wave. Electromagnetic “emissions” of this nature can be a problem in industrial applications, too, with harmonics abounding in very large quantities due to (nonlinear) electronic control of motor and electric furnace power. The fundamental power line frequency may only be 60 Hz, but those harmonic frequency multiples theoretically extend into infinitely high frequency ranges. Low frequency power line voltage and current do not radiate into space very well as electromagnetic energy, but high frequencies do. Also, capacitive and inductive “coupling” caused by close-proximity conductors is usually more severe at high frequencies. Signal wiring routed near power wiring will tend to “pick up” harmonic interference from the power wiring to a far greater extent than pure sine-wave interference. This problem can manifest itself in industry when old motor controls are replaced with new, solid-state electronic motor controls providing greater energy efficiency. Suddenly there may be weird electrical noise being impressed upon signal wiring that never used to be there, because the old controls never generated harmonics, and those high-frequency harmonic voltages and currents tend to inductively and capacitively “couple” better to nearby conductors than any 60 Hz signals from the old controls used to. Review • Any regular (repeating), non-sinusoidal waveform is equivalent to a particular series of sine/cosine waves of different frequencies, phases, and amplitudes, plus a DC offset voltage if necessary. The mathematical process for determining the sinusoidal waveform equivalent for any waveform is called Fourier analysis. • Multiple-frequency voltage sources can be simulated for analysis by connecting several single-frequency voltage sources in series. Analysis of voltages and currents is accomplished by using the superposition theorem. NOTE: superimposed voltages and currents of different frequencies cannot be added together in complex number form, since complex numbers only account for amplitude and phase shift, not frequency! • Harmonics can cause problems by impressing unwanted (“noise”) voltage signals upon nearby circuits. These unwanted signals may come by way of capacitive coupling, inductive coupling, electromagnetic radiation, or a combination thereof.
09: Filters
It is sometimes desirable to have circuits capable of selectively filtering one frequency or range of frequencies out of a mix of different frequencies in a circuit. A circuit designed to perform this frequency selection is called a filter circuit, or simply a filter. A common need for filter circuits is in high-performance stereo systems, where certain ranges of audio frequencies need to be amplified or suppressed for best sound quality and power efficiency. You may be familiar with equalizers, which allow the amplitudes of several frequency ranges to be adjusted to suit the listener’s taste and acoustic properties of the listening area. You may also be familiar with crossover networks, which block certain ranges of frequencies from reaching speakers. A tweeter (high-frequency speaker) is inefficient at reproducing low-frequency signals such as drum beats, so a crossover circuit is connected between the tweeter and the stereo’s output terminals to block low-frequency signals, only passing high-frequency signals to the speaker’s connection terminals. This gives better audio system efficiency and thus better performance. Both equalizers and crossover networks are examples of filters, designed to accomplish filtering of certain frequencies. Another practical application of filter circuits is in the “conditioning” of non-sinusoidal voltage waveforms in power circuits. Some electronic devices are sensitive to the presence of harmonics in the power supply voltage, and so require power conditioning for proper operation. If a distorted sine-wave voltage behaves like a series of harmonic waveforms added to the fundamental frequency, then it should be possible to construct a filter circuit that only allows the fundamental waveform frequency to pass through, blocking all (higher-frequency) harmonics. We will be studying the design of several elementary filter circuits in this lesson. To reduce the load of math on the reader, I will make extensive use of SPICE as an analysis tool, displaying Bode plots (amplitude versus frequency) for the various kinds of filters. Bear in mind, though, that these circuits can be analyzed over several points of frequency by repeated series-parallel analysis, much like the previous example with two sources (60 and 90 Hz), if the student is willing to invest a lot of time working and re-working circuit calculations for each frequency. Review • A filter is an AC circuit that separates some frequencies from others within mixed-frequency signals. • Audio equalizers and crossover networks are two well-known applications of filter circuits. • A Bode plot is a graph plotting waveform amplitude or phase on one axis and frequency on the other.
9.02: Low-pass Filters
By definition, a low-pass filter is a circuit offering easy passage to low-frequency signals and difficult passage to high-frequency signals. There are two basic kinds of circuits capable of accomplishing this objective, and many variations of each one: the inductive low-pass filter and the capacitive low-pass filter, both shown in the Figures below. Inductive low-pass filter. The inductor’s impedance increases with increasing frequency. This high impedance in series tends to block high-frequency signals from getting to the load. This can be demonstrated with a SPICE analysis: (Figure below) The response of an inductive low-pass filter falls off with increasing frequency.
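A sketch of such an analysis, assuming a 3 henry inductor and a 1 kΩ load (the component values are illustrative, not necessarily those behind the figure):

* inductive low-pass filter
v1 1 0 ac 1
l1 1 2 3
rload 2 0 1k
* sweep from 1 Hz to 200 Hz and watch the load voltage fall off
.ac lin 20 1 200
.plot ac v(2)
.end

Capacitive low-pass filter. The capacitor’s impedance decreases with increasing frequency.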
This low impedance in parallel with the load resistance tends to short out high-frequency signals, dropping most of the voltage across series resistor R1. (Figure below) The response of a capacitive low-pass filter falls off with increasing frequency. The inductive low-pass filter is the pinnacle of simplicity, with only one component comprising the filter. The capacitive version of this filter is not that much more complex, with only a resistor and capacitor needed for operation. However, despite their increased complexity, capacitive filter designs are generally preferred over inductive because capacitors tend to be “purer” reactive components than inductors and therefore are more predictable in their behavior. By “pure” I mean that capacitors exhibit far less resistive effect than inductors, making them almost 100% reactive. Inductors, on the other hand, typically exhibit significant dissipative (resistor-like) effects, both in the long lengths of wire used to make them, and in the magnetic losses of the core material. Capacitors also tend to participate less in “coupling” effects with other components (generate and/or receive interference from other components via mutual electric or magnetic fields) than inductors, and are less expensive. However, the inductive low-pass filter is often preferred in AC-DC power supplies to filter out the AC “ripple” waveform created when AC is converted (rectified) into DC, passing only the pure DC component. The primary reason for this is the requirement of low filter resistance for the output of such a power supply. A capacitive low-pass filter requires an extra resistance in series with the source, whereas the inductive low-pass filter does not. In the design of a high-current circuit like a DC power supply where additional series resistance is undesirable, the inductive low-pass filter is the better design choice. On the other hand, if low weight and compact size are higher priorities than low internal supply resistance in a power supply design, the capacitive low-pass filter might make more sense. All low-pass filters are rated at a certain cutoff frequency. That is, the frequency above which the output voltage falls below 70.7% of the input voltage. This cutoff percentage of 70.7 is not really arbitrary, although it may seem so at first glance. In a simple capacitive/resistive low-pass filter, it is the frequency at which capacitive reactance in ohms equals resistance in ohms. In a simple capacitive low-pass filter (one resistor, one capacitor), the cutoff frequency is given as: $f_{cutoff} = \frac{1}{2\pi RC}$ Inserting the values of R and C from the last SPICE simulation into this formula, we arrive at a cutoff frequency of 45.473 Hz. However, when we look at the plot generated by the SPICE simulation, we see the load voltage well below 70.7% of the source voltage (1 volt) even at a frequency as low as 30 Hz, below the calculated cutoff point. What’s wrong? The problem here is that the load resistance of 1 kΩ affects the frequency response of the filter, skewing it down from what the formula told us it would be. Without that load resistance in place, SPICE produces a Bode plot whose numbers make more sense: (Figure below) For the capacitive low-pass filter with R = 500 Ω and C = 7 µF, the output should be 70.7% at 45.473 Hz. When dealing with filter circuits, it is always important to note that the response of the filter depends on the filter’s component values and the impedance of the load.
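To make the load effect concrete, here is a minimal Python sketch of the voltage-divider arithmetic behind those SPICE results, using the R = 500 Ω, C = 7 µF, and 1 kΩ load values quoted above (the helper function name is my own):

```python
import math

def rc_lowpass_gain(f, R, C, Rload=None):
    """|Vout/Vin| of a series-R, shunt-C low-pass filter, optionally loaded."""
    Zc = 1 / complex(0, 2 * math.pi * f * C)      # capacitor impedance
    # With a load resistor, the capacitor and load appear in parallel
    Zshunt = Zc if Rload is None else (Zc * Rload) / (Zc + Rload)
    return abs(Zshunt / (R + Zshunt))             # voltage-divider ratio

R, C = 500.0, 7e-6
fc = 1 / (2 * math.pi * R * C)
print(f"Unloaded cutoff: {fc:.3f} Hz")                               # ~45.473 Hz
print(f"Gain at cutoff, no load: {rc_lowpass_gain(fc, R, C):.3f}")   # ~0.707
print(f"Gain at 30 Hz, 1 kOhm load: {rc_lowpass_gain(30, R, C, 1e3):.3f}")  # ~0.61
```

Even at 30 Hz the loaded gain sits near 0.61, below the 70.7% mark, which matches the skewed SPICE plot described above.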
If a cutoff frequency equation fails to give consideration to load impedance, it assumes no load and will fail to give accurate results for a real-life filter conducting power to a load. One frequent application of the capacitive low-pass filter principle is in the design of circuits having components or sections sensitive to electrical “noise.” As mentioned at the beginning of the last chapter, sometimes AC signals can “couple” from one circuit to another via capacitance (Cstray) and/or mutual inductance (Mstray) between the two sets of conductors. A prime example of this is unwanted AC signals (“noise”) becoming impressed on DC power lines supplying sensitive circuits: (Figure below) Noise is coupled by stray capacitance and mutual inductance into “clean” DC power. The oscilloscope-meter on the left shows the “clean” power from the DC voltage source. After coupling with the AC noise source via stray mutual inductance and stray capacitance, though, the voltage as measured at the load terminals is now a mix of AC and DC, the AC being unwanted. Normally, one would expect Eload to be precisely identical to Esource, because the uninterrupted conductors connecting them should make the two sets of points electrically common. However, power conductor impedance allows the two voltages to differ, which means the noise magnitude can vary at different points in the DC system. If we wish to prevent such “noise” from reaching the DC load, all we need to do is connect a low-pass filter near the load to block any coupled signals. In its simplest form, this is nothing more than a capacitor connected directly across the power terminals of the load, the capacitor behaving as a very low impedance to any AC noise, and shorting it out. Such a capacitor is called a decoupling capacitor: (Figure below) Decoupling capacitor, applied to load, filters noise from DC power supply. A cursory glance at a crowded printed-circuit board (PCB) will typically reveal decoupling capacitors scattered throughout, usually located as close as possible to the sensitive DC loads. Capacitor size is usually 0.1 µF or more, a minimum amount of capacitance needed to produce a low enough impedance to short out any noise. Greater capacitance will do a better job at filtering noise, but size and economics limit decoupling capacitors to meager values. Review • A low-pass filter allows for easy passage of low-frequency signals from source to load, and difficult passage of high-frequency signals. • Inductive low-pass filters insert an inductor in series with the load; capacitive low-pass filters insert a resistor in series and a capacitor in parallel with the load. The former filter design tries to “block” the unwanted frequency signal while the latter tries to short it out. • The cutoff frequency for a low-pass filter is that frequency at which the output (load) voltage equals 70.7% of the input (source) voltage. Above the cutoff frequency, the output voltage is lower than 70.7% of the input, and vice versa.
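As a companion to the capacitive example above, the inductive version's cutoff is the frequency where the inductor's reactance equals the load resistance, fc = R/(2πL). This relationship is not spelled out in the text; the sketch below states it under assumed component values chosen only for illustration:

```python
import math

# Cutoff of an inductive low-pass filter (series L, load R):
# fc is where XL = 2*pi*f*L equals the load resistance.
L = 3.0        # henrys (assumed)
R_load = 1e3   # ohms (assumed)

fc = R_load / (2 * math.pi * L)
print(f"Inductive low-pass cutoff: {fc:.1f} Hz")   # ~53.1 Hz
```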
A high-pass filter’s task is just the opposite of a low-pass filter: to offer easy passage of a high-frequency signal and difficult passage to a low-frequency signal. As one might expect, the inductive (Figure below) and capacitive (Figure below) versions of the high-pass filter are just the opposite of their respective low-pass filter designs: Capacitive high-pass filter. The Capacitor’s Impedance The capacitor’s impedance (Figure above) increases with decreasing frequency. (Figure below) This high impedance in series tends to block low-frequency signals from getting to the load. The response of the capacitive high-pass filter increases with frequency. Inductive high-pass filter. The Inductor’s Impedance The inductor’s impedance (Figure above) decreases with decreasing frequency. (Figure below) This low impedance in parallel tends to short out low-frequency signals, keeping them from reaching the load resistor. As a consequence, most of the voltage gets dropped across series resistor R1. The response of the inductive high-pass filter increases with frequency. This time, the capacitive design is the simplest, requiring only one component above and beyond the load. And, again, the reactive purity of capacitors over inductors tends to favor their use in filter design, especially with high-pass filters where high frequencies commonly cause inductors to behave strangely due to the skin effect and electromagnetic core losses. As with low-pass filters, high-pass filters have a rated cutoff frequency, above which the output voltage increases above 70.7% of the input voltage. Just as in the case of the capacitive low-pass filter circuit, the capacitive high-pass filter’s cutoff frequency can be found with the same formula: $f_{cutoff} = \frac{1}{2\pi RC}$ In the example circuit, there is no resistance other than the load resistor, so that is the value for R in the formula. Using a stereo system as a practical example, a capacitor connected in series with the tweeter (treble) speaker will serve as a high-pass filter, imposing a high impedance to low-frequency bass signals, thereby preventing that power from being wasted on a speaker inefficient for reproducing such sounds. In like fashion, an inductor connected in series with the woofer (bass) speaker will serve as a low-pass filter for the low frequencies that particular speaker is designed to reproduce. In this simple example circuit, the midrange speaker is subjected to the full spectrum of frequencies from the stereo’s output. More elaborate filter networks are sometimes used, but this should give you the general idea. Also bear in mind that I’m only showing you one channel (either left or right) on this stereo system. A real stereo would have six speakers: 2 woofers, 2 midranges, and 2 tweeters. High-pass filter routes high frequencies to tweeter, while low-pass filter routes lows to woofer. For better performance yet, we might like to have some kind of filter circuit capable of passing frequencies that are between low (bass) and high (treble) to the midrange speaker so that none of the low- or high-frequency signal power is wasted on a speaker incapable of efficiently reproducing those sounds. What we would be looking for is called a band-pass filter, which is the topic of the next section. Review • A high-pass filter allows for easy passage of high-frequency signals from source to load, and difficult passage of low-frequency signals.
• Capacitive high-pass filters insert a capacitor in series with the load; inductive high-pass filters insert a resistor in series and an inductor in parallel with the load. The former filter design tries to “block” the unwanted frequency signal while the latter tries to short it out. • The cutoff frequency for a high-pass filter is that frequency at which the output (load) voltage equals 70.7% of the input (source) voltage. Above the cutoff frequency, the output voltage is greater than 70.7% of the input, and vice versa.
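To put numbers on the tweeter example above, here is a short sketch that sizes the series crossover capacitor. The 8 Ω tweeter impedance and 5 kHz crossover point are assumptions for illustration, not values from the text:

```python
import math

# Series capacitor that makes a tweeter into a capacitive high-pass
# filter: solve fc = 1 / (2*pi*R*C) for C. Both values are assumed.
R_tweeter = 8.0     # ohms, nominal tweeter impedance (assumed)
f_cutoff = 5000.0   # Hz, desired crossover frequency (assumed)

C = 1 / (2 * math.pi * f_cutoff * R_tweeter)
print(f"Series capacitor: {C * 1e6:.2f} uF")   # ~3.98 uF
```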
How to Create a Band-pass Filter? There are applications where a particular band, or spread, of frequencies needs to be filtered from a wider range of mixed signals. Filter circuits can be designed to accomplish this task by combining the properties of low-pass and high-pass into a single filter. The result is called a band-pass filter. Creating a bandpass filter from a low-pass and high-pass filter can be illustrated using block diagrams: (Figure below) System level block diagram of a band-pass filter. What emerges from the series combination of these two filter circuits is a circuit that will only allow passage of those frequencies that are neither too high nor too low. Using real components, here is what a typical schematic might look like (Figure below). The response of the band-pass filter is shown in (Figure below) Capacitive band-pass filter. The response of a capacitive bandpass filter peaks within a narrow frequency range. Design a Band-pass Filter Using Inductors Band-pass filters can also be constructed using inductors, but as mentioned before, the reactive “purity” of capacitors gives them a design advantage. If we were to design a bandpass filter using inductors, it might look something like Figure below. Inductive band-pass filter. The fact that the high-pass section comes “first” in this design instead of the low-pass section makes no difference in its overall operation. It will still filter out all frequencies too high or too low. While the general idea of combining low-pass and high-pass filters together to make a bandpass filter is sound, it is not without certain limitations. Because this type of band-pass filter works by relying on either section to block unwanted frequencies, it can be difficult to design such a filter to allow unhindered passage within the desired frequency range. Both the low-pass and high-pass sections will always be blocking signals to some extent, and their combined effort makes for an attenuated (reduced amplitude) signal at best, even at the peak of the “pass-band” frequency range. Notice the curve peak on the previous SPICE analysis: the load voltage of this filter never rises above 0.59 volts, although the source voltage is a full volt. This signal attenuation becomes more pronounced if the filter is designed to be more selective (steeper curve, narrower band of passable frequencies). There are other methods to achieve band-pass operation without sacrificing signal strength within the pass-band. We will discuss those methods a little later in this chapter. Review • A band-pass filter works to screen out frequencies that are too low or too high, giving easy passage only to frequencies within a certain range. • Band-pass filters can be made by stacking a low-pass filter on the end of a high-pass filter, or vice versa. • “Attenuate” means to reduce or diminish in amplitude. When you turn down the volume control on your stereo, you are “attenuating” the signal being sent to the speakers. 9.05: Band-stop Filters Also called band-elimination, band-reject, or notch filters, this kind of filter passes all frequencies above and below a particular range set by the component values. Not surprisingly, it can be made out of a low-pass and a high-pass filter, just like the band-pass design, except that this time we connect the two filter sections in parallel with each other instead of in series. (Figure below) System level block diagram of a band-stop filter. Constructed using two capacitive filter sections, it looks something like (Figure below).
“Twin-T” band-stop filter. The low-pass filter section is comprised of R1, R2, and C1 in a “T” configuration. The high-pass filter section is comprised of C2, C3, and R3 in a “T” configuration as well. Together, this arrangement is commonly known as a “Twin-T” filter, giving sharp response when the component values are chosen in the following ratios: $R_1 = R_2 = 2R_3 \qquad C_2 = C_3 = (0.5)C_1$ Given these component ratios, the frequency of maximum rejection (the “notch frequency”) can be calculated as follows: $f_{notch} = \frac{1}{4\pi R_3 C_3}$ The impressive band-stopping ability of this filter is illustrated by the following SPICE analysis: (Figure below) Response of “twin-T” band-stop filter. Review • A band-stop filter works to screen out frequencies that are within a certain range, giving easy passage only to frequencies outside of that range. Also known as band-elimination, band-reject, or notch filters. • Band-stop filters can be made by placing a low-pass filter in parallel with a high-pass filter. Commonly, both the low-pass and high-pass filter sections are of the “T” configuration, giving the name “Twin-T” to the band-stop combination. • The frequency of maximum attenuation is called the notch frequency.
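A quick Python check of the notch formula, using the ratio rules above. The R3 and C3 values are assumptions for illustration; they are not the text's SPICE values:

```python
import math

# Twin-T notch frequency, given the ratio rules R1 = R2 = 2*R3 and
# C2 = C3 = 0.5*C1. Component values below are assumed for illustration.
R3 = 1e3       # ohms (assumed)
C3 = 0.1e-6    # farads (assumed)

f_notch = 1 / (4 * math.pi * R3 * C3)
print(f"Notch frequency: {f_notch:.1f} Hz")   # ~795.8 Hz
# Equivalent statement in terms of the low-pass section's R and C
# (R = R1 = 2*R3, C = C2 = C3): f_notch = 1 / (2*pi*R*C).
```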
So far, the filter designs we’ve concentrated on have employed either capacitors or inductors, but never both at the same time. We should know by now that combinations of L and C will tend to resonate, and this property can be exploited in designing band-pass and band-stop filter circuits. Series LC circuits give minimum impedance at resonance, while parallel LC (“tank”) circuits give maximum impedance at their resonant frequency. Knowing this, we have two basic strategies for designing either band-pass or band-stop filters. For band-pass filters, the two basic resonant strategies are this: series LC to pass a signal (Figure below), or parallel LC (Figure below) to short a signal. The two schemes will be contrasted and simulated here: Series resonant LC band-pass filter. Series LC components pass signal at resonance, and block signals of any other frequencies from getting to the load. (Figure below) Series resonant band-pass filter: voltage peaks at resonant frequency of 159.15 Hz. A couple of points to note: see how there is virtually no signal attenuation within the “pass band” (the range of frequencies near the load voltage peak), unlike the band-pass filters made from capacitors or inductors alone. Also, since this filter works on the principle of series LC resonance, the resonant frequency of which is unaffected by circuit resistance, the value of the load resistor will not skew the peak frequency. However, different values for the load resistor will change the “steepness” of the Bode plot (the “selectivity” of the filter). The other basic style of resonant band-pass filters employs a tank circuit (parallel LC combination) to short out signals too high or too low in frequency from getting to the load: (Figure below) Parallel resonant band-pass filter. The tank circuit will have a lot of impedance at resonance, allowing the signal to get to the load with minimal attenuation. Below or above the resonant frequency, however, the tank circuit will have a low impedance, shorting out the signal and dropping most of it across series resistor R1. (Figure below) Parallel resonant filter: voltage peaks at the resonant frequency of 159.15 Hz. Just like the low-pass and high-pass filter designs relying on a series resistance and a parallel “shorting” component to attenuate unwanted frequencies, this resonant circuit can never provide full input (source) voltage to the load. That series resistance will always be dropping some amount of voltage so long as there is a load resistance connected to the output of the filter. It should be noted that this form of band-pass filter circuit is very popular in analog radio tuning circuitry, for selecting a particular radio frequency from the multitudes of frequencies available from the antenna. In most analog radio tuner circuits, the rotating dial for station selection moves a variable capacitor in a tank circuit. Variable capacitor tunes radio receiver tank circuit to select one out of many broadcast stations. The variable capacitor and air-core inductor shown in the photograph of a simple radio (Figure above) comprise the main elements in the tank circuit filter used to discriminate one radio station’s signal from another. Just as we can use series and parallel LC resonant circuits to pass only those frequencies within a certain range, we can also use them to block frequencies within a certain range, creating a band-stop filter. Again, we have two major strategies to follow in doing this, to use either series or parallel resonance. In every case, the center of the response is set by the LC resonant frequency, which the short sketch below evaluates.
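A minimal sketch of that calculation. The 1 H / 1 µF pair is an assumption chosen because it reproduces the 159.15 Hz figure quoted above; the text's actual SPICE values are not shown here:

```python
import math

# Resonant frequency of an LC pair: fr = 1 / (2*pi*sqrt(L*C)).
L = 1.0      # henrys (assumed)
C = 1e-6     # farads (assumed)

f_r = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"Resonant frequency: {f_r:.2f} Hz")   # ~159.15 Hz
```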
First, we’ll look at the series variety: (Figure below) Series resonant band-stop filter. When the series LC combination reaches resonance, its very low impedance shorts out the signal, dropping it across resistor R1 and preventing its passage on to the load. (Figure below) Series resonant band-stop filter: Notch frequency = LC resonant frequency (159.15 Hz). Next, we will examine the parallel resonant band-stop filter: (Figure below) Parallel resonant band-stop filter. The parallel LC components present a high impedance at resonant frequency, thereby blocking the signal from the load at that frequency. Conversely, it passes signals to the load at any other frequencies. (Figure below) Parallel resonant band-stop filter: Notch frequency = LC resonant frequency (159.15 Hz). Once again, notice how the absence of a series resistor makes for minimum attenuation for all the desired (passed) signals. The amplitude at the notch frequency, on the other hand, is very low. In other words, this is a very “selective” filter. In all these resonant filter designs, the selectivity depends greatly upon the “purity” of the inductance and capacitance used. If there is any stray resistance (especially likely in the inductor), this will diminish the filter’s ability to finely discriminate frequencies, as well as introduce antiresonant effects that will skew the peak/notch frequency. A word of caution to those designing low-pass and high-pass filters is in order at this point. After assessing the standard RC and LR low-pass and high-pass filter designs, it might occur to a student that a better, more effective design of low-pass or high-pass filter might be realized by combining capacitive and inductive elements together like Figure below. Capacitive-inductive low-pass filter. The inductors should block any high frequencies, while the capacitor should short out any high frequencies as well, both working together to allow only low frequency signals to reach the load. At first, this seems to be a good strategy, and eliminates the need for a series resistance. However, the more insightful student will recognize that any combination of capacitors and inductors together in a circuit is likely to cause resonant effects to happen at a certain frequency. Resonance, as we have seen before, can cause strange things to happen. Let’s plot a SPICE analysis and see what happens over a wide frequency range: (Figure below) Unexpected response of L-C low-pass filter. What was supposed to be a low-pass filter turns out to be a band-pass filter with a peak somewhere around 526 Hz! The capacitance and inductance in this filter circuit are attaining resonance at that point, creating a large voltage drop across C1, which is seen at the load, regardless of L2’s attenuating influence. The output voltage to the load at this point actually exceeds the input (source) voltage! A little more reflection reveals that if L1 and C1 are at resonance, they will impose a very heavy (very low impedance) load on the AC source, which might not be good either. We’ll run the same analysis again, only this time plotting C1’s voltage, vm(2) in Figure below, and the source current, I(v1), along with load voltage, vm(3): Current increases at the unwanted resonance of the L-C low-pass filter. Sure enough, we see the voltage across C1 and the source current spiking to a high point at the same frequency where the load voltage is maximum. If we were expecting this filter to provide a simple low-pass function, we might be disappointed by the results.
The problem is that an L-C filter has an input impedance and an output impedance which must be matched. The voltage source impedance must match the input impedance of the filter, and the filter output impedance must be matched by “rload” for a flat response. The input and output impedance is given by the square root of the L/C ratio: $Z = \sqrt{L/C}$ Taking the component values from (Figure below), we can find the impedance of the filter, and the required Rg and Rload to match it. In Figure below we have added Rg = 316 Ω to the generator, and changed the load Rload from 1000 Ω to 316 Ω. Note that if we needed to drive a 1000 Ω load, the L/C ratio could have been adjusted to match that resistance. Circuit of source and load matched L-C low-pass filter. Figure below shows the “flat” response of the L-C low-pass filter when the source and load impedance match the filter input and output impedances. The response of impedance matched L-C low-pass filter is nearly flat up to the cut-off frequency. The point to make in comparing the response of the unmatched filter (Figure above) to the matched filter (Figure above) is that a variable load on the filter produces a considerable change in voltage. This property is directly applicable to L-C filtered power supplies: the regulation is poor. The power supply voltage changes with a change in load. This is undesirable. This poor load regulation can be mitigated by a swinging choke. This is a choke (inductor) designed to saturate when a large DC current passes through it. By saturate, we mean that the DC current creates so high a level of flux in the magnetic core that the AC component of current cannot vary the flux. Since induced voltage is proportional to dΦ/dt, the inductance is decreased by the heavy DC current. The decrease in inductance decreases reactance XL. Decreasing reactance reduces the voltage drop across the inductor; thus, increasing the voltage at the filter output. This improves the voltage regulation with respect to variable loads. Despite the unintended resonance, low-pass filters made up of capacitors and inductors are frequently used as final stages in AC/DC power supplies to filter the unwanted AC “ripple” voltage out of the DC converted from AC. Why is this, if this particular filter design possesses a potentially troublesome resonant point? The answer lies in the selection of filter component sizes and the frequencies encountered from an AC/DC converter (rectifier). What we’re trying to do in an AC/DC power supply filter is separate DC voltage from a small amount of relatively high-frequency AC voltage. The filter inductors and capacitors are generally quite large (several Henrys for the inductors and thousands of µF for the capacitors is typical), making the filter’s resonant frequency very, very low. DC, of course, has a “frequency” of zero, so there’s no way it can make an LC circuit resonate. The ripple voltage, on the other hand, is a non-sinusoidal AC voltage consisting of a fundamental frequency at least twice the frequency of the converted AC voltage, with harmonics many times that in addition. For plug-in-the-wall power supplies running on 60 Hz AC power (60 Hz in the United States; 50 Hz in Europe), the lowest frequency the filter will ever see is 120 Hz (100 Hz in Europe), which is well above its resonant point. Therefore, the potentially troublesome resonant point in such a filter is completely avoided.
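Two of those claims are easy to verify numerically. In the sketch below, the 100 mH / 1 µF pair is an assumption consistent with the 316 Ω matching resistors mentioned above, and the power-supply component values are likewise assumed for illustration:

```python
import math

# First: the matching impedance of an L-C low-pass filter, Z = sqrt(L/C).
L_filter, C_filter = 100e-3, 1e-6                 # assumed values
Z = math.sqrt(L_filter / C_filter)
print(f"Matching impedance: {Z:.0f} ohms")        # ~316 ohms

# Second: the resonant point of a power-supply filter built with
# deliberately large components sits far below the 120 Hz ripple
# fundamental of a full-wave rectifier on 60 Hz mains.
L_supply, C_supply = 3.0, 2000e-6                 # henrys, farads (assumed)
f_r = 1 / (2 * math.pi * math.sqrt(L_supply * C_supply))
print(f"Supply filter resonance: {f_r:.2f} Hz vs. 120 Hz ripple")   # ~2.05 Hz
```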
The following SPICE analysis calculates the voltage output (AC and DC) for such a filter, with series DC and AC (120 Hz) voltage sources providing a rough approximation of the mixed-frequency output of an AC/DC converter. AC/DC power supply filter provides “ripple free” DC power. With a full 12 volts DC at the load and only 34.12 µV of AC left from the 1 volt AC source imposed across the load, this circuit design proves itself to be a very effective power supply filter. The lesson learned here about resonant effects also applies to the design of high-pass filters using both capacitors and inductors. So long as the desired and undesired frequencies are well to either side of the resonant point, the filter will work OK. But if any signal of significant magnitude close to the resonant frequency is applied to the input of the filter, strange things will happen! Review • Resonant combinations of capacitance and inductance can be employed to create very effective band-pass and band-stop filters without the need for added resistance in a circuit that would diminish the passage of desired frequencies. 9.07: Summary of Filters As lengthy as this chapter has been up to this point, it only begins to scratch the surface of filter design. A quick perusal of any advanced filter design textbook is sufficient to prove my point. The mathematics involved with component selection and frequency response prediction is daunting to say the least—well beyond the scope of the beginning electronics student. It has been my intent here to present the basic principles of filter design with as little math as possible, leaning on the power of the SPICE circuit analysis program to explore filter performance. The benefit of such computer simulation software cannot be overstated, for the beginning student or for the working engineer. Circuit simulation software empowers the student to explore circuit designs far beyond the reach of their math skills. With the ability to generate Bode plots and precise figures, an intuitive understanding of circuit concepts can be attained, which is something often lost when a student is burdened with the task of solving lengthy equations by hand. If you are not familiar with the use of SPICE or other circuit simulation programs, take the time to become so! It will be of great benefit to your study. To see SPICE analyses presented in this book is an aid to understanding circuits, but to actually set up and analyze your own circuit simulations is a much more engaging and worthwhile endeavor as a student.
Suppose we were to wrap a coil of insulated wire around a loop of ferromagnetic material and energize this coil with an AC voltage source: (Figure below (a)) Insulated winding on ferromagnetic loop has inductive reactance, limiting AC current. As an inductor, we would expect this iron-core coil to oppose the applied voltage with its inductive reactance, limiting current through the coil as predicted by the equations XL = 2πfL and I=E/X (or I=E/Z). For the purposes of this example, though, we need to take a more detailed look at the interactions of voltage, current, and magnetic flux in the device. Kirchhoff’s voltage law describes how the algebraic sum of all voltages in a loop must equal zero. In this example, we could apply this fundamental law of electricity to describe the respective voltages of the source and of the inductor coil. Here, as in any one-source, one-load circuit, the voltage dropped across the load must equal the voltage supplied by the source, assuming zero voltage dropped along the resistance of any connecting wires. In other words, the load (inductor coil) must produce an opposing voltage equal in magnitude to the source, in order that it may balance against the source voltage and produce an algebraic loop voltage sum of zero. From where does this opposing voltage arise? If the load were a resistor (Figure above (b)), the voltage drop would originate from electrical energy loss, the “friction” of electrons flowing through the resistance. With a perfect inductor (no resistance in the coil wire), the opposing voltage comes from another mechanism: the reaction to a changing magnetic flux in the iron core. When AC current changes, flux Φ changes. Changing flux induces a counter EMF. Michael Faraday discovered the mathematical relationship between magnetic flux (Φ) and induced voltage with this equation: $e = N\frac{d\Phi}{dt}$ The instantaneous voltage (voltage dropped at any instant in time) across a wire coil is equal to the number of turns of that coil around the core (N) multiplied by the instantaneous rate-of-change in magnetic flux (dΦ/dt) linking with the coil. Graphed, (Figure below) this shows itself as a set of sine waves (assuming a sinusoidal voltage source), the flux wave 90o lagging behind the voltage wave: Magnetic flux lags applied voltage by 90o because flux is proportional to a rate of change, dΦ/dt. Magnetic flux through a ferromagnetic material is analogous to current through a conductor: it must be motivated by some force in order to occur. In electric circuits, this motivating force is voltage (a.k.a. electromotive force, or EMF). In magnetic “circuits,” this motivating force is magnetomotive force, or mmf. Magnetomotive force (mmf) and magnetic flux (Φ) are related to each other by a property of magnetic materials known as reluctance (the latter quantity symbolized by a strange-looking letter “R”): $\text{mmf} = \Phi\Re$ In our example, the mmf required to produce this changing magnetic flux (Φ) must be supplied by a changing current through the coil. Magnetomotive force generated by an electromagnet coil is equal to the amount of current through that coil (in amps) multiplied by the number of turns of that coil around the core (the SI unit for mmf is the amp-turn).
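A numeric check of the two relationships just introduced, e = N(dΦ/dt) and mmf = NI. Every value in this sketch is an assumption chosen only for illustration; none is taken from the text:

```python
import math

N = 100            # turns of wire (assumed)
Phi_peak = 0.002   # peak core flux in webers (assumed)
f = 60.0           # line frequency in hertz (assumed)
I = 0.25           # coil current in amps (assumed)

# For a sinusoidal flux Phi(t) = Phi_peak * sin(2*pi*f*t), the peak of
# dPhi/dt is Phi_peak * 2*pi*f, so the peak induced voltage is:
e_peak = N * Phi_peak * 2 * math.pi * f
print(f"Peak induced voltage: {e_peak:.1f} V")    # ~75.4 V

# Magnetomotive force driving the flux, in amp-turns:
mmf = N * I
print(f"mmf: {mmf:.0f} amp-turns")                # 25 amp-turns
```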
Because the mathematical relationship between magnetic flux and mmf is directly proportional, and because the mathematical relationship between mmf and current is also directly proportional (no rates-of-change present in either equation), the current through the coil will be in-phase with the flux wave as in (Figure below) Magnetic flux, like current, lags applied voltage by 90o. This is why alternating current through an inductor lags the applied voltage waveform by 90o: because that is what is required to produce a changing magnetic flux whose rate-of-change produces an opposing voltage in-phase with the applied voltage. Due to its function in providing magnetizing force (mmf) for the core, this current is sometimes referred to as the magnetizing current. It should be mentioned that the current through an iron-core inductor is not perfectly sinusoidal (sine-wave shaped), due to the nonlinear B/H magnetization curve of iron. In fact, if the inductor is cheaply built, using as little iron as possible, the magnetic flux density might reach high levels (approaching saturation), resulting in a magnetizing current waveform that looks something like Figure below. As flux density approaches saturation, the magnetizing current waveform becomes distorted. When a ferromagnetic material approaches magnetic flux saturation, disproportionately greater levels of magnetic field force (mmf) are required to deliver equal increases in magnetic field flux (Φ). Because mmf is proportional to current through the magnetizing coil (mmf = NI, where “N” is the number of turns of wire in the coil and “I” is the current through it), the large increases of mmf required to supply the needed increases in flux result in large increases in coil current. Thus, coil current increases dramatically at the peaks in order to maintain a flux waveform that isn’t distorted, accounting for the bell-shaped half-cycles of the current waveform in the above plot. The situation is further complicated by energy losses within the iron core. The effects of hysteresis and eddy currents conspire to further distort and complicate the current waveform, making it even less sinusoidal and altering its phase to be lagging slightly less than 90o behind the applied voltage waveform. This coil current resulting from the sum total of all magnetic effects in the core (dΦ/dt magnetization plus hysteresis losses, eddy current losses, etc.) is called the exciting current. The distortion of an iron-core inductor’s exciting current may be minimized if it is designed for and operated at very low flux densities. Generally speaking, this requires a core with large cross-sectional area, which tends to make the inductor bulky and expensive. For the sake of simplicity, though, we’ll assume that our example core is far from saturation and free from all losses, resulting in a perfectly sinusoidal exciting current. As we’ve seen already in the inductors chapter, having a current waveform 90o out of phase with the voltage waveform creates a condition where power is alternately absorbed and returned to the circuit by the inductor. If the inductor is perfect (no wire resistance, no magnetic core losses, etc.), it will dissipate zero power. Let us now consider the same inductor device, except this time with a second coil (Figure below) wrapped around the same iron core. The first coil will be labeled the primary coil, while the second will be labeled the secondary: Ferromagnetic core with primary coil (AC driven) and secondary coil.
Mutual Induction If this secondary coil experiences the same magnetic flux change as the primary (which it should, assuming perfect containment of the magnetic flux through the common core), and has the same number of turns around the core, a voltage of equal magnitude and phase to the applied voltage will be induced along its length. In the following graph, (Figure below) the induced voltage waveform is drawn slightly smaller than the source voltage waveform simply to distinguish one from the other: Open circuited secondary sees the same flux Φ as the primary. Therefore induced secondary voltage es is the same magnitude and phase as the primary voltage ep. This effect is called mutual inductance: the induction of a voltage in one coil in response to a change in current in the other coil. Like normal (self-) inductance, it is measured in the unit of Henrys, but unlike normal inductance, it is symbolized by the capital letter “M” rather than the letter “L”: $e_2 = M\frac{di_1}{dt}$ No current will exist in the secondary coil, since it is open-circuited. However, if we connect a load resistor to it, an alternating current will go through the coil, in-phase with the induced voltage (because the voltage across a resistor and the current through it are always in-phase with each other). (Figure below) Resistive load on secondary has voltage and current in-phase. At first, one might expect this secondary coil current to cause additional magnetic flux in the core. In fact, it does not. If more flux were induced in the core, it would cause more voltage to be induced in the primary coil (remember that e = N(dΦ/dt)). This cannot happen, because the primary coil’s induced voltage must remain at the same magnitude and phase in order to balance with the applied voltage, in accordance with Kirchhoff’s voltage law. Consequently, the magnetic flux in the core cannot be affected by secondary coil current. However, what does change is the amount of mmf in the magnetic circuit. Magnetomotive Force Magnetomotive force is produced any time electrons move through a wire. Usually, this mmf is accompanied by magnetic flux, in accordance with the mmf=ΦR “magnetic Ohm’s Law” equation. In this case, though, additional flux is not permitted, so the only way the secondary coil’s mmf may exist is if a counteracting mmf is generated by the primary coil, of equal magnitude and opposite phase. Indeed, this is what happens, an alternating current forming in the primary coil—180o out of phase with the secondary coil’s current—to generate this counteracting mmf and prevent additional core flux. Polarity marks and current direction arrows have been added to the illustration to clarify phase relations: (Figure below) Flux remains constant with application of a load. However, a counteracting mmf is produced by the loaded secondary. If you find this process a bit confusing, do not worry. Transformer dynamics is a complex subject. What is important to understand is this: when an AC voltage is applied to the primary coil, it creates a magnetic flux in the core, which induces AC voltage in the secondary coil in-phase with the source voltage. Any current drawn through the secondary coil to power a load induces a corresponding current in the primary coil, drawing current from the source. Mutual Inductance and Transformers Notice how the primary coil is behaving as a load with respect to the AC voltage source, and how the secondary coil is behaving as a source with respect to the resistor.
Rather than energy merely being alternately absorbed and returned to the primary coil circuit, energy is now being coupled to the secondary coil where it is delivered to a dissipative (energy-consuming) load. As far as the source “knows,” it’s directly powering the resistor. Of course, there is also an additional primary coil current lagging the applied voltage by 90o, just enough to magnetize the core to create the necessary voltage for balancing against the source (the exciting current). We call this type of device a transformer, because it transforms electrical energy into magnetic energy, then back into electrical energy again. Because its operation depends on electromagnetic induction between two stationary coils and a magnetic flux of changing magnitude and “polarity,” transformers are necessarily AC devices. Its schematic symbol looks like two inductors (coils) sharing the same magnetic core: (Figure below) Schematic symbol for transformer consists of two inductor symbols, separated by lines indicating a ferromagnetic core. The two inductor coils are easily distinguished in the above symbol. The pair of vertical lines represent an iron core common to both inductors. While many transformers have ferromagnetic core materials, there are some that do not, their constituent inductors being magnetically linked together through the air. The following photograph shows a power transformer of the type used in gas-discharge lighting. Here, the two inductor coils can be clearly seen, wound around an iron core. While most transformer designs enclose the coils and core in a metal frame for protection, this particular transformer is open for viewing and so serves its illustrative purpose well: (Figure below) Example of a gas-discharge lighting transformer. Primary and Secondary Windings Both coils of wire can be seen here with copper-colored varnish insulation. The top coil is larger than the bottom coil, having a greater number of “turns” around the core. In transformers, the inductor coils are often referred to as windings, in reference to the manufacturing process where wire is wound around the core material. As modeled in our initial example, the powered inductor of a transformer is called the primary winding, while the unpowered coil is called the secondary winding. In the next photograph, Figure below, a transformer is shown cut in half, exposing the cross-section of the iron core as well as both windings. Like the transformer shown previously, this unit also utilizes primary and secondary windings of differing turn counts. The wire gauge can also be seen to differ between primary and secondary windings. The reason for this disparity in wire gauge will be made clear in the next section of this chapter. Additionally, the iron core can be seen in this photograph to be made of many thin sheets (laminations) rather than a solid piece. The reason for this will also be explained in a later section of this chapter. Transformer cross-section cut shows core and windings. Simple Transformer Action Using SPICE It is easy to demonstrate simple transformer action using SPICE, setting up the primary and secondary windings of the simulated transformer as a pair of “mutual” inductors. (Figure below) The coefficient of magnetic field coupling is given at the end of the “k” line in the SPICE circuit description, this example being set very nearly at perfection (1.000). This coefficient describes how closely “linked” the two inductors are, magnetically.
The better these two inductors are magnetically coupled, the more efficient the energy transfer between them should be. SPICE circuit for coupled inductors. Note: the Rbogus resistors are required to satisfy certain quirks of SPICE. The first breaks the otherwise continuous loop between the voltage source and L1 which would not be permitted by SPICE. The second provides a path to ground (node 0) from the secondary circuit, necessary because SPICE cannot function with any ungrounded circuits. Note that with equal inductances for both windings (100 Henrys each), the AC voltages and currents are nearly equal for the two. The difference between primary and secondary currents is the magnetizing current spoken of earlier: the 90o lagging current necessary to magnetize the core. As is seen here, it is usually very small compared to primary current induced by the load, and so the primary and secondary currents are almost equal. What you are seeing here is quite typical of transformer efficiency. Anything less than 95% efficiency is considered poor for modern power transformer designs, and this transfer of power occurs with no moving parts or other components subject to wear. If we decrease the load resistance so as to draw more current with the same amount of voltage, we see that the current through the primary winding increases in response. Even though the AC power source is not directly connected to the load resistance (rather, it is electromagnetically “coupled”), the amount of current drawn from the source will be almost the same as the amount of current that would be drawn if the load were directly connected to the source. Take a close look at the next two SPICE simulations, showing what happens with different values of load resistors: Notice how the primary current closely follows the secondary current. In our first simulation, both currents were approximately 10 mA, but now they are both around 47 mA. In this second simulation, the two currents are closer to equality, because the magnetizing current remains the same as before while the load current has increased. Note also how the secondary voltage has decreased some with the heavier (greater current) load. Let’s try another simulation with an even lower value of load resistance (15 Ω): Our load current is now 0.13 amps, or 130 mA, which is substantially higher than the last time. The primary current is very close to being the same, but notice how the secondary voltage has fallen well below the primary voltage (1.95 volts versus 10 volts at the primary). The reason for this is an imperfection in our transformer design: because the primary and secondary inductances aren’t perfectly linked (a k factor of 0.999 instead of 1.000), there is “stray” or “leakage” inductance. In other words, some of the magnetic field isn’t linking with the secondary coil, and thus cannot couple energy to it: (Figure below) Leakage inductance is due to magnetic flux not cutting both windings. Consequently, this “leakage” flux merely stores and returns energy to the source circuit via self-inductance, effectively acting as a series impedance in both primary and secondary circuits. Voltage gets dropped across this series impedance, resulting in a reduced load voltage: voltage across the load “sags” as load current increases. (Figure below) Equivalent circuit models leakage inductance as series inductors independent of the “ideal transformer”.
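To put a number on that coupling, here is a short sketch. The 100 H windings and k = 0.999 match the simulation described above; treating the unlinked fraction (1 − k)·L as a rough per-winding leakage inductance is my own simplification, not a formula from the text:

```python
import math

# Mutual inductance of two coupled coils: M = k * sqrt(L1 * L2).
L1 = L2 = 100.0   # henrys, from the simulation described above
k = 0.999         # coupling factor from the simulation

M = k * math.sqrt(L1 * L2)
print(f"Mutual inductance: {M:.1f} H")                     # 99.9 H

# Rough per-winding leakage estimate: the "unlinked" fraction of L
L_leak = (1 - k) * L1
print(f"Approximate leakage inductance: {L_leak:.2f} H")   # 0.10 H
```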
If we change the transformer design to have better magnetic coupling between the primary and secondary coils, the figures for voltage between primary and secondary windings will be much closer to equality again: Here we see that our secondary voltage is back to being equal with the primary, and the secondary current is equal to the primary current as well. Unfortunately, building a real transformer with coupling this complete is very difficult. A compromise solution is to design both primary and secondary coils with less inductance, the strategy being that less inductance overall leads to less “leakage” inductance to cause trouble, for any given degree of magnetic coupling inefficiency. This results in a load voltage that is closer to ideal with the same heavy (high-current) load and the same coupling factor: Simply by using primary and secondary coils of less inductance, the load voltage for this heavy load (high current) has been brought back up to nearly ideal levels (9.977 volts). At this point, one might ask, “If less inductance is all that’s needed to achieve near-ideal performance under heavy load, then why worry about coupling efficiency at all? If it’s impossible to build a transformer with perfect coupling, but easy to design coils with low inductance, then why not just build all transformers with low-inductance coils and have excellent efficiency even with poor magnetic coupling?” The answer to this question is found in another simulation: the same low-inductance transformer, but this time with a lighter load (less current) of 1 kΩ instead of 15 Ω: With lower winding inductances, the primary and secondary voltages are closer to being equal, but the primary and secondary currents are not. In this particular case, the primary current is 28.35 mA while the secondary current is only 9.990 mA: almost three times as much current in the primary as the secondary. Why is this? With less inductance in the primary winding, there is less inductive reactance, and consequently a much larger magnetizing current. A substantial amount of the current through the primary winding merely works to magnetize the core rather than transfer useful energy to the secondary winding and load. An ideal transformer with identical primary and secondary windings would manifest equal voltage and current in both sets of windings for any load condition. In a perfect world, transformers would transfer electrical power from primary to secondary as smoothly as though the load were directly connected to the primary power source, with no transformer there at all. However, you can see that this ideal goal can only be met if there is perfect coupling of magnetic flux between primary and secondary windings. Since this is impossible to achieve, transformers must be designed to operate within certain expected ranges of voltages and loads in order to perform as close to ideal as possible. For now, the most important thing to keep in mind is a transformer’s basic operating principle: the transfer of power from the primary to the secondary circuit via electromagnetic coupling. Review • Mutual inductance is where the magnetic flux of two or more inductors are “linked” so that voltage is induced in one coil proportional to the rate-of-change of current in another. • A transformer is a device made of two or more inductors, one of which is powered by AC, inducing an AC voltage across the second inductor.
If the second inductor is connected to a load, power will be electromagnetically coupled from the first inductor’s power source to that load. • The powered inductor in a transformer is called the primary winding. The unpowered inductor in a transformer is called the secondary winding. • Magnetic flux in the core (Φ) lags 90o behind the source voltage waveform. The current drawn by the primary coil from the source to produce this flux is called the magnetizing current, and it also lags the supply voltage by 90o. • Total primary current in an unloaded transformer is called the exciting current, and is comprised of magnetizing current plus any additional current necessary to overcome core losses. It is never perfectly sinusoidal in a real transformer, but may be made more so if the transformer is designed and operated so that magnetic flux density is kept to a minimum. • Core flux induces a voltage in any coil wrapped around the core. The induced voltage(s) are ideally in-phase with the primary winding source voltage and share the same waveshape. • Any current drawn through the secondary winding by a load will be “reflected” to the primary winding and drawn from the voltage source, as if the source were directly powering a similar load.
So far, we’ve observed simulations of transformers where the primary and secondary windings were of identical inductance, giving approximately equal voltage and current levels in both circuits. Equality of voltage and current between the primary and secondary sides of a transformer, however, is not the norm for all transformers. If the inductances of the two windings are not equal, something interesting happens: Notice how the secondary voltage is approximately ten times less than the primary voltage (0.9962 volts compared to 10 volts), while the secondary current is approximately ten times greater (0.9962 mA compared to 0.09975 mA). What we have here is a device that steps voltage down by a factor of ten and current up by a factor of ten: (Figure below) Turns ratio of 10:1 yields 10:1 primary:secondary voltage ratio and 1:10 primary:secondary current ratio. This is a very useful device, indeed. With it, we can easily multiply or divide voltage and current in AC circuits. Indeed, the transformer has made long-distance transmission of electric power a practical reality, as AC voltage can be “stepped up” and current “stepped down” for reduced wire resistance power losses along power lines connecting generating stations with loads. At either end (both at the generator and at the loads), voltage levels are reduced by transformers for safer operation and less expensive equipment. A transformer that increases voltage from primary to secondary (more secondary winding turns than primary winding turns) is called a step-up transformer. Conversely, a transformer designed to do just the opposite is called a step-down transformer. Let’s re-examine a photograph shown in the previous section: (Figure below) Transformer cross-section showing primary and secondary windings is a few inches tall (approximately 10 cm). This is a step-down transformer, as evidenced by the high turn count of the primary winding and the low turn count of the secondary. As a step-down unit, this transformer converts high-voltage, low-current power into low-voltage, high-current power. The larger-gauge wire used in the secondary winding is necessary due to the increase in current. The primary winding, which doesn’t have to conduct as much current, may be made of smaller-gauge wire. In case you were wondering, it is possible to operate either of these transformer types backwards (powering the secondary winding with an AC source and letting the primary winding power a load) to perform the opposite function: a step-up can function as a step-down and vice versa. However, as we saw in the first section of this chapter, efficient operation of a transformer requires that the individual winding inductances be engineered for specific operating ranges of voltage and current, so if a transformer is to be used “backwards” like this, it must be employed within the original design parameters of voltage and current for each winding, lest it prove to be inefficient (or lest it be damaged by excessive voltage or current!). Transformers are often constructed in such a way that it is not obvious which wires lead to the primary winding and which lead to the secondary. One convention used in the electric power industry to help alleviate confusion is the use of “H” designations for the higher-voltage winding (the primary winding in a step-down unit; the secondary winding in a step-up) and “X” designations for the lower-voltage winding. Therefore, a simple power transformer will have wires labeled “H1”, “H2”, “X1”, and “X2”.
There is usually significance to the numbering of the wires (H1 versus H2, etc.), which we’ll explore a little later in this chapter. The fact that voltage and current get “stepped” in opposite directions (one up, the other down) makes perfect sense when you recall that power is equal to voltage times current, and realize that transformers cannot produce power, only convert it. Any device that could output more power than it took in would violate the Law of Energy Conservation in physics, namely that energy cannot be created or destroyed, only converted. As with the first transformer example we looked at, power transfer efficiency is very good from the primary to the secondary sides of the device. The practical significance of this is made more apparent when an alternative is considered: before the advent of efficient transformers, voltage/current level conversion could only be achieved through the use of motor/generator sets. A drawing of a motor/generator set reveals the basic principle involved: (Figure below) Motor generator illustrates the basic principle of the transformer. In such a machine, a motor is mechanically coupled to a generator, the generator designed to produce the desired levels of voltage and current at the rotating speed of the motor. While both motors and generators are fairly efficient devices, the use of both in this fashion compounds their inefficiencies so that the overall efficiency is in the range of 90% or less. Furthermore, because motor/generator sets obviously require moving parts, mechanical wear and balance are factors influencing both service life and performance. Transformers, on the other hand, are able to convert levels of AC voltage and current at very high efficiencies with no moving parts, making possible the widespread distribution and use of electric power we take for granted. In all fairness it should be noted that motor/generator sets have not necessarily been rendered obsolete by transformers for all applications. While transformers are clearly superior to motor/generator sets for AC voltage and current level conversion, they cannot convert one frequency of AC power to another, or (by themselves) convert DC to AC or vice versa. Motor/generator sets can do all these things with relative simplicity, albeit with the limitations of efficiency and mechanical factors already described. Motor/generator sets also have the unique property of kinetic energy storage: that is, if the motor’s power supply is momentarily interrupted for any reason, its angular momentum (the inertia of that rotating mass) will maintain rotation of the generator for a short duration, thus isolating any loads powered by the generator from “glitches” in the main power system. Looking closely at the numbers in the SPICE analysis, we should see a correspondence between the transformer’s ratio and the two inductances. Notice how the primary inductor (l1) has 100 times more inductance than the secondary inductor (10000 H versus 100 H), and that the measured voltage step-down ratio was 10 to 1. The winding with more inductance will have higher voltage and less current than the other. Since the two inductors are wound around the same core material in the transformer (for the most efficient magnetic coupling between the two), the parameters affecting inductance for the two coils are equal except for the number of turns in each coil.
If we take another look at our inductance formula, we see that inductance is proportional to the square of the number of coil turns:

L = N²μA / l

where L is the inductance, N is the number of turns, μ is the permeability of the core material, A is the core cross-sectional area, and l is the length of the magnetic path. So, it should be apparent that our two inductors in the last SPICE transformer example circuit—with inductance ratios of 100:1—should have coil turn ratios of 10:1, because 10 squared equals 100. This works out to be the same ratio we found between primary and secondary voltages and currents (10:1), so we can say as a rule that the voltage and current transformation ratio is equal to the ratio of winding turns between primary and secondary. Step-down transformer: (many turns : few turns). The step-up/step-down effect of coil turn ratios in a transformer (Figure above) is analogous to gear tooth ratios in mechanical gear systems, transforming values of speed and torque in much the same way: (Figure below) Torque-reducing gear train steps torque down, while stepping speed up. Step-up and step-down transformers for power distribution purposes can be gigantic in proportion to the power transformers previously shown, some units standing as tall as a home. The following photograph shows a substation transformer standing about twelve feet tall: (Figure below) Substation transformer. Review • Transformers “step up” or “step down” voltage according to the ratios of primary to secondary wire turns. • A transformer designed to increase voltage from primary to secondary is called a step-up transformer. A transformer designed to reduce voltage from primary to secondary is called a step-down transformer. • The transformation ratio of a transformer will be equal to the square root of its primary to secondary inductance (L) ratio.
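As a quick numeric check of this rule, here is a minimal Python sketch (not part of the original lesson) using the 10,000 H and 100 H inductances from the SPICE example. It confirms that a 100:1 inductance ratio corresponds to a 10:1 turns ratio, and therefore a 10:1 voltage ratio and a 1:10 current ratio; the 1 kΩ load value is inferred from the simulated figures (0.9962 V at 0.9962 mA):

import math

# Winding inductances from the SPICE example (henrys)
L_primary = 10000.0
L_secondary = 100.0

# Turns ratio is the square root of the inductance ratio, since
# inductance is proportional to the square of the number of turns.
turns_ratio = math.sqrt(L_primary / L_secondary)      # 10.0

V_primary = 10.0                        # source voltage in the simulation
V_secondary = V_primary / turns_ratio   # voltage steps down by the turns ratio

I_secondary = V_secondary / 1000.0      # current through the implied 1 kohm load
I_primary = I_secondary / turns_ratio   # current steps down toward the primary

print(f"turns ratio       = {turns_ratio:.1f}:1")
print(f"secondary voltage = {V_secondary:.2f} V")        # ~1 V (0.9962 V in SPICE)
print(f"primary current   = {I_primary * 1e3:.3f} mA")   # ~0.1 mA (0.09975 mA in SPICE)

The small discrepancies between these ideal figures and the SPICE results come from the less-than-perfect coupling of the simulated windings.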
Aside from the ability to easily convert between different levels of voltage and current in AC and DC circuits, transformers also provide an extremely useful feature called isolation, which is the ability to couple one circuit to another without the use of direct wire connections. We can demonstrate an application of this effect with another SPICE simulation: this time showing “ground” connections for the two circuits, imposing a high DC voltage between one circuit and ground through the use of an additional voltage source: (Figure below) Transformer isolates 10 Vac at V1 from 250 VDC at V2. SPICE shows the 250 volts DC being impressed upon the secondary circuit elements with respect to ground (Figure above), but as you can see there is no effect on the primary circuit (zero DC voltage) at nodes 1 and 2, and the transformation of AC power from primary to secondary circuits remains the same as before. The impressed voltage in this example is often called a common-mode voltage because it is seen at more than one point in the circuit with reference to the common point of ground. The transformer isolates the common-mode voltage so that it is not impressed upon the primary circuit at all, but rather isolated to the secondary side. For the record, it does not matter that the common-mode voltage is DC, either. It could be AC, even at a different frequency, and the transformer would isolate it from the primary circuit all the same. There are applications where electrical isolation is needed between two AC circuits without any transformation of voltage or current levels. In these instances, transformers called isolation transformers, having 1:1 transformation ratios, are used. A benchtop isolation transformer is shown in Figure below. Isolation transformer isolates power out from the power line. Review • By being able to transfer power from one circuit to another without the use of interconnecting conductors between the two circuits, transformers provide the useful feature of electrical isolation. • Transformers designed to provide electrical isolation without stepping voltage and current either up or down are called isolation transformers. 10.04: Phasing Since transformers are essentially AC devices, we need to be aware of the phase relationships between the primary and secondary circuits. Using our SPICE example from before, we can plot the waveshapes (Figure below) for the primary and secondary circuits and see the phase relations for ourselves: Secondary voltage V(3,5) is in-phase with primary voltage V(2), and stepped down by a factor of ten. In going from primary, V(2), to secondary, V(3,5), the voltage was stepped down by a factor of ten (Figure above), and the current was stepped up by a factor of ten. (Figure below) Both current (Figure below) and voltage (Figure above) waveforms are in-phase in going from primary to secondary. Primary and secondary currents are in-phase. Secondary current is stepped up by a factor of ten. It would appear that both voltage and current for the two transformer windings are in-phase with each other, at least for our resistive load. This is simple enough, but it would be nice to know which way we should connect a transformer in order to ensure the proper phase relationships are kept. After all, a transformer is nothing more than a set of magnetically-linked inductors, and inductors don’t usually come with polarity markings of any kind.
If we were to look at an unmarked transformer, we would have no way of knowing which way to hook it up to a circuit to get in-phase (or 180o out-of-phase) voltage and current: (Figure below) As a practical matter, the polarity of a transformer can be ambiguous. Since this is a practical concern, transformer manufacturers have come up with a sort of polarity marking standard to denote phase relationships. It is called the dot convention, and is nothing more than a dot placed next to each corresponding leg of a transformer winding: (Figure below) A pair of dots indicates like polarity. Typically, the transformer will come with some kind of schematic diagram labeling the wire leads for primary and secondary windings. On the diagram will be a pair of dots similar to what is seen above. Sometimes dots will be omitted, but when “H” and “X” labels are used to label transformer winding wires, the subscript numbers are supposed to represent winding polarity. The “1” wires (H1 and X1) represent where the polarity-marking dots would normally be placed. The similar placement of these dots next to the top ends of the primary and secondary windings tells us that whatever instantaneous voltage polarity is seen across the primary winding will be the same as that across the secondary winding. In other words, the phase shift from primary to secondary will be zero degrees. On the other hand, if the dots on each winding of the transformer do not match up, the phase shift will be 180o between primary and secondary, like this: (Figure below) Out of phase: primary red to dot, secondary black to dot. Of course, the dot convention only tells you which end of each winding is which, relative to the other winding(s). If you want to reverse the phase relationship yourself, all you have to do is swap the winding connections like this: (Figure below) In phase: primary red to dot, secondary red to dot. Review • The phase relationships for voltage and current between primary and secondary circuits of a transformer are direct: ideally, zero phase shift. • The dot convention is a type of polarity marking for transformer windings showing which end of the winding is which, relative to the other windings.
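To put numbers to the dot convention, here is a small Python sketch (an idealized model of my own, ignoring magnetizing current and losses) showing that swapping one winding’s connections simply negates the secondary voltage—a 180o phase shift. The 170 V peak (roughly 120 V RMS) primary and 10:1 ratio are assumed values:

import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 5)             # a few instants of one 60 Hz cycle
v_primary = 170.0 * np.sin(2 * np.pi * 60 * t)  # hypothetical ~120 V RMS primary

turns_ratio = 10.0   # step-down ratio, as in the earlier example

v_sec_matched = v_primary / turns_ratio    # dots matched: secondary in phase
v_sec_swapped = -v_primary / turns_ratio   # connections swapped: 180 degrees out

for vp, vs_m, vs_s in zip(v_primary, v_sec_matched, v_sec_swapped):
    print(f"primary {vp:+8.2f} V   matched {vs_m:+7.2f} V   swapped {vs_s:+7.2f} V")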
Transformers are very versatile devices. The basic concept of energy transfer between mutual inductors is useful enough between a single primary and single secondary coil, but transformers don’t have to be made with just two sets of windings. Consider this transformer circuit: (Figure below) Transformer with multiple secondaries provides multiple output voltages. Here, three inductor coils share a common magnetic core, magnetically “coupling” or “linking” them together. The relationship of winding turn ratios and voltage ratios seen with a single pair of mutual inductors still holds true here for multiple pairs of coils. It is entirely possible to assemble a transformer such as the one above (one primary winding, two secondary windings) in which one secondary winding is a step-down and the other is a step-up. In fact, this design of transformer was quite common in vacuum tube power supply circuits, which were required to supply low voltage for the tubes’ filaments (typically 6 or 12 volts) and high voltage for the tubes’ plates (several hundred volts) from a nominal primary voltage of 110 volts AC. Not only are voltages and currents of completely different magnitudes possible with such a transformer, but all circuits are electrically isolated from one another. Photograph of multiple-winding transformer with six windings, a primary and five secondaries. The transformer in Figure above is intended to provide both high and low voltages necessary in an electronic system using vacuum tubes. Low voltage is required to power the filaments of vacuum tubes, while high voltage is required to create the potential difference between the plate and cathode elements of each tube. One transformer with multiple windings suffices elegantly to provide all the necessary voltage levels from a single 115 V source. The wires for this transformer (15 of them!) are not shown in the photograph, being hidden from view. If electrical isolation between secondary circuits is not of great importance, a similar effect can be obtained by “tapping” a single secondary winding at multiple points along its length, like Figure below. A single tapped secondary provides multiple voltages. A tap is nothing more than a wire connection made at some point on a winding between the very ends. Not surprisingly, the winding turn/voltage magnitude relationship of a normal transformer holds true for all tapped segments of windings. This fact can be exploited to produce a transformer capable of multiple ratios: (Figure below) A tapped secondary using a switch to select one of many possible voltages. Carrying the concept of winding taps further, we end up with a “variable transformer,” where a sliding contact is moved along the length of an exposed secondary winding, able to connect with it at any point along its length. The effect is equivalent to having a winding tap at every turn of the winding, and a switch with poles at every tap position: (Figure below) A sliding contact on the secondary continuously varies the secondary voltage. One consumer application of the variable transformer is in speed controls for model train sets, especially the train sets of the 1950’s and 1960’s. These transformers were essentially step-down units, the highest voltage obtainable from the secondary winding being substantially less than the primary voltage of 110 to 120 volts AC. The variable-sweep contact provided a simple means of voltage control with little wasted power, much more efficient than control using a variable resistor!
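The turn/voltage proportion for tapped segments lends itself to a short sketch. The winding counts and tap positions in this Python example are hypothetical, chosen only to illustrate the arithmetic:

# Hypothetical multi-tap secondary: 120 V primary with 500 primary turns
V_primary = 120.0
N_primary = 500

# Tap positions, as turn counts measured from one end of the secondary
taps = {"tap A": 50, "tap B": 125, "tap C": 250, "full winding": 500}

for name, n_turns in taps.items():
    # Each tapped segment obeys the same turns-to-voltage proportion
    v_tap = V_primary * n_turns / N_primary
    print(f"{name:>12}: {n_turns:3d} turns -> {v_tap:6.1f} V")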
Moving-slide contacts are impractical in large industrial power transformer designs, but multi-pole switches and winding taps are common for voltage adjustment. Adjustments need to be made periodically in power systems to accommodate changes in loads over months or years, and these switching circuits provide a convenient means. Typically, such “tap switches” are not engineered to handle full-load current, but must be actuated only when the transformer has been de-energized (no power). Seeing as how we can tap any transformer winding to obtain the equivalent of several windings (albeit with loss of electrical isolation between them), it makes sense that it should be possible to forego electrical isolation altogether and build a transformer from a single winding. Indeed, this is possible, and the resulting device is called an autotransformer: (Figure below) This autotransformer steps voltage up with a single tapped winding, saving copper, sacrificing isolation. The autotransformer depicted above performs a voltage step-up function. A step-down autotransformer would look something like Figure below. This autotransformer steps voltage down with a single copper-saving tapped winding. Autotransformers find popular use in applications requiring a slight boost or reduction in voltage to a load. The alternative with a normal (isolated) transformer would be to either have just the right primary/secondary winding ratio made for the job or use a step-down configuration with the secondary winding connected in series-aiding (“boosting”) or series-opposing (“bucking”) fashion. Primary, secondary, and load voltages are given to illustrate how this would work. First, the “boosting” configuration. In Figure below the secondary coil’s polarity is oriented so that its voltage directly adds to the primary voltage. Ordinary transformer wired as an autotransformer to boost the line voltage. Next, the “bucking” configuration. In Figure below the secondary coil’s polarity is oriented so that its voltage directly subtracts from the primary voltage: Ordinary transformer wired as an autotransformer to buck the line voltage down. The prime advantage of an autotransformer is that the same boosting or bucking function is obtained with only a single winding, making it cheaper and lighter to manufacture than a regular (isolating) transformer having both primary and secondary windings. Like regular transformers, autotransformer windings can be tapped to provide variations in ratio. Additionally, they can be made continuously variable with a sliding contact to tap the winding at any point along its length. The latter configuration is popular enough to have earned itself its own name: the Variac. (Figure below) A variac is an autotransformer with a sliding tap. Small variacs for benchtop use are popular pieces of equipment for the electronics experimenter, being able to step household AC voltage down (or sometimes up as well) with a wide, fine range of control by a simple twist of a knob. Review • Transformers can be equipped with more than just a single primary and single secondary winding pair. This allows for multiple step-up and/or step-down ratios in the same device. • Transformer windings can also be “tapped:” that is, intersected at many points to segment a single winding into sections. • Variable transformers can be made by providing a movable arm that sweeps across the length of a winding, making contact with the winding at any point along its length.
The winding, of course, has to be bare (no insulation) in the area where the arm sweeps. • An autotransformer is a single, tapped inductor coil used to step up or step down voltage like a transformer, except without providing electrical isolation. • A Variac is a variable autotransformer.
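The boosting and bucking arithmetic from the autotransformer discussion above reduces to simple addition and subtraction of winding voltages. The 120 V line and 12 V secondary in this Python sketch are hypothetical values, not taken from the figures:

# Hypothetical ordinary transformer re-wired as an autotransformer:
V_line = 120.0       # line (primary) voltage
V_secondary = 12.0   # secondary winding voltage

V_boosting = V_line + V_secondary   # series-aiding connection adds
V_bucking = V_line - V_secondary    # series-opposing connection subtracts

print(f"boosting: {V_boosting:.0f} V to the load")   # 132 V
print(f"bucking:  {V_bucking:.0f} V to the load")    # 108 V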
Voltage Regulation Formula The measure of how well a power transformer maintains constant secondary voltage over a range of load currents is called the transformer’s voltage regulation. It can be calculated from the following formula:

Regulation percentage = (Eno-load − Efull-load) / Efull-load × 100%

What is “Full Load”? “Full-load” means the point at which the transformer is operating at maximum permissible secondary current. This operating point will be determined primarily by the winding wire size (ampacity) and the method of transformer cooling. Taking our first SPICE transformer simulation as an example, let’s compare the output voltage with a 1 kΩ load versus a 200 Ω load (assuming that the 200 Ω load will be our “full load” condition). Recall if you will that our constant primary voltage was 10.00 volts AC: Notice how the output voltage decreases as the load gets heavier (more current). Now let’s take that same transformer circuit and place a load resistance of extremely high magnitude across the secondary winding to simulate a “no-load” condition: (See the “transformer” SPICE listing) So, we see that our output (secondary) voltage spans a range of 9.990 volts at (virtually) no load and 9.348 volts at the point we decided to call “full load.” Calculating voltage regulation with these figures, we get:

Regulation percentage = (9.990 V − 9.348 V) / 9.348 V × 100% = 6.868%

Incidentally, this would be considered rather poor (or “loose”) regulation for a power transformer. Powering a simple resistive load like this, a good power transformer should exhibit a regulation percentage of less than 3%. Inductive loads tend to create a condition of worse voltage regulation, so this analysis with purely resistive loads was a “best-case” condition. There are some applications, however, where poor regulation is actually desired. One such case is in discharge lighting, where a step-up transformer is required to initially generate a high voltage (necessary to “ignite” the lamps), then the voltage is expected to drop off once the lamp begins to draw current. This is because discharge lamps’ voltage requirements tend to be much lower after a current has been established through the arc path. In this case, a step-up transformer with poor voltage regulation suffices nicely for the task of conditioning power to the lamp. Another application is in current control for AC arc welders, which are nothing more than step-down transformers supplying low-voltage, high-current power for the welding process. A high voltage is desired to assist in “striking” the arc (getting it started), but like the discharge lamp, an arc doesn’t require as much voltage to sustain itself once the air has been heated to the point of ionization. Thus, a decrease of secondary voltage under high load current would be a good thing. Some arc welder designs provide arc current adjustment by means of a movable iron core in the transformer, cranked in or out of the winding assembly by the operator. Moving the iron slug away from the windings reduces the strength of magnetic coupling between the windings, which diminishes no-load secondary voltage and makes for poorer voltage regulation. No exposition on transformer regulation could be called complete without mention of an unusual device called a ferroresonant transformer. “Ferroresonance” is a phenomenon associated with the behavior of iron cores while operating near a point of magnetic saturation (where the core is so strongly magnetized that further increases in winding current result in little or no increase in magnetic flux).
While being somewhat difficult to describe without going deep into electromagnetic theory, the ferroresonant transformer is a power transformer engineered to operate in a condition of persistent core saturation. That is, its iron core is “stuffed full” of magnetic lines of flux for a large portion of the AC cycle so that variations in supply voltage (primary winding current) have little effect on the core’s magnetic flux density, which means the secondary winding outputs a nearly constant voltage despite significant variations in supply (primary winding) voltage. Normally, core saturation in a transformer results in distortion of the sinewave shape, and the ferroresonant transformer is no exception. To combat this side effect, ferroresonant transformers have an auxiliary secondary winding paralleled with one or more capacitors, forming a resonant circuit tuned to the power supply frequency. This “tank circuit” serves as a filter to reject harmonics created by the core saturation, and provides the added benefit of storing energy in the form of AC oscillations, which is available for sustaining output winding voltage for brief periods of input voltage loss (milliseconds’ worth of time, but certainly better than nothing). (Figure below) Ferroresonant transformer provides voltage regulation of the output. In addition to blocking harmonics created by the saturated core, this resonant circuit also “filters out” harmonic frequencies generated by nonlinear (switching) loads in the secondary winding circuit and any harmonics present in the source voltage, providing “clean” power to the load. Ferroresonant transformers offer several features useful in AC power conditioning: constant output voltage given substantial variations in input voltage, harmonic filtering between the power source and the load, and the ability to “ride through” brief losses in power by keeping a reserve of energy in its resonant tank circuit. These transformers are also highly tolerant of excessive loading and transient (momentary) voltage surges. They are so tolerant, in fact, that some may be briefly paralleled with unsynchronized AC power sources, allowing a load to be switched from one source of power to another in a “make-before-break” fashion with no interruption of power on the secondary side! Known Disadvantages of Ferroresonant Transformers Unfortunately, these devices have equally noteworthy disadvantages: they waste a lot of energy (due to hysteresis losses in the saturated core), generating significant heat in the process, and are intolerant of frequency variations, which means they don’t work very well when powered by small engine-driven generators having poor speed regulation. Voltages produced in the resonant winding/capacitor circuit tend to be very high, necessitating expensive capacitors and presenting the service technician with very dangerous working voltages. Some applications, though, may prioritize the ferroresonant transformer’s advantages over its disadvantages. Semiconductor circuits exist to “condition” AC power as an alternative to ferroresonant devices, but none can compete with this transformer in terms of sheer simplicity. Review • Voltage regulation is the measure of how well a power transformer can maintain constant secondary voltage given a constant primary voltage and wide variance in load current. The lower the percentage (closer to zero), the more stable the secondary voltage and the better the regulation it will provide. 
• A ferroresonant transformer is a special transformer designed to regulate voltage at a stable level despite wide variation in input voltage.
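The regulation arithmetic from this section, reduced to a minimal Python sketch using the no-load and full-load secondary voltages from the SPICE runs:

# Secondary voltages from the SPICE simulations above
E_no_load = 9.990     # volts, with a near-infinite load resistance
E_full_load = 9.348   # volts, with the 200 ohm "full load"

regulation_pct = (E_no_load - E_full_load) / E_full_load * 100.0
print(f"voltage regulation = {regulation_pct:.3f} %")   # about 6.868 %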
Impedance matching Because transformers can step voltage and current to different levels, and because power is transferred equivalently between primary and secondary windings, they can be used to “convert” the impedance of a load to a different level. That last phrase deserves some explanation, so let’s investigate what it means. The purpose of a load (usually) is to do something productive with the power it dissipates. In the case of a resistive heating element, the practical purpose for the power dissipated is to heat something up. Loads are engineered to safely dissipate a certain maximum amount of power, but two loads of equal power rating are not necessarily identical. Consider these two 1000 watt resistive heating elements: (Figure below) Heating elements dissipate 1000 watts, at different voltage and current ratings. Both heaters dissipate exactly 1000 watts of power, but they do so at different voltage and current levels (either 250 volts and 4 amps, or 125 volts and 8 amps). Using Ohm’s Law to determine the necessary resistance of these heating elements (R=E/I), we arrive at figures of 62.5 Ω and 15.625 Ω, respectively. If these are AC loads, we might refer to their opposition to current in terms of impedance rather than plain resistance, although in this case that’s all they’re composed of (no reactance). The 250 volt heater would be said to be a higher impedance load than the 125 volt heater. If we desired to operate the 250 volt heater element directly on a 125 volt power system, we would end up being disappointed. With 62.5 Ω of impedance (resistance), the current would only be 2 amps (I=E/R; 125/62.5), and the power dissipation would only be 250 watts (P=IE; 125 x 2), or one-quarter of its rated power. The impedance of the heater and the voltage of our source would be mismatched, and we couldn’t obtain the full rated power dissipation from the heater. All hope is not lost, though. With a step-up transformer, we could operate the 250 volt heater element on the 125 volt power system like Figure below. Step-up transformer operates 1000 watt 250 V heater from 125 V power source. The ratio of the transformer’s windings provides the voltage step-up and current step-down we need for the otherwise mismatched load to operate properly on this system. Take a close look at the primary circuit figures: 125 volts at 8 amps. As far as the power supply “knows,” it’s powering a 15.625 Ω (R=E/I) load at 125 volts, not a 62.5 Ω load! The voltage and current figures for the primary winding are indicative of 15.625 Ω load impedance, not the actual 62.5 Ω of the load itself. In other words, not only has our step-up transformer transformed voltage and current, but it has transformed impedance as well. The transformation ratio of impedance is the square of the voltage/current transformation ratio, the same as the winding inductance ratio:

Zprimary / Zsecondary = (Eprimary / Esecondary)² = Lprimary / Lsecondary

This concurs with our example of the 2:1 step-up transformer and the impedance ratio of 62.5 Ω to 15.625 Ω (a 4:1 ratio, which is 2:1 squared). Impedance transformation is a highly useful ability of transformers, for it allows a load to dissipate its full rated power even if the power system is not at the proper voltage to directly do so. Recall from our study of network analysis the Maximum Power Transfer Theorem, which states that the maximum amount of power will be dissipated by a load resistance when that load resistance is equal to the Thevenin/Norton resistance of the network supplying the power.
Substitute the word “impedance” for “resistance” in that definition and you have the AC version of that Theorem. If we’re trying to obtain theoretical maximum power dissipation from a load, we must be able to properly match the load impedance and source (Thevenin/Norton) impedance together. This is generally more of a concern in specialized electric circuits such as radio transmitter/antenna and audio amplifier/speaker systems. Let’s take an audio amplifier system and see how it works: (Figure below) Amplifier with impedance of 500 Ω drives 8 Ω at much less than maximum power. With an internal impedance of 500 Ω, the amplifier can only deliver full power to a load (speaker) also having 500 Ω of impedance. Such a load would drop higher voltage and draw less current than an 8 Ω speaker dissipating the same amount of power. If an 8 Ω speaker were connected directly to the 500 Ω amplifier as shown, the impedance mismatch would result in very poor (low peak power) performance. Additionally, the amplifier would tend to dissipate more than its fair share of power in the form of heat trying to drive the low impedance speaker. To make this system work better, we can use a transformer to match these mismatched impedances. Since we’re going from a high impedance (high voltage, low current) supply to a low impedance (low voltage, high current) load, we’ll need to use a step-down transformer: (Figure below) Impedance matching transformer matches 500 Ω amplifier to 8 Ω speaker for maximum efficiency. To obtain an impedance transformation ratio of 500:8, we would need a winding ratio equal to the square root of 500:8 (the square root of 62.5:1, or 7.906:1). With such a transformer in place, the speaker will load the amplifier to just the right degree, drawing power at the correct voltage and current levels to satisfy the Maximum Power Transfer Theorem and make for the most efficient power delivery to the load. The use of a transformer in this capacity is called impedance matching. Anyone who has ridden a multi-speed bicycle can intuitively understand the principle of impedance matching. A human’s legs will produce maximum power when spinning the bicycle crank at a particular speed (about 60 to 90 revolutions per minute). Above or below that rotational speed, human leg muscles are less efficient at generating power. The purpose of the bicycle’s “gears” is to impedance-match the rider’s legs to the riding conditions so that they always spin the crank at the optimum speed. If the rider attempts to start moving while the bicycle is shifted into its “top” gear, he or she will find it very difficult to get moving. Is it because the rider is weak? No, it’s because the high step-up ratio of the bicycle’s chain and sprockets in that top gear presents a mismatch between the conditions (lots of inertia to overcome) and their legs (needing to spin at 60-90 RPM for maximum power output). On the other hand, selecting a gear that is too low will enable the rider to get moving immediately, but limit the top speed they will be able to attain. Again, is the lack of speed an indication of weakness in the bicyclist’s legs? No, it’s because the lower speed ratio of the selected gear creates another type of mismatch between the conditions (low load) and the rider’s legs (losing power if spinning faster than 90 RPM). It is much the same with electric power sources and loads: there must be an impedance match for maximum system efficiency.
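As a quick numeric check of the amplifier/speaker figures above, here is a short Python sketch; the only relationship it uses is the one just stated, that the required winding ratio equals the square root of the impedance ratio:

import math

Z_amplifier = 500.0   # amplifier internal impedance (ohms)
Z_speaker = 8.0       # speaker impedance (ohms)

impedance_ratio = Z_amplifier / Z_speaker    # 62.5
winding_ratio = math.sqrt(impedance_ratio)   # about 7.906

print(f"impedance ratio = {impedance_ratio:.1f}:1")
print(f"winding ratio   = {winding_ratio:.3f}:1")   # matches the 7.906:1 figure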
In AC circuits, transformers perform the same matching function as the sprockets and chain (“gears”) on a bicycle to match otherwise mismatched sources and loads. Impedance matching transformers are not fundamentally different from any other type of transformer in construction or appearance. A small impedance-matching transformer (about two centimeters in width) for audio-frequency applications is shown in the following photograph: (Figure below) Audio frequency impedance matching transformer. Another impedance-matching transformer can be seen on this printed circuit board, in the upper right corner, to the immediate left of resistors R2 and R1. It is labeled “T1”: (Figure below) Printed circuit board mounted audio impedance matching transformer, top right. Potential transformers Transformers can also be used in electrical instrumentation systems. Due to transformers’ ability to step up or step down voltage and current, and the electrical isolation they provide, they can serve as a way of connecting electrical instrumentation to high-voltage, high-current power systems. Suppose we wanted to accurately measure the voltage of a 13.8 kV power system (a very common power distribution voltage in American industry): (Figure below) Direct measurement of high voltage by a voltmeter is a potential safety hazard. Designing, installing, and maintaining a voltmeter capable of directly measuring 13,800 volts AC would be no easy task. The safety hazard alone of bringing 13.8 kV conductors into an instrument panel would be severe, not to mention the design of the voltmeter itself. However, by using a precision step-down transformer, we can reduce the 13.8 kV down to a safe level of voltage at a constant ratio, and isolate it from the instrument connections, adding an additional level of safety to the metering system: (Figure below) Instrumentation application: “Potential transformer” precisely scales dangerous high voltage to a safe value applicable to a conventional voltmeter. Now the voltmeter reads a precise fraction, or ratio, of the actual system voltage, its scale set to read as though it were measuring the voltage directly. The transformer keeps the instrument voltage at a safe level and electrically isolates it from the power system, so there is no direct connection between the power lines and the instrument or instrument wiring. When used in this capacity, the transformer is called a Potential Transformer, or simply PT. Potential transformers are designed to provide as accurate a voltage step-down ratio as possible. To aid in precise voltage regulation, loading is kept to a minimum: the voltmeter is made to have high input impedance so as to draw as little current from the PT as possible. As you can see, a fuse has been connected in series with the PT’s primary winding, for safety and ease of disconnecting the PT from the circuit. A standard secondary voltage for a PT is 120 volts AC, for full-rated power line voltage. The standard voltmeter range to accompany a PT is 150 volts, full-scale. PTs with custom winding ratios can be manufactured to suit any application. This lends itself well to industry standardization of the actual voltmeter instruments themselves, since the PT will be sized to step the system voltage down to this standard instrument level. Current transformers Following the same line of thinking, we can use a transformer to step down current through a power line so that we are able to safely and easily measure high system currents with inexpensive ammeters.
Of course, such a transformer would be connected in series with the power line (Figure below). Instrumentation application: “Current transformer” steps high current down to a value applicable to a conventional ammeter. Note that while the PT is a step-down device, the Current Transformer (or CT) is a step-up device (with respect to voltage), which is what is needed to step down the power line current. Quite often, CTs are built as donut-shaped devices through which the power line conductor is run, the power line itself acting as a single-turn primary winding: (Figure below) Current conductor to be measured is threaded through the opening. Scaled-down current is available on wire leads. Some CTs are made to hinge open, allowing insertion around a power conductor without disturbing the conductor at all. The industry standard secondary current for a CT is a range of 0 to 5 amps AC. Like PTs, CTs can be made with custom winding ratios to fit almost any application. Because their “full load” secondary current is 5 amps, CT ratios are usually described in terms of full-load primary amps to 5 amps. The “donut” CT shown in the photograph has a ratio of 50:5. That is, when the conductor through the center of the torus is carrying 50 amps of current (AC), there will be 5 amps of current induced in the CT’s winding. Because CTs are designed to power ammeters, which are low-impedance loads, and they are wound as voltage step-up transformers, they should never, ever be operated with an open-circuited secondary winding. Failure to heed this warning will result in the CT producing extremely high secondary voltages, dangerous to equipment and personnel alike. To facilitate maintenance of ammeter instrumentation, short-circuiting switches are often installed in parallel with the CT’s secondary winding, to be closed whenever the ammeter is removed for service: (Figure below) Short-circuit switch allows ammeter to be removed from an active current transformer circuit. Though it may seem strange to intentionally short-circuit a power system component, it is perfectly proper and quite necessary when working with current transformers. Air core transformers Another kind of special transformer, seen often in radio-frequency circuits, is the air core transformer. (Figure below) True to its name, an air core transformer has its windings wrapped around a nonmagnetic form, usually a hollow tube of some material. The degree of coupling (mutual inductance) between windings in such a transformer is many times less than that of an equivalent iron-core transformer, but the undesirable characteristics of a ferromagnetic core (eddy current losses, hysteresis, saturation, etc.) are completely eliminated. It is in high-frequency applications that these effects of iron cores are most problematic. Air core transformers may be wound on cylindrical (a) or toroidal (b) forms. Center tapped primary with secondary (a). Bifilar winding on toroidal form (b). The inside tapped solenoid winding, (Figure (a) above), without the over winding, could match unequal impedances when DC isolation is not required. When isolation is required, the over winding is added over one end of the main winding. Air core transformers are used at radio frequencies when iron core losses are too high. Frequently, air core transformers are paralleled with a capacitor to tune them to resonance. The over winding is connected between a radio antenna and ground for one such application.
The secondary is tuned to resonance with a variable capacitor. The output may be taken from the tap point for amplification or detection. Small, millimeter-sized air core transformers are used in radio receivers. The largest radio transmitters may use meter-sized coils. Unshielded air core solenoid transformers are mounted at right angles to each other to prevent stray coupling. Stray coupling is minimized when the transformer is wound on a toroid form. (Figure (b) above) Toroidal air core transformers also show a higher degree of coupling, particularly for bifilar windings. Bifilar windings are wound from a slightly twisted pair of wires. This implies a 1:1 turns ratio. Three or four wires may be grouped for 1:2 and other integral ratios. Windings do not have to be bifilar. This allows arbitrary turns ratios. However, the degree of coupling suffers. Toroidal air core transformers are rare except for VHF (Very High Frequency) work. Core materials other than air, such as powdered iron or ferrite, are preferred for lower radio frequencies. Tesla Coil One notable example of an air-core transformer is the Tesla Coil, named after the Serbian electrical genius Nikola Tesla, who was also the inventor of the rotating magnetic field AC motor, polyphase AC power systems, and many elements of radio technology. The Tesla Coil is a resonant, high-frequency step-up transformer used to produce extremely high voltages. One of Tesla’s dreams was to employ his coil technology to distribute electric power without the need for wires, simply broadcasting it in the form of radio waves which could be received and conducted to loads by means of antennas. The basic schematic for a Tesla Coil is shown in Figure below. Tesla Coil: A few heavy primary turns, many secondary turns. The capacitor, in conjunction with the transformer’s primary winding, forms a tank circuit. The secondary winding is wound in close proximity to the primary, usually around the same nonmagnetic form. Several options exist for “exciting” the primary circuit, the simplest being a high-voltage, low-frequency AC source and spark gap: (Figure below) System-level diagram of Tesla coil with spark gap drive. The purpose of the high-voltage, low-frequency AC power source is to “charge” the primary tank circuit. When the spark gap fires, its low impedance acts to complete the capacitor/primary coil tank circuit, allowing it to oscillate at its resonant frequency. The “RFC” inductors are “Radio Frequency Chokes,” which act as high impedances to prevent the AC source from interfering with the oscillating tank circuit. The secondary side of the Tesla coil transformer is also a tank circuit, relying on the parasitic (stray) capacitance existing between the discharge terminal and earth ground to complement the secondary winding’s inductance. For optimum operation, this secondary tank circuit is tuned to the same resonant frequency as the primary circuit, with energy exchanged not only between capacitors and inductors during resonant oscillation, but also back-and-forth between primary and secondary windings. The visual results are spectacular: (Figure below) High-voltage, high-frequency discharge from Tesla coil. Tesla Coils find application primarily as novelty devices, showing up in high school science fairs, basement workshops, and the occasional low-budget science-fiction movie. It should be noted that Tesla coils can be extremely dangerous devices.
Burns caused by radio-frequency (“RF”) current, like all electrical burns, can be very deep, unlike skin burns caused by contact with hot objects or flames. Although the high-frequency discharge of a Tesla coil has the curious property of being beyond the “shock perception” frequency of the human nervous system, this does not mean Tesla coils cannot hurt or even kill you! I strongly advise seeking the assistance of an experienced Tesla coil experimenter if you would embark on building one yourself. Saturable reactors So far, we’ve explored the transformer as a device for converting different levels of voltage, current, and even impedance from one circuit to another. Now we’ll take a look at it as a completely different kind of device: one that allows a small electrical signal to exert control over a much larger quantity of electrical power. In this mode, a transformer acts as an amplifier. The device I’m referring to is called a saturable-core reactor, or simply saturable reactor. Actually, it is not really a transformer at all, but rather a special kind of inductor whose inductance can be varied by the application of a DC current through a second winding wound around the same iron core. Like the ferroresonant transformer, the saturable reactor relies on the principle of magnetic saturation. When a material such as iron is completely saturated (that is, all its magnetic domains are lined up with the applied magnetizing force), additional increases in current through the magnetizing winding will not result in further increases of magnetic flux. Now, inductance is the measure of how well an inductor opposes changes in current by developing a voltage in an opposing direction. The ability of an inductor to generate this opposing voltage is directly connected with the change in magnetic flux inside the inductor resulting from the change in current, and the number of winding turns in the inductor. If an inductor has a saturated core, no further magnetic flux will result from further increases in current, and so there will be no voltage induced in opposition to the change in current. In other words, an inductor loses its inductance (ability to oppose changes in current) when its core becomes magnetically saturated. If an inductor’s inductance changes, then its reactance (and impedance) to AC current changes as well. In a circuit with a constant voltage source, this will result in a change in current: (Figure below) If L changes in inductance, ZL will correspondingly change, thus changing the circuit current. A saturable reactor capitalizes on this effect by forcing the core into a state of saturation with a strong magnetic field generated by current through another winding. The reactor’s “power” winding is the one carrying the AC load current, and the “control” winding is one carrying a DC current strong enough to drive the core into saturation: (Figure below) DC, via the control winding, saturates the core. Thus, modulating the power winding inductance, impedance, and current. The strange-looking transformer symbol shown in the above schematic represents a saturable-core reactor, the upper winding being the DC control winding and the lower being the “power” winding through which the controlled AC current goes. Increased DC control current produces more magnetic flux in the reactor core, driving it closer to a condition of saturation, thus decreasing the power winding’s inductance, decreasing its impedance, and increasing current to the load. 
Thus, the DC control current is able to exert control over the AC current delivered to the load. The circuit shown would work, but it would not work very well. The first problem is the natural transformer action of the saturable reactor: AC current through the power winding will induce a voltage in the control winding, which may cause trouble for the DC power source. Also, saturable reactors tend to regulate AC power only in one direction: in one half of the AC cycle, the mmf’s from both windings add; in the other half, they subtract. Thus, the core will have more flux in it during one half of the AC cycle than the other, and will saturate first in that cycle half, passing load current more easily in one direction than the other. Fortunately, both problems can be overcome with a little ingenuity: (Figure below) Out-of-phase DC control windings allow symmetrical control of AC power. Notice the placement of the phasing dots on the two reactors: the power windings are “in phase” while the control windings are “out of phase.” If both reactors are identical, any voltage induced in the control windings by load current through the power windings will cancel out to zero at the battery terminals, thus eliminating the first problem mentioned. Furthermore, since the DC control current through both reactors produces magnetic fluxes in different directions through the reactor cores, one reactor will saturate more in one cycle of the AC power while the other reactor will saturate more in the other, thus equalizing the control action through each half-cycle so that the AC power is “throttled” symmetrically. This phasing of control windings can be accomplished with two separate reactors as shown, or in a single reactor design with intelligent layout of the windings and core. Saturable reactor technology has even been miniaturized to the circuit-board level in compact packages more generally known as magnetic amplifiers. I personally find this to be fascinating: the effect of amplification (one electrical signal controlling another), normally requiring the use of physically fragile vacuum tubes or electrically “fragile” semiconductor devices, can be realized in a device both physically and electrically rugged. Magnetic amplifiers do have disadvantages over their more fragile counterparts, namely size, weight, nonlinearity, and bandwidth (frequency response), but their utter simplicity still commands a certain degree of appreciation, if not practical application. Saturable-core reactors are less commonly known as “saturable-core inductors” or transductors. Scott-T transformer Nikola Tesla’s original polyphase power system was based on simple-to-build 2-phase components. However, as transmission distances increased, the 3-phase system, which makes more efficient use of transmission line conductors, became more prominent. Both 2-φ and 3-φ components coexisted for a number of years. The Scott-T transformer connection allowed 2-φ and 3-φ components like motors and alternators to be interconnected. Yamamoto and Yamaguchi note that in 1896, General Electric built a 35.5 km (22 mi) three-phase transmission line operated at 11 kV to transmit power to Buffalo, New York, from the Niagara Falls Project; the two-phase generated power was changed to three-phase by the use of Scott-T transformations. [MYA] Scott-T transformer converts 2-φ to 3-φ, or vice versa. The Scott-T transformer set, Figure above, consists of a center-tapped transformer T1 and an 86.6% tapped transformer T2 on the 3-φ side of the circuit.
The primaries of both transformers are connected to the 2-φ voltages. One end of the T2 86.6% secondary winding is a 3-φ output, the other end is connected to the T1 secondary center tap. Both ends of the T1 secondary are the other two 3-φ connections. Application of 2-φ Niagara generator power produced a 3-φ output for the more efficient 3-φ transmission line. More common these days is the application of 3-φ power to produce a 2-φ output for driving an old 2-φ motor. In Figure below, we use vectors in both polar and complex notation to prove that the Scott-T converts a pair of 2-φ voltages to 3-φ. First, one of the 3-φ voltages is identical to a 2-φ voltage due to the 1:1 transformer T1 ratio, VP12 = V2P1. The T1 center-tapped secondary produces opposite polarities of 0.5V2P1 on the secondary ends. This ∠0o is vectorially subtracted from the T2 secondary voltage due to the KVL equations for V31 and V23. The T2 secondary voltage is 0.866V2P2 due to the 86.6% tap. Keep in mind that this 2nd phase of the 2-φ is ∠90o. This 0.866V2P2 is added at V31, subtracted at V23 in the KVL equations. Scott-T transformer 2-φ to 3-φ conversion equations. We show “DC” polarities all over this AC-only circuit, to keep track of the Kirchhoff voltage loop polarities. Subtracting ∠0o is equivalent to adding ∠180o. The bottom line is that when we add 86.6% of ∠90o to 50% of ∠180o we get ∠120o. Subtracting 86.6% of ∠90o from 50% of ∠180o yields ∠-120o or ∠240o. Graphical explanation of equations in Figure previous. In Figure above we graphically show the 2-φ vectors at (a). At (b) the vectors are scaled by transformers T1 and T2 to 0.5 and 0.866 respectively. At (c) 1∠120o = -0.5∠0o + 0.866∠90o, and 1∠240o = -0.5∠0o - 0.866∠90o. The three output phases are 1∠120o and 1∠240o from (c), along with input 1∠0o (a). Linear Variable Differential Transformer A linear variable differential transformer (LVDT) has an AC-driven primary wound between two secondaries on a cylindrical air core form. (Figure below) A movable ferromagnetic slug converts displacement to a variable voltage by changing the coupling between the driven primary and secondary windings. The LVDT is a displacement or distance measuring transducer. Units are available for measuring displacement over a distance of a fraction of a millimeter to half a meter. LVDTs are rugged and dirt resistant compared to linear optical encoders. LVDT: linear variable differential transformer. The excitation voltage is in the range of 0.5 to 10 VAC at a frequency of 1 to 200 kHz. A ferrite core is suitable at these frequencies. It is extended outside the body by a non-magnetic rod. As the core is moved toward the top winding, the voltage across this coil increases due to increased coupling, while the voltage on the bottom coil decreases. If the core is moved toward the bottom winding, the voltage on this coil increases as the voltage decreases across the top coil. Theoretically, a centered slug yields equal voltages across both coils. In practice, leakage inductance prevents the null from dropping all the way to 0 V. With a centered slug, the series-opposing wired secondaries cancel, yielding V13 = 0. Moving the slug up increases V13. Note that it is in-phase with V1, the top winding, and 180o out of phase with V3, the bottom winding. Moving the slug down from the center position increases V13. However, it is 180o out of phase with V1, the top winding, and in-phase with V3, the bottom winding.
Moving the slug from top to bottom shows a minimum at the center point, with a 180o phase reversal in passing the center. Review • Transformers can be used to transform impedance as well as voltage and current. When this is done to improve power transfer to a load, it is called impedance matching. • A Potential Transformer (PT) is a special instrument transformer designed to provide a precise voltage step-down ratio for voltmeters measuring high power system voltages. • A Current Transformer (CT) is another special instrument transformer designed to step down the current through a power line to a safe level for an ammeter to measure. • An air-core transformer is one lacking a ferromagnetic core. • A Tesla Coil is a resonant, air-core, step-up transformer designed to produce very high AC voltages at high frequency. • A saturable reactor is a special type of inductor, the inductance of which can be controlled by the DC current through a second winding around the same core. With enough DC current, the magnetic core can be saturated, decreasing the inductance of the power winding in a controlled fashion. • A Scott-T transformer converts 3-φ power to 2-φ power and vice versa. • A linear variable differential transformer, also known as an LVDT, is a distance measuring device. It has a movable ferromagnetic core to vary the coupling between the excited primary and a pair of secondaries.
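The Scott-T phasor arithmetic from this section can be double-checked with Python’s complex-number support. This minimal sketch uses only the 0.5 and 0.866 scaling factors given in the conversion equations above:

import cmath, math

# 2-phase inputs (unit magnitude): phase 1 at 0 degrees, phase 2 at 90 degrees
v_2p1 = cmath.rect(1.0, 0.0)
v_2p2 = cmath.rect(1.0, math.pi / 2)

# Scott-T combinations: 86.6% of the 90-degree phase added to, or subtracted
# from, half of the inverted 0-degree phase (the T1 center-tap halves)
v31 = -0.5 * v_2p1 + 0.866 * v_2p2
v23 = -0.5 * v_2p1 - 0.866 * v_2p2

for name, v in (("V12", v_2p1), ("V31", v31), ("V23", v23)):
    mag, ang = cmath.polar(v)
    print(f"{name}: {mag:.3f} at {math.degrees(ang):+7.1f} degrees")

# Prints 1.000 at +0.0, 1.000 at +120.0, and 1.000 at -120.0 degrees,
# confirming three balanced output phases.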
Power Capacity As has already been observed, transformers must be well designed in order to achieve acceptable power coupling, tight voltage regulation, and low exciting current distortion. Also, transformers must be designed to carry the expected values of primary and secondary winding current without any trouble. This means the winding conductors must be made of the proper gauge wire to avoid any heating problems. An ideal transformer would have perfect coupling (no leakage inductance), perfect voltage regulation, perfectly sinusoidal exciting current, no hysteresis or eddy current losses, and wire thick enough to handle any amount of current. Unfortunately, the ideal transformer would have to be infinitely large and heavy to meet these design goals. Thus, in the business of practical transformer design, compromises must be made. Additionally, winding conductor insulation is a concern where high voltages are encountered, as they often are in step-up and step-down power distribution transformers. Not only do the windings have to be well insulated from the iron core, but each winding has to be sufficiently insulated from the other in order to maintain electrical isolation between windings. Respecting these limitations, transformers are rated for certain levels of primary and secondary winding voltage and current, though the current rating is usually derived from a volt-amp (VA) rating assigned to the transformer. For example, take a step-down transformer with a primary voltage rating of 120 volts, a secondary voltage rating of 48 volts, and a VA rating of 1 kVA (1000 VA). The maximum winding currents can be determined as such:

Iprimary(max) = 1000 VA / 120 V = 8.333 A
Isecondary(max) = 1000 VA / 48 V = 20.833 A

Sometimes windings will bear current ratings in amps, but this is typically seen on small transformers. Large transformers are almost always rated in terms of winding voltage and VA or kVA. Energy losses When transformers transfer power, they do so with a minimum of loss. As it was stated earlier, modern power transformer designs typically exceed 95% efficiency. It is good to know where some of this lost power goes, however, and what causes it to be lost. There is, of course, power lost due to resistance of the wire windings. Unless superconducting wires are used, there will always be power dissipated in the form of heat through the resistance of current-carrying conductors. Because transformers require such long lengths of wire, this loss can be a significant factor. Increasing the gauge of the winding wire is one way to minimize this loss, but only with substantial increases in cost, size, and weight. Resistive losses aside, the bulk of transformer power loss is due to magnetic effects in the core. Perhaps the most significant of these “core losses” is eddy-current loss, which is resistive power dissipation due to the passage of induced currents through the iron of the core. Because iron is a conductor of electricity as well as being an excellent “conductor” of magnetic flux, there will be currents induced in the iron just as there are currents induced in the secondary windings from the alternating magnetic field. These induced currents—as described by the perpendicularity clause of Faraday’s Law—tend to circulate through the cross-section of the core perpendicularly to the primary winding turns. Their circular motion gives them their unusual name: like eddies in a stream of water that circulate rather than move in straight lines.
Iron is a fair conductor of electricity, but not as good as the copper or aluminum from which wire windings are typically made. Consequently, these “eddy currents” must overcome significant electrical resistance as they circulate through the core. In overcoming the resistance offered by the iron, they dissipate power in the form of heat. Hence, we have a source of inefficiency in the transformer that is difficult to eliminate. This phenomenon is so pronounced that it is often exploited as a means of heating ferrous (iron-containing) materials. The photograph of (Figure below) shows an “induction heating” unit raising the temperature of a large pipe section. Loops of wire covered by high-temperature insulation encircle the pipe’s circumference, inducing eddy currents within the pipe wall by electromagnetic induction. In order to maximize the eddy current effect, high-frequency alternating current is used rather than power line frequency (60 Hz). The box units at the right of the picture produce the high-frequency AC and control the amount of current in the wires to stabilize the pipe temperature at a pre-determined “set-point.” Induction heating: Primary insulated winding induces current into lossy iron pipe (secondary). The main strategy in mitigating these wasteful eddy currents in transformer cores is to form the iron core in sheets, each sheet covered with an insulating varnish so that the core is divided up into thin slices. The result is very little width in the core for eddy currents to circulate in: (Figure below) Dividing the iron core into thin insulated laminations minimizes eddy current loss. Laminated cores like the one shown here are standard in almost all low-frequency transformers. Recall from the photograph of the transformer cut in half that the iron core was composed of many thin sheets rather than one solid piece. Eddy current losses increase with frequency, so transformers designed to run on higher-frequency power (such as 400 Hz, used in many military and aircraft applications) must use thinner laminations to keep the losses down to a respectable minimum. This has the undesirable effect of increasing the manufacturing cost of the transformer. Another, similar technique for minimizing eddy current losses which works better for high-frequency applications is to make the core out of iron powder instead of thin iron sheets. Like the lamination sheets, these granules of iron are individually coated in an electrically insulating material, which makes the core nonconductive except for within the width of each granule. Powdered iron cores are often found in transformers handling radio-frequency currents. Another “core loss” is that of magnetic hysteresis. All ferromagnetic materials tend to retain some degree of magnetization after exposure to an external magnetic field. This tendency to stay magnetized is called “hysteresis,” and it takes a certain investment in energy to overcome this opposition to change every time the magnetic field produced by the primary winding changes polarity (twice per AC cycle). This type of loss can be mitigated through good core material selection (choosing a core alloy with low hysteresis, as evidenced by a “thin” B/H hysteresis curve), and designing the core for minimum flux density (large cross-sectional area). Transformer energy losses tend to worsen with increasing frequency. 
The skin effect within winding conductors reduces the available cross-sectional area for electron flow, thereby increasing effective resistance as the frequency goes up and creating more power lost through resistive dissipation. Magnetic core losses are also exaggerated with higher frequencies, eddy currents and hysteresis effects becoming more severe. For this reason, transformers of significant size are designed to operate efficiently in a limited range of frequencies. In most power distribution systems where the line frequency is very stable, one would think excessive frequency would never pose a problem. Unfortunately it does, in the form of harmonics created by nonlinear loads. As we’ve seen in earlier chapters, nonsinusoidal waveforms are equivalent to additive series of multiple sinusoidal waveforms at different amplitudes and frequencies. In power systems, these other frequencies are whole-number multiples of the fundamental (line) frequency, meaning that they will always be higher, not lower, than the design frequency of the transformer. If present in significant measure, they can cause severe transformer overheating. Power transformers can be engineered to handle certain levels of power system harmonics, and this capability is sometimes denoted with a “K factor” rating.
Stray capacitance and inductance
Aside from power ratings and power losses, transformers often harbor other undesirable limitations which circuit designers must be aware of. Like their simpler counterparts—inductors—transformers exhibit capacitance due to the insulation dielectric between conductors: from winding to winding, turn to turn (in a single winding), and winding to core. Usually this capacitance is of no concern in a power application, but small signal applications (especially those of high frequency) may not tolerate this quirk well. Also, the effect of having capacitance along with the windings’ designed inductance gives transformers the ability to resonate at a particular frequency, definitely a design concern in signal applications where the applied frequency may reach this point (usually the resonant frequency of a power transformer is well beyond the frequency of the AC power it was designed to operate on). Flux containment (making sure a transformer’s magnetic flux doesn’t escape so as to interfere with another device, and making sure other devices’ magnetic flux is shielded from the transformer core) is another concern shared both by inductors and transformers. Closely related to the issue of flux containment is leakage inductance. We’ve already seen the detrimental effects of leakage inductance on voltage regulation with SPICE simulations early in this chapter. Because leakage inductance is equivalent to an inductance connected in series with the transformer’s winding, it manifests itself as a series impedance with the load. Thus, the more current drawn by the load, the less voltage available at the secondary winding terminals. Usually, good voltage regulation is desired in transformer design, but there are exceptional applications. As was stated before, discharge lighting circuits require a step-up transformer with “loose” (poor) voltage regulation to ensure reduced voltage after the establishment of an arc through the lamp. One way to meet this design criterion is to engineer the transformer with flux leakage paths for magnetic flux to bypass the secondary winding(s).
The resulting leakage flux will produce leakage inductance, which will in turn produce the poor regulation needed for discharge lighting.
Core saturation
Transformers are also constrained in their performance by the magnetic flux limitations of the core. For ferromagnetic core transformers, we must be mindful of the saturation limits of the core. Remember that ferromagnetic materials cannot support infinite magnetic flux densities: they tend to “saturate” at a certain level (dictated by the material and core dimensions), meaning that further increases in magnetic field force (mmf) do not result in proportional increases in magnetic field flux (Φ). When a transformer’s primary winding is overloaded from excessive applied voltage, the core flux may reach saturation levels during peak moments of the AC sinewave cycle. If this happens, the voltage induced in the secondary winding will no longer match the wave-shape of the voltage powering the primary coil. In other words, the overloaded transformer will distort the waveshape from primary to secondary windings, creating harmonics in the secondary winding’s output. As we discussed before, harmonic content in AC power systems typically causes problems. Special transformers known as peaking transformers exploit this principle to produce brief voltage pulses near the peaks of the source voltage waveform. The core is designed to saturate quickly and sharply, at voltage levels well below peak. This results in a severely cropped sine-wave flux waveform, and secondary voltage pulses only when the flux is changing (below saturation levels): (Figure below) Voltage and flux waveforms for a peaking transformer. Another cause of abnormal transformer core saturation is operation at frequencies lower than normal. For example, if a power transformer designed to operate at 60 Hz is forced to operate at 50 Hz instead, the flux must reach greater peak levels than before in order to produce the same opposing voltage needed to balance against the source voltage. This is true even if the source voltage is the same as before. (Figure below) Magnetic flux is higher in a transformer core driven by 50 Hz as compared to 60 Hz for the same voltage. Since instantaneous winding voltage is proportional to the instantaneous magnetic flux’s rate of change in a transformer, a voltage waveform reaching the same peak value, but taking a longer amount of time to complete each half-cycle, demands that the flux maintain the same rate of change as before, but for longer periods of time. Thus, if the flux has to climb at the same rate as before, but for longer periods of time, it will climb to a greater peak value. (Figure below) Mathematically, this is another example of calculus in action. Because the voltage is proportional to the flux’s rate-of-change, we say that the voltage waveform is the derivative of the flux waveform, “derivative” being that calculus operation defining one mathematical function (waveform) in terms of the rate-of-change of another. If we take the opposite perspective, though, and relate the original waveform to its derivative, we may call the original waveform the integral of the derivative waveform. In this case, the voltage waveform is the derivative of the flux waveform, and the flux waveform is the integral of the voltage waveform. The integral of any mathematical function is proportional to the area accumulated underneath the curve of that function.
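Stated compactly, assuming a sinusoidal source and an N-turn primary winding (the symbols here are chosen for illustration, not taken from a figure):

$$e = N\frac{d\Phi}{dt} \quad\Rightarrow\quad \Phi(t) = \frac{1}{N}\int e \, dt$$

$$e(t) = E_{pk}\sin(2\pi f t) \quad\Rightarrow\quad \Phi_{pk} = \frac{E_{pk}}{2\pi f N}$$

At a fixed source voltage, peak flux is inversely proportional to frequency, so dropping from 60 Hz to 50 Hz raises the flux peak by a factor of 60/50, or 20 percent.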
Since each half-cycle of the 50 Hz waveform accumulates more area between it and the zero line of the graph than the 60 Hz waveform will—and we know that the magnetic flux is the integral of the voltage—the flux will attain higher values in Figure below. Flux changing at the same rate rises to a higher level at 50 Hz than at 60 Hz. Yet another cause of transformer saturation is the presence of DC current in the primary winding. Any amount of DC voltage dropped across the primary winding of a transformer will cause additional magnetic flux in the core. This additional flux “bias” or “offset” will push the alternating flux waveform closer to saturation in one half-cycle than the other. (Figure below) DC in primary shifts the waveform peaks toward the upper saturation limit. For most transformers, core saturation is a very undesirable effect, and it is avoided through good design: engineering the windings and core so that magnetic flux densities remain well below the saturation levels. This ensures that the relationship between mmf and Φ is more linear throughout the flux cycle, which is good because it makes for less distortion in the magnetization current waveform. Also, engineering the core for low flux densities provides a safe margin between the normal flux peaks and the core saturation limits to accommodate occasional, abnormal conditions such as frequency variation and DC offset.
Inrush current
When a transformer is initially connected to a source of AC voltage, there may be a substantial surge of current through the primary winding called inrush current. (Figure below) This is analogous to the inrush current exhibited by an electric motor that is started up by sudden connection to a power source, although transformer inrush is caused by a different phenomenon. We know that the rate of change of instantaneous flux in a transformer core is proportional to the instantaneous voltage drop across the primary winding. Or, as stated before, the voltage waveform is the derivative of the flux waveform, and the flux waveform is the integral of the voltage waveform. In a continuously-operating transformer, these two waveforms are phase-shifted by 90o. (Figure below) Since flux (Φ) is proportional to the magnetomotive force (mmf) in the core, and the mmf is proportional to winding current, the current waveform will be in-phase with the flux waveform, and both will be lagging the voltage waveform by 90o: Continuous steady-state operation: Magnetic flux, like current, lags applied voltage by 90o. Let us suppose that the primary winding of a transformer is suddenly connected to an AC voltage source at the exact moment in time when the instantaneous voltage is at its positive peak value. In order for the transformer to create an opposing voltage drop to balance against this applied source voltage, a magnetic flux of rapidly increasing value must be generated. The result is that winding current increases rapidly, but actually no more rapidly than under normal conditions: (Figure below) Connecting transformer to line at AC volt peak: Flux increases rapidly from zero, same as steady-state operation. Both core flux and coil current start from zero and build up to the same peak values experienced during continuous operation. Thus, there is no “surge” or “inrush” of current in this scenario. (Figure above) Alternatively, let us consider what happens if the transformer’s connection to the AC voltage source occurs at the exact moment in time when the instantaneous voltage is at zero.
During continuous operation (when the transformer has been powered for quite some time), this is the point in time where both flux and winding current are at their negative peaks, experiencing zero rate-of-change (dΦ/dt = 0 and di/dt = 0). As the voltage builds to its positive peak, the flux and current waveforms build to their maximum positive rates-of-change, and on upward to their positive peaks as the voltage descends to a level of zero: Starting at e=0 V is not the same as running continuously in Figure above. These expected waveforms are incorrect: Φ and i should start at zero. A significant difference exists, however, between continuous-mode operation and the sudden starting condition assumed in this scenario: during continuous operation, the flux and current levels were at their negative peaks when voltage was at its zero point; in a transformer that has been sitting idle, however, both magnetic flux and winding current should start at zero. When the magnetic flux increases in response to a rising voltage, it will increase from zero upward, not from a previously negative (magnetized) condition as we would normally have in a transformer that’s been powered for awhile. Thus, in a transformer that’s just “starting,” the flux will reach approximately twice its normal peak magnitude as it “integrates” the area under the voltage waveform’s first half-cycle: (Figure below) Starting at e=0 V, Φ starts at initial condition Φ=0, increasing to twice the normal value, assuming it doesn’t saturate the core. In an ideal transformer, the magnetizing current would rise to approximately twice its normal peak value as well, generating the necessary mmf to create this higher-than-normal flux. However, most transformers aren’t designed with enough of a margin between normal flux peaks and the saturation limits to avoid saturating in a condition like this, and so the core will almost certainly saturate during this first half-cycle of voltage. During saturation, disproportionate amounts of mmf are needed to generate magnetic flux. This means that winding current, which creates the mmf to cause flux in the core, will disproportionately rise to a value easily exceeding twice its normal peak: (Figure below) Starting at e=0 V, current also increases to twice the normal value for an unsaturated core, or considerably higher if the core saturates, as it typically will. This is the mechanism causing inrush current in a transformer’s primary winding when connected to an AC voltage source. As you can see, the magnitude of the inrush current strongly depends on the exact time that electrical connection to the source is made. If the transformer happens to have some residual magnetism in its core at the moment of connection to the source, the inrush could be even more severe. Because of this, transformer overcurrent protection devices are usually of the “slow-acting” variety, so as to tolerate current surges such as this without opening the circuit.
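The flux-doubling effect is easy to verify numerically. Here is a minimal sketch that integrates the applied voltage from the moment of connection, comparing a closure at the voltage peak against a closure at the zero-crossing (the source amplitude and turn count are hypothetical, chosen only for illustration):

```python
import numpy as np

f, Vpk, N = 60.0, 170.0, 100          # 60 Hz source (~120 V RMS) and a hypothetical 100-turn primary
w = 2 * np.pi * f
t = np.linspace(0.0, 2.0 / f, 20001)  # two full cycles
dt = t[1] - t[0]

# Faraday's law rearranged: dPhi/dt = e/N, so flux is the running integral of e/N.
phi_peak_close = np.cumsum(Vpk * np.cos(w * t)) * dt / N  # switch closes at the voltage peak
phi_zero_close = np.cumsum(Vpk * np.sin(w * t)) * dt / N  # switch closes at the zero-crossing

phi_normal = Vpk / (w * N)            # steady-state flux amplitude
print(abs(phi_peak_close).max() / phi_normal)  # ~1.0: no inrush
print(abs(phi_zero_close).max() / phi_normal)  # ~2.0: flux doubles, so a real core saturates
```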
Heat and Noise
In addition to unwanted electrical effects, transformers may also exhibit undesirable physical effects, the most notable being the production of heat and noise. Noise is primarily a nuisance effect, but heat is a potentially serious problem because winding insulation will be damaged if allowed to overheat. Heating may be minimized by good design, ensuring that the core does not approach saturation levels, that eddy currents are minimized, and that the windings are not overloaded or operated too close to maximum ampacity. Large power transformers have their core and windings submerged in an oil bath to transfer heat and muffle noise, and also to displace moisture which would otherwise compromise the integrity of the winding insulation. Heat-dissipating “radiator” tubes on the outside of the transformer case provide a convective oil flow path to transfer heat from the transformer’s core to ambient air: (Figure below) Large power transformers are submerged in heat-dissipating insulating oil. Oil-less, or “dry,” transformers are often rated in terms of maximum operating temperature “rise” (temperature increase beyond ambient) according to a letter-class system: A, B, F, or H. These letter codes are arranged in order of lowest heat tolerance to highest:
• Class A: No more than 55o Celsius winding temperature rise, at 40o Celsius (maximum) ambient air temperature.
• Class B: No more than 80o Celsius winding temperature rise, at 40o Celsius (maximum) ambient air temperature.
• Class F: No more than 115o Celsius winding temperature rise, at 40o Celsius (maximum) ambient air temperature.
• Class H: No more than 150o Celsius winding temperature rise, at 40o Celsius (maximum) ambient air temperature.
Audible noise is an effect primarily originating from the phenomenon of magnetostriction: the slight change of length exhibited by a ferromagnetic object when magnetized. The familiar “hum” heard around large power transformers is the sound of the iron core expanding and contracting at 120 Hz (twice the system frequency, which is 60 Hz in the United States)—one cycle of core contraction and expansion for every peak of the magnetic flux waveform—plus noise created by mechanical forces between primary and secondary windings. Again, maintaining low magnetic flux levels in the core is the key to minimizing this effect, which explains why ferroresonant transformers—which must operate in saturation for a large portion of the current waveform—operate both hot and noisy. Another noise-producing phenomenon in power transformers is the physical reaction force between primary and secondary windings when heavily loaded. If the secondary winding is open-circuited, there will be no current through it, and consequently no magneto-motive force (mmf) produced by it. However, when the secondary is “loaded” (current supplied to a load), the winding generates an mmf, which becomes counteracted by a “reflected” mmf in the primary winding to prevent core flux levels from changing. These opposing mmf’s generated between primary and secondary windings as a result of secondary (load) current produce a repulsive, physical force between the windings which will tend to make them vibrate. Transformer designers have to consider these physical forces in the construction of the winding coils, to ensure there is adequate mechanical support to handle the stresses. Under heavy load (high current) conditions, though, these stresses may be great enough to cause audible noise to emanate from the transformer.
Review
• Power transformers are limited in the amount of power they can transfer from primary to secondary winding(s). Large units are typically rated in VA (volt-amps) or kVA (kilo volt-amps).
• Resistance in transformer windings contributes to inefficiency, as current will dissipate heat, wasting energy.
• Magnetic effects in a transformer’s iron core also contribute to inefficiency. Among the effects are eddy currents (circulating induction currents in the iron core) and hysteresis (power lost due to overcoming the tendency of iron to magnetize in a particular direction).
• Increased frequency results in increased power losses within a power transformer. The presence of harmonics in a power system is a source of frequencies significantly higher than normal, which may cause overheating in large transformers.
• Both transformers and inductors harbor certain unavoidable amounts of capacitance due to wire insulation (dielectric) separating winding turns from the iron core and from each other. This capacitance can be significant enough to give the transformer a natural resonant frequency, which can be problematic in signal applications.
• Leakage inductance is caused by magnetic flux not being 100% coupled between windings in a transformer. Any flux not involved with transferring energy from one winding to another will store and release energy, which is how (self-) inductance works. Leakage inductance tends to worsen a transformer’s voltage regulation (secondary voltage “sags” more for a given amount of load current).
• Magnetic saturation of a transformer core may be caused by excessive primary voltage, operation at too low a frequency, and/or by the presence of a DC current in any of the windings. Saturation may be minimized or avoided by conservative design, which provides an adequate margin of safety between peak magnetic flux density values and the saturation limits of the core.
• Transformers often experience significant inrush currents when initially connected to an AC voltage source. Inrush current is most severe when connection to the AC source is made at the moment instantaneous source voltage is zero.
• Noise is a common phenomenon exhibited by transformers—especially power transformers—and is primarily caused by magnetostriction of the core. Physical forces causing winding vibration may also generate noise under conditions of heavy (high current) secondary winding load.
Power in Resistive and Reactive AC Circuits
Consider a circuit for a single-phase AC power system, where a 120 volt, 60 Hz AC voltage source is delivering power to a resistive load: (Figure below) AC source drives a purely resistive load. In this example, the current to the load would be 2 amps, RMS. The power dissipated at the load would be 240 watts. Because this load is purely resistive (no reactance), the current is in phase with the voltage, and calculations look similar to those in an equivalent DC circuit. If we were to plot the voltage, current, and power waveforms for this circuit, it would look like Figure below. Current is in phase with voltage in a resistive circuit. Note that the waveform for power is always positive, never negative for this resistive circuit. This means that power is always being dissipated by the resistive load, and never returned to the source as it is with reactive loads. If the source were a mechanical generator, it would take 240 watts worth of mechanical energy (about 1/3 horsepower) to turn the shaft. Also note that the waveform for power is not at the same frequency as the voltage or current! Rather, its frequency is double that of either the voltage or current waveforms. This different frequency prohibits our expression of power in an AC circuit using the same complex (rectangular or polar) notation as used for voltage, current, and impedance, because this form of mathematical symbolism implies unchanging phase relationships. When frequencies are not the same, phase relationships constantly change. As strange as it may seem, the best way to proceed with AC power calculations is to use scalar notation, and to handle any relevant phase relationships with trigonometry. For comparison, let’s consider a simple AC circuit with a purely reactive load in Figure below. AC circuit with a purely reactive (inductive) load. Power is not dissipated in a purely reactive load, though it is alternately absorbed from and returned to the source. Note that the power alternates equally between cycles of positive and negative. (Figure above) This means that power is being alternately absorbed from and returned to the source. If the source were a mechanical generator, it would take (practically) no net mechanical energy to turn the shaft, because no power would be used by the load. The generator shaft would be easy to spin, and the inductor would not become warm as a resistor would. Now, let’s consider an AC circuit with a load consisting of both inductance and resistance in Figure below. At a frequency of 60 Hz, the 160 millihenrys of inductance give us 60.319 Ω of inductive reactance. This reactance combines with the 60 Ω of resistance to form a total load impedance of 60 + j60.319 Ω, or 85.078 Ω ∠ 45.152o. If we’re not concerned with phase angles (which we’re not at this point), we may calculate current in the circuit by taking the polar magnitude of the voltage source (120 volts) and dividing it by the polar magnitude of the impedance (85.078 Ω). With a power supply voltage of 120 volts RMS, our load current is 1.410 amps. This is the figure an RMS ammeter would indicate if connected in series with the resistor and inductor. We already know that reactive components dissipate zero power, as they equally absorb power from, and return power to, the rest of the circuit. Therefore, any inductive reactance in this load will likewise dissipate zero power. The only thing left to dissipate power here is the resistive portion of the load impedance.
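These figures are easy to reproduce. Here is a brief Python sketch of the calculation, using the component values given above:

```python
import cmath, math

E = 120.0                        # source voltage, volts RMS
f = 60.0                         # line frequency, Hz
R, L = 60.0, 160e-3              # series resistance and inductance

XL = 2 * math.pi * f * L         # inductive reactance: ~60.319 ohms
Z = complex(R, XL)               # total load impedance: 60 + j60.319 ohms
I = E / abs(Z)                   # scalar (polar-magnitude) current: ~1.410 A

print(abs(Z), math.degrees(cmath.phase(Z)))  # ~85.078 ohms at ~45.152 degrees
print(I, I**2 * R)               # only the resistance dissipates power: ~119.4 W
```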
If we look at the waveform plot of voltage, current, and total power for this circuit, we see how this combination works in Figure below. A combined resistive/reactive circuit dissipates more power than it returns to the source. The reactance dissipates no power, though the resistor does. As with any reactive circuit, the power alternates between positive and negative instantaneous values over time. In a purely reactive circuit, that alternation between positive and negative power is equally divided, resulting in a net power dissipation of zero. However, in circuits with mixed resistance and reactance like this one, the power waveform will still alternate between positive and negative, but the amount of positive power will exceed the amount of negative power. In other words, the combined inductive/resistive load will consume more power than it returns back to the source. Looking at the waveform plot for power, it should be evident that the wave spends more time on the positive side of the center line than on the negative, indicating that there is more power absorbed by the load than there is returned to the circuit. What little returning of power that occurs is due to the reactance; the imbalance of positive versus negative power is due to the resistance as it dissipates energy outside of the circuit (usually in the form of heat). If the source were a mechanical generator, the amount of mechanical energy needed to turn the shaft would be the amount of power averaged between the positive and negative power cycles. Mathematically representing power in an AC circuit is a challenge, because the power wave isn’t at the same frequency as voltage or current. Furthermore, the phase angle for power means something quite different from the phase angle for either voltage or current. Whereas the angle for voltage or current represents a relative shift in timing between two waves, the phase angle for power represents a ratio between power dissipated and power returned. Because of this way in which AC power differs from AC voltage or current, it is actually easier to arrive at figures for power by calculating with scalar quantities of voltage, current, resistance, and reactance than it is to try to derive it from vector, or complex, quantities of voltage, current, and impedance that we’ve worked with so far.
Review
• In a purely resistive circuit, all circuit power is dissipated by the resistor(s). Voltage and current are in phase with each other.
• In a purely reactive circuit, no circuit power is dissipated by the load(s). Rather, power is alternately absorbed from and returned to the AC source. Voltage and current are 90o out of phase with each other.
• In a circuit consisting of resistance and reactance mixed, there will be more power dissipated by the load(s) than returned, but some power will definitely be dissipated and some will merely be absorbed and returned. Voltage and current in such a circuit will be out of phase by a value somewhere between 0o and 90o.
True, Reactive, and Apparent Power
Reactive Power
We know that reactive loads such as inductors and capacitors dissipate zero power, yet the fact that they drop voltage and draw current gives the deceptive impression that they actually do dissipate power. This “phantom power” is called reactive power, and it is measured in a unit called Volt-Amps-Reactive (VAR), rather than watts. The mathematical symbol for reactive power is (unfortunately) the capital letter Q.
True Power
The actual amount of power being used, or dissipated, in a circuit is called true power, and it is measured in watts (symbolized by the capital letter P, as always).
Apparent Power
The combination of reactive power and true power is called apparent power, and it is the product of a circuit’s voltage and current, without reference to phase angle. Apparent power is measured in the unit of Volt-Amps (VA) and is symbolized by the capital letter S.
Calculating for Reactive, True, or Apparent Power
As a rule, true power is a function of a circuit’s dissipative elements, usually resistances (R). Reactive power is a function of a circuit’s reactance (X). Apparent power is a function of a circuit’s total impedance (Z). Since we’re dealing with scalar quantities for power calculation, any complex starting quantities such as voltage, current, and impedance must be represented by their polar magnitudes, not by real or imaginary rectangular components. For instance, if I’m calculating true power from current and resistance, I must use the polar magnitude for current, and not merely the “real” or “imaginary” portion of the current. If I’m calculating apparent power from voltage and impedance, both of these formerly complex quantities must be reduced to their polar magnitudes for the scalar arithmetic. There are several power equations relating the three types of power to resistance, reactance, and impedance (all using scalar quantities):

True power: P = I²R = E²/R (E here being the voltage across the resistance), measured in Watts (W)
Reactive power: Q = I²X = E²/X (E here being the voltage across the reactance), measured in Volt-Amps-Reactive (VAR)
Apparent power: S = I²Z = E²/Z = IE, measured in Volt-Amps (VA)

Please note that there are two equations each for the calculation of true and reactive power. There are three equations available for the calculation of apparent power, S = IE being useful only for that purpose. Examine the following circuits and see how these three types of power interrelate for: a purely resistive load in Figure below, a purely reactive load in Figure below, and a resistive/reactive load in Figure below.
Resistive Load Only
True power, reactive power, and apparent power for a purely resistive load.
Reactive Load Only
True power, reactive power, and apparent power for a purely reactive load.
Resistive/Reactive Load
True power, reactive power, and apparent power for a resistive/reactive load.
The Power Triangle
These three types of power—true, reactive, and apparent—relate to one another in trigonometric form. We call this the power triangle: (Figure below). Power triangle relating apparent power to true power and reactive power. Using the laws of trigonometry, we can solve for the length of any side (amount of any type of power), given the lengths of the other two sides, or the length of one side and an angle.
Review
• Power dissipated by a load is referred to as true power. True power is symbolized by the letter P and is measured in the unit of Watts (W).
• Power merely absorbed and returned in load due to its reactive properties is referred to as reactive power. Reactive power is symbolized by the letter Q and is measured in the unit of Volt-Amps-Reactive (VAR).
• Total power in an AC circuit, both dissipated and absorbed/returned, is referred to as apparent power. Apparent power is symbolized by the letter S and is measured in the unit of Volt-Amps (VA).
• These three types of power are trigonometrically related to one another. In a right triangle, P = adjacent length, Q = opposite length, and S = hypotenuse length. The angle between the P and S sides is equal to the circuit’s impedance (Z) phase angle.
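As a numeric check of these triangle relationships, consider the series resistor-inductor circuit from the previous section. The following sketch recomputes its power triangle from the voltage, current, and impedance angle found there:

```python
import math

E, I = 120.0, 1.410            # RMS source voltage and line current from the R-L example
angle = math.radians(45.152)   # impedance (Z) phase angle from the same example

S = I * E                      # apparent power (hypotenuse): ~169.3 VA
P = S * math.cos(angle)        # true power (adjacent side): ~119.4 W
Q = S * math.sin(angle)        # reactive power (opposite side): ~120.0 VAR

print(S, P, Q)
print(math.hypot(P, Q))        # Pythagorean check: recovers S
```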
Calculating Power Factor
As was mentioned before, the angle of this “power triangle” graphically indicates the ratio between the amount of dissipated (or consumed) power and the amount of absorbed/returned power. It also happens to be the same angle as that of the circuit’s impedance in polar form. When expressed as a fraction, this ratio between true power and apparent power is called the power factor for this circuit. Because true power and apparent power form the adjacent and hypotenuse sides of a right triangle, respectively, the power factor ratio is also equal to the cosine of that phase angle. Using values from the last example circuit:

Power factor = True power / Apparent power = 119.365 W / 169.256 VA = 0.705
Power factor = cos(45.152o) = 0.705

It should be noted that power factor, like all ratio measurements, is a unitless quantity. For the purely resistive circuit, the power factor is 1 (perfect), because the reactive power equals zero. Here, the power triangle would look like a horizontal line, because the opposite (reactive power) side would have zero length. For the purely inductive circuit, the power factor is zero, because true power equals zero. Here, the power triangle would look like a vertical line, because the adjacent (true power) side would have zero length. The same could be said for a purely capacitive circuit. If there are no dissipative (resistive) components in the circuit, then the true power must be equal to zero, making any power in the circuit purely reactive. The power triangle for a purely capacitive circuit would again be a vertical line (pointing down instead of up as it was for the purely inductive circuit). Power factor can be an important aspect to consider in an AC circuit because any power factor less than 1 means that the circuit’s wiring has to carry more current than what would be necessary with zero reactance in the circuit to deliver the same amount of (true) power to the resistive load. If our last example circuit had been purely resistive, we would have been able to deliver a full 169.256 watts to the load with the same 1.410 amps of current, rather than the mere 119.365 watts that it is presently dissipating with that same current quantity. The poor power factor makes for an inefficient power delivery system. Poor power factor can be corrected, paradoxically, by adding another load to the circuit drawing an equal and opposite amount of reactive power, to cancel out the effects of the load’s inductive reactance. Inductive reactance can only be canceled by capacitive reactance, so we have to add a capacitor in parallel to our example circuit as the additional load. The effect of these two opposing reactances in parallel is to bring the circuit’s total impedance equal to its total resistance (to make the impedance phase angle equal, or at least closer, to zero). Since we know that the (uncorrected) reactive power is 119.998 VAR (inductive), we need to calculate the correct capacitor size to produce the same quantity of (capacitive) reactive power. Since this capacitor will be directly in parallel with the source (of known voltage), we’ll use the power formula which starts from voltage and reactance:

Q = E²/X, so X = E²/Q = (120 V)² / 119.998 VAR = 120.002 Ω
C = 1/(2πfX) = 1/(2π × 60 Hz × 120.002 Ω) = 22.105 µF

Let’s use a rounded capacitor value of 22 µF and see what happens to our circuit: (Figure below) Parallel capacitor corrects lagging power factor of inductive load. V2 and node numbers: 0, 1, 2, and 3 are SPICE related, and may be ignored for the moment. The power factor for the circuit, overall, has been substantially improved. The main current has been decreased from 1.41 amps to 994.7 milliamps, while the power dissipated at the load resistor remains unchanged at 119.365 watts.
The power factor is much closer to being 1:

Power factor = True power / Apparent power = 119.365 W / (120 V × 994.7 mA) ≈ 0.9999

Since the impedance angle is still a positive number, we know that the circuit, overall, is still more inductive than it is capacitive. If our power factor correction efforts had been perfectly on-target, we would have arrived at an impedance angle of exactly zero, or purely resistive. If we had added too large of a capacitor in parallel, we would have ended up with an impedance angle that was negative, indicating that the circuit was more capacitive than inductive. A SPICE simulation of the circuit of (Figure above) shows total voltage and total current are nearly in phase. The SPICE circuit file has a zero volt voltage-source (V2) in series with the capacitor so that the capacitor current may be measured. The start time of 200 msec (instead of 0) in the transient analysis statement allows the DC conditions to stabilize before collecting data. See SPICE listing “pf.cir power factor”. The Nutmeg plot of the various currents with respect to the applied voltage Vtotal is shown in (Figure below). The reference is Vtotal, to which all other measurements are compared. This is because the applied voltage, Vtotal, appears across the parallel branches of the circuit. There is no single current common to all components. We can compare those currents to Vtotal. Zero phase angle due to in-phase Vtotal and Itotal. The lagging IL with respect to Vtotal is corrected by a leading IC. Note that the total current (Itotal) is in phase with the applied voltage (Vtotal), indicating a phase angle of near zero. This is no coincidence. Note that the lagging current, IL, of the inductor would have caused the total current to have a lagging phase somewhere between (Itotal) and IL. However, the leading capacitor current, IC, compensates for the lagging inductor current. The result is a total current phase-angle somewhere between the inductor and capacitor currents. Moreover, that total current (Itotal) was forced to be in-phase with the total applied voltage (Vtotal), by the calculation of an appropriate capacitor value. Since the total voltage and current are in phase, the product of these two waveforms, power, will always be positive throughout a 60 Hz cycle, real power as in Figure above. Had the phase-angle not been corrected to zero (PF=1), the product would have been negative where positive portions of one waveform overlapped negative portions of the other as in Figure above. Negative power is fed back to the generator. It cannot be sold, though it does waste power in the resistance of electric lines between load and generator. The parallel capacitor corrects this problem. Note that reduction of line losses applies to the lines from the generator to the point where the power factor correction capacitor is applied. In other words, there is still circulating current between the capacitor and the inductive load. This is not normally a problem because the power factor correction is applied close to the offending load, like an induction motor. It should be noted that too much capacitance in an AC circuit will result in a low power factor just as well as too much inductance. You must be careful not to over-correct when adding capacitance to an AC circuit. You must also be very careful to use the proper capacitors for the job (rated adequately for power system voltages and the occasional voltage spike from lightning strikes, for continuous AC service, and capable of handling the expected levels of current).
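For those without SPICE handy, the corrected circuit can also be checked directly with complex arithmetic. Here is a short sketch using the component values above:

```python
import cmath, math

E, f = 120.0, 60.0
R, L, C = 60.0, 160e-3, 22e-6               # load and correction-capacitor values

ZL = complex(R, 2 * math.pi * f * L)        # series R-L load branch
ZC = complex(0, -1 / (2 * math.pi * f * C)) # parallel correction capacitor
Z = ZL * ZC / (ZL + ZC)                     # total impedance of the parallel combination

I = E / abs(Z)                              # total line current: ~0.995 A, down from ~1.410 A
angle = math.degrees(cmath.phase(Z))        # small positive angle: still slightly inductive
print(I, angle)
print(E * I * math.cos(cmath.phase(Z)))     # true power still ~119.4 W
```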
If a circuit is predominantly inductive, we say that its power factor is lagging (because the current wave for the circuit lags behind the applied voltage wave). Conversely, if a circuit is predominantly capacitive, we say that its power factor is leading. Thus, our example circuit started out with a power factor of 0.705 lagging, and was corrected to a power factor of 0.999 lagging.
Review
• Poor power factor in an AC circuit may be “corrected”, or re-established at a value close to 1, by adding a parallel reactance opposite the effect of the load’s reactance. If the load’s reactance is inductive in nature (which it almost always will be), parallel capacitance is what is needed to correct poor power factor.
Practical Power Factor Correction
When the need arises to correct for poor power factor in an AC power system, you probably won’t have the luxury of knowing the load’s exact inductance in henrys to use for your calculations. You may be fortunate enough to have an instrument called a power factor meter to tell you what the power factor is (a number between 0 and 1), and the apparent power (which can be figured by taking a voltmeter reading in volts and multiplying by an ammeter reading in amps). In less favorable circumstances, you may have to use an oscilloscope to compare voltage and current waveforms, measuring phase shift in degrees and calculating power factor by the cosine of that phase shift. Most likely, you will have access to a wattmeter for measuring true power, whose reading you can compare against a calculation of apparent power (from multiplying total voltage and total current measurements). From the values of true and apparent power, you can determine reactive power and power factor. Let’s do an example problem to see how this works: (Figure below) Wattmeter reads true power; product of voltmeter and ammeter readings yields apparent power.
How to Calculate the Apparent Power in kVA
First, we need to calculate the apparent power in kVA. We can do this by multiplying load voltage by load current:

S = IE = (9.615 A)(240 V) = 2.308 kVA

As we can see, 2.308 kVA is a much larger figure than 1.5 kW, which tells us that the power factor in this circuit is rather poor (substantially less than 1). Now, we figure the power factor of this load by dividing the true power by the apparent power:

Power factor = P/S = 1.5 kW / 2.308 kVA = 0.65

Using this value for power factor, we can draw a power triangle, and from that determine the reactive power of this load: (Figure below) Reactive power may be calculated from true power and apparent power.
How to Use the Pythagorean Theorem to Determine Unknown Triangle Quantity
To determine the unknown (reactive power) triangle quantity, we use the Pythagorean Theorem “backwards,” given the length of the hypotenuse (apparent power) and the length of the adjacent side (true power):

Q = √(S² − P²) = √((2.308 kVA)² − (1.5 kW)²) = 1.754 kVAR

How to Correct Power Factor with a Capacitor
If this load is an electric motor or most any other industrial AC load, it will have a lagging (inductive) power factor, which means that we’ll have to correct for it with a capacitor of appropriate size, wired in parallel. Now that we know the amount of reactive power (1.754 kVAR), we can calculate the size of the capacitor needed to counteract its effects:

X = E²/Q = (240 V)² / 1.754 kVAR = 32.845 Ω
C = 1/(2πfX) = 1/(2π × 60 Hz × 32.845 Ω) = 80.761 µF

Rounding this answer off to 80 µF, we can place that size of capacitor in the circuit and calculate the results: (Figure below) Parallel capacitor corrects lagging (inductive) load. An 80 µF capacitor will have a capacitive reactance of 33.157 Ω, giving a current of 7.238 amps, and a corresponding reactive power of 1.737 kVAR (for the capacitor only). Since the capacitor’s current is 180o out of phase from the load’s inductive contribution to current draw, the capacitor’s reactive power will directly subtract from the load’s reactive power, resulting in:

Q = 1.754 kVAR − 1.737 kVAR ≈ 17 VAR (inductive)

This correction, of course, will not change the amount of true power consumed by the load, but it will result in a substantial reduction of apparent power, and of the total current drawn from the 240 Volt source: (Figure below) Power triangle before and after capacitor correction. The new apparent power can be found from the true and new reactive power values, using the standard form of the Pythagorean Theorem:

S = √(P² + Q²) = √((1.5 kW)² + (17 VAR)²) ≈ 1.5 kVA

This gives a corrected line current of approximately 6.25 amps (1.5 kVA / 240 V), down from the original 9.615 amps.
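The entire correction procedure condenses into a few lines of arithmetic. Here is a sketch of the whole workflow, assuming a 60 Hz system (consistent with the 33.157 Ω reactance figure above):

```python
import math

f = 60.0                        # line frequency (assumed)
E, I = 240.0, 9.615             # voltmeter and ammeter readings
P = 1500.0                      # wattmeter (true power) reading

S = E * I                       # apparent power: ~2308 VA
pf = P / S                      # power factor: ~0.65
Q = math.sqrt(S**2 - P**2)      # reactive power: ~1754 VAR (inductive)

X = E**2 / Q                    # capacitive reactance needed: ~32.8 ohms
C = 1 / (2 * math.pi * f * X)   # ideal capacitor: ~80.8 uF
C = 80e-6                       # round off to a standard value

Qc = E**2 * (2 * math.pi * f * C)  # capacitor's reactive power: ~1737 VAR
S_new = math.hypot(P, Q - Qc)      # corrected apparent power: ~1500 VA
print(pf, S_new / E)               # corrected line current: ~6.25 A
```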
AC Voltmeters and Ammeters
AC electromechanical meter movements come in two basic arrangements: those based on DC movement designs, and those engineered specifically for AC use. Permanent-magnet moving coil (PMMC) meter movements will not work correctly if directly connected to alternating current, because the direction of needle movement will change with each half-cycle of the AC. (Figure below) Permanent-magnet meter movements, like permanent-magnet motors, are devices whose motion depends on the polarity of the applied voltage (or, you can think of it in terms of the direction of the current). Passing AC through this D’Arsonval meter movement causes useless flutter of the needle. In order to use a DC-style meter movement such as the D’Arsonval design, the alternating current must be rectified into DC. This is most easily accomplished through the use of devices called diodes. We saw diodes used in an example circuit demonstrating the creation of harmonic frequencies from a distorted (or rectified) sine wave. Without going into elaborate detail over how and why diodes work as they do, just remember that they each act like a one-way valve for electrons to flow: acting as a conductor for one polarity and an insulator for the other. Oddly enough, the arrowhead in each diode symbol points against the permitted direction of electron flow rather than with it as one might expect. Arranged in a bridge, four diodes will serve to steer AC through the meter movement in a constant direction throughout all portions of the AC cycle: (Figure below) Passing AC through this Rectified AC meter movement will drive it in one direction. Another strategy for a practical AC meter movement is to redesign the movement without the inherent polarity sensitivity of the DC types. This means avoiding the use of permanent magnets. Probably the simplest design is to use a nonmagnetized iron vane to move the needle against spring tension, the vane being attracted toward a stationary coil of wire energized by the AC quantity to be measured as in Figure below. Iron-vane electromechanical meter movement. Electrostatic attraction between two metal plates separated by an air gap is an alternative mechanism for generating a needle-moving force proportional to applied voltage. This works just as well for AC as it does for DC, or should I say, just as poorly! The forces involved are very small, much smaller than the magnetic attraction between an energized coil and an iron vane, and as such these “electrostatic” meter movements tend to be fragile and easily disturbed by physical movement. But, for some high-voltage AC applications, the electrostatic movement is an elegant technology. If nothing else, this technology possesses the advantage of extremely high input impedance, meaning that no current need be drawn from the circuit under test. Also, electrostatic meter movements are capable of measuring very high voltages without need for range resistors or other, external apparatus. When a sensitive meter movement needs to be re-ranged to function as an AC voltmeter, series-connected “multiplier” resistors and/or resistive voltage dividers may be employed just as in DC meter design: (Figure below) Multiplier resistor (a) or resistive divider (b) scales the range of the basic meter movement. Capacitors may be used instead of resistors, though, to make voltmeter divider circuits. This strategy has the advantage of being non-dissipative (no true power consumed and no heat produced): (Figure below) AC voltmeter with capacitive divider.
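To illustrate with hypothetical numbers (these component values are invented for the example, not taken from the figure): two capacitors in series divide an AC voltage in inverse proportion to their capacitances, since their reactances divide just as series resistances would:

```python
import math

f = 60.0
C1, C2 = 0.01e-6, 1.0e-6           # hypothetical divider capacitors; meter reads across C2
Xc1 = 1 / (2 * math.pi * f * C1)
Xc2 = 1 / (2 * math.pi * f * C2)

Vin = 600.0                         # hypothetical source voltage
Vmeter = Vin * Xc2 / (Xc1 + Xc2)    # ~5.9 V: roughly a 100:1 reduction
print(Vmeter)                       # and no true power is consumed by the divider
```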
If the meter movement is electrostatic, and thus inherently capacitive in nature, a single “multiplier” capacitor may be connected in series to give it a greater voltage measuring range, just as a series-connected multiplier resistor gives a moving-coil (inherently resistive) meter movement a greater voltage range: (Figure below) An electrostatic meter movement may use a capacitive multiplier to multiply the scale of the basic meter movement. The Cathode Ray Tube (CRT) mentioned in the DC metering chapter is ideally suited for measuring AC voltages, especially if the electron beam is swept side-to-side across the screen of the tube while the measured AC voltage drives the beam up and down. A graphical representation of the AC wave shape and not just a measurement of magnitude can easily be had with such a device. However, CRT’s have the disadvantages of weight, size, significant power consumption, and fragility (being made of evacuated glass) working against them. For these reasons, electromechanical AC meter movements still have a place in practical usage. With some of the advantages and disadvantages of these meter movement technologies having been discussed already, there is another factor crucially important for the designer and user of AC metering instruments to be aware of. This is the issue of RMS measurement. As we already know, AC measurements are often cast in a scale of DC power equivalence, called RMS (Root-Mean-Square), for the sake of meaningful comparisons with DC and with other AC waveforms of varying shape. None of the meter movement technologies so far discussed inherently measure the RMS value of an AC quantity. Meter movements relying on the motion of a mechanical needle (“rectified” D’Arsonval, iron-vane, and electrostatic) all tend to mechanically average the instantaneous values into an overall average value for the waveform. This average value is not necessarily the same as RMS, although many times it is mistaken as such. Average and RMS values rate against each other as such for these three common waveform shapes: (Figure below) RMS, Average, and Peak-to-Peak values for sine, square, and triangle waves. Since RMS seems to be the kind of measurement most people are interested in obtaining with an instrument, and electromechanical meter movements naturally deliver average measurements rather than RMS, what are AC meter designers to do? Cheat, of course! Typically the assumption is made that the waveform shape to be measured is going to be sine (by far the most common, especially for power systems), and then the meter movement scale is altered by the appropriate multiplication factor. For sine waves we see that RMS is equal to 0.707 times the peak value while Average is 0.637 times the peak, so we can divide one figure by the other to obtain an average-to-RMS conversion factor of 1.109:

0.707 / 0.637 = 1.109

In other words, the meter movement will be calibrated to indicate approximately 1.11 times higher than it would ordinarily (naturally) indicate with no special accommodations. It must be stressed that this “cheat” only works well when the meter is used to measure pure sine wave sources. Note that for triangle waves, the ratio between RMS and Average is not the same as for sine waves:

0.577 / 0.5 = 1.154

With square waves, the RMS and Average values are identical! An AC meter calibrated to accurately read RMS voltage or current on a pure sine wave will not give the proper value while indicating the magnitude of anything other than a perfect sine wave.
This includes triangle waves, square waves, or any kind of distorted sine wave. With harmonics becoming an ever-present phenomenon in large AC power systems, this matter of accurate RMS measurement is no small matter. The astute reader will note that I have omitted the CRT “movement” from the RMS/Average discussion. This is because a CRT with its practically weightless electron beam “movement” displays the Peak (or Peak-to-Peak if you wish) of an AC waveform rather than Average or RMS. Still, a similar problem arises: how do you determine the RMS value of a waveform from it? Conversion factors between Peak and RMS only hold so long as the waveform falls neatly into a known category of shape (sine, triangle, and square are the only examples with Peak/RMS/Average conversion factors given here!). One answer is to design the meter movement around the very definition of RMS: the effective heating value of an AC voltage/current as it powers a resistive load. Suppose that the AC source to be measured is connected across a resistor of known value, and the heat output of that resistor is measured with a device like a thermocouple. This would provide a far more direct means of RMS measurement than any conversion factor could, for it will work with ANY waveform shape whatsoever: (Figure below) Direct reading thermal RMS voltmeter accommodates any wave shape. While the device shown above is somewhat crude and would suffer from unique engineering problems of its own, the concept illustrated is very sound. The resistor converts the AC voltage or current quantity into a thermal (heat) quantity, effectively squaring the values in real-time. The system’s mass works to average these values by the principle of thermal inertia, and then the meter scale itself is calibrated to give an indication based on the square-root of the thermal measurement: perfect Root-Mean-Square indication all in one device! In fact, one major instrument manufacturer has implemented this technique in its high-end line of handheld electronic multimeters for “true-RMS” capability. Calibrating AC voltmeters and ammeters for different full-scale ranges of operation is much the same as with DC instruments: series “multiplier” resistors are used to give voltmeter movements higher range, and parallel “shunt” resistors are used to allow ammeter movements to measure currents beyond their natural range. However, we are not limited to these techniques as we were with DC: because we can use transformers with AC, meter ranges can be electromagnetically rather than resistively “stepped up” or “stepped down,” sometimes far beyond what resistors would have practically allowed for. Potential Transformers (PT’s) and Current Transformers (CT’s) are precision instrument devices manufactured to produce very precise ratios of transformation between primary and secondary windings. They can allow small, simple AC meter movements to indicate extremely high voltages and currents in power systems with accuracy and complete electrical isolation (something multiplier and shunt resistors could never do): (Figure below) (CT) Current transformer scales current down. (PT) Potential transformer scales voltage down. Shown here is a voltage and current meter panel from a three-phase AC system. The three “donut” current transformers (CT’s) can be seen in the rear of the panel. Three AC ammeters (rated 5 amps full-scale deflection each) on the front of the panel indicate current through each conductor going through a CT.
As this panel has been removed from service, there are no current-carrying conductors threaded through the center of the CT “donuts” anymore: (Figure below) Toroidal current transformers scale high current levels down for application to 5 A full-scale AC ammeters. Because of the expense (and often large size) of instrument transformers, they are not used to scale AC meters for any applications other than high voltage and high current. For scaling a milliamp or microamp movement to a range of 120 volts or 5 amps, normal precision resistors (multipliers and shunts) are used, just as with DC.
Review
• Polarized (DC) meter movements must use devices called diodes to be able to indicate AC quantities.
• Electromechanical meter movements, whether electromagnetic or electrostatic, naturally provide the average value of a measured AC quantity. These instruments may be ranged to indicate RMS value, but only if the shape of the AC waveform is precisely known beforehand!
• So-called true RMS meters use different technology to provide indications representing the actual RMS (rather than skewed average or peak) of an AC waveform.
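The size of the error a sine-calibrated, average-responding meter makes on other wave shapes is easy to quantify numerically. A short sketch:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)            # one cycle of a unit-amplitude wave
waves = {
    "sine":     np.sin(2 * np.pi * t),
    "square":   np.sign(np.sin(2 * np.pi * t)),
    "triangle": (2 / np.pi) * np.arcsin(np.sin(2 * np.pi * t)),
}

for name, v in waves.items():
    true_rms = np.sqrt(np.mean(v**2))        # actual RMS value
    avg = np.mean(np.abs(v))                 # what an average-responding movement senses
    reading = 1.109 * avg                    # indication after the sine-wave "cheat" factor
    print(name, true_rms, reading)           # readings agree only for the sine wave
```

Running this shows the square wave reading about 11 percent high and the triangle wave about 4 percent low, which is why true-RMS instruments matter once harmonics appear.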
Frequency and Phase Measurement
An important electrical quantity with no equivalent in DC circuits is frequency. Frequency measurement is very important in many applications of alternating current, especially in AC power systems designed to run efficiently at one frequency and one frequency only. If the AC is being generated by an electromechanical alternator, the frequency will be directly proportional to the shaft speed of the machine, and frequency could be measured simply by measuring the speed of the shaft. If frequency needs to be measured at some distance from the alternator, though, other means of measurement will be necessary. One simple but crude method of frequency measurement in power systems utilizes the principle of mechanical resonance. Every physical object possessing the property of elasticity (springiness) has an inherent frequency at which it will prefer to vibrate. The tuning fork is a great example of this: strike it once and it will continue to vibrate at a tone specific to its length. Longer tuning forks have lower resonant frequencies: their tones will be lower on the musical scale than shorter forks. Imagine a row of progressively-sized tuning forks arranged side-by-side. They are all mounted on a common base, and that base is vibrated at the frequency of the measured AC voltage (or current) by means of an electromagnet. Whichever tuning fork is closest in resonant frequency to the frequency of that vibration will tend to shake the most (or the loudest). If the forks’ tines were flimsy enough, we could see the relative motion of each by the length of the blur we would see as we inspected each one from an end-view perspective. Well, make a collection of “tuning forks” out of a strip of sheet metal cut in a pattern akin to a rake, and you have the vibrating reed frequency meter: (Figure below) Vibrating reed frequency meter diagram. The user of this meter views the ends of all those unequal length reeds as they are collectively shaken at the frequency of the applied AC voltage to the coil. The one closest in resonant frequency to the applied AC will vibrate the most, looking something like Figure below. Vibrating reed frequency meter front panel. Vibrating reed meters, obviously, are not precision instruments, but they are very simple and therefore easy to manufacture to be rugged. They are often found on small engine-driven generator sets for the purpose of setting engine speed so that the frequency is somewhat close to 60 (50 in Europe) Hertz. While reed-type meters are imprecise, their operational principle is not. In lieu of mechanical resonance, we may substitute electrical resonance and design a frequency meter using an inductor and capacitor in the form of a tank circuit (parallel inductor and capacitor). See Figure below. One or both components are made adjustable, and a meter is placed in the circuit to indicate maximum amplitude of voltage across the two components. The adjustment knob(s) are calibrated to show resonant frequency for any given setting, and the frequency is read from them after the device has been adjusted for maximum indication on the meter. Essentially, this is a tunable filter circuit which is adjusted and then read in a manner similar to a bridge circuit (which must be balanced for a “null” condition and then read). Resonant frequency meter “peaks” as L-C resonant frequency is tuned to test frequency. 
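The dial calibration of such a resonant meter follows directly from the parallel L-C resonance formula. Here is a small sketch with assumed component values (chosen for illustration only):

```python
import math

L = 100e-6                        # assumed fixed inductor, 100 microhenrys

def dial_frequency(C):
    # Resonant frequency of the parallel L-C tank: f = 1 / (2*pi*sqrt(L*C))
    return 1 / (2 * math.pi * math.sqrt(L * C))

for C in (253e-12, 70e-12):       # two assumed settings of the variable capacitor
    print(dial_frequency(C))      # ~1.0 MHz and ~1.9 MHz respectively
```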
This technique is a popular one for amateur radio operators (or at least it was before the advent of inexpensive digital frequency instruments called counters), especially because it doesn't require direct connection to the circuit. So long as the inductor and/or capacitor can intercept enough stray field (magnetic or electric, respectively) from the circuit under test to cause the meter to indicate, it will work.

In frequency as in other types of electrical measurement, the most accurate means of measurement are usually those where an unknown quantity is compared against a known standard, the basic instrument doing nothing more than indicating when the two quantities are equal to each other. This is the basic principle behind the DC (Wheatstone) bridge circuit, and it is a sound metrological principle applied throughout the sciences. If we have access to an accurate frequency standard (a source of AC voltage holding very precisely to a single frequency), then measurement of any unknown frequency by comparison should be relatively easy.

For that frequency standard, we turn our attention back to the tuning fork, or at least a more modern variation of it called the quartz crystal. Quartz is a naturally occurring mineral possessing a very interesting property called piezoelectricity. Piezoelectric materials produce a voltage across their length when physically stressed, and will physically deform when an external voltage is applied across their lengths. This deformation is very, very slight in most cases, but it does exist.

Quartz rock is elastic (springy) within that small range of bending which an external voltage would produce, which means that it will have a mechanical resonant frequency of its own, capable of being manifested as an electrical voltage signal. In other words, if a chip of quartz is struck, it will "ring" with its own unique frequency determined by the length of the chip, and that resonant oscillation will produce an equivalent voltage across multiple points of the quartz chip which can be tapped into by wires fixed to the surface of the chip. In reciprocal manner, the quartz chip will tend to vibrate most when it is "excited" by an applied AC voltage at precisely the right frequency, just like the reeds on a vibrating-reed frequency meter.

Chips of quartz rock can be precisely cut for desired resonant frequencies, and that chip mounted securely inside a protective shell with wires extending for connection to an external electric circuit. When packaged as such, the resulting device is simply called a crystal (or sometimes "xtal"). The schematic symbol is shown in Figure below.

Crystal (frequency-determining element) schematic symbol.

Electrically, that quartz chip is equivalent to a series LC resonant circuit. (Figure below) The dielectric properties of quartz contribute an additional capacitive element to the equivalent circuit.

Quartz crystal equivalent circuit.

The "capacitance" and "inductance" shown in series are merely electrical equivalents of the quartz's mechanical resonance properties: they do not exist as discrete components within the crystal. The capacitance shown in parallel, due to the wire connections across the dielectric (insulating) quartz body, is real, and it has an effect on the resonant response of the whole system.
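Because of that parallel capacitance, the equivalent circuit exhibits two resonant points: a series-resonant frequency set by the motional L and C alone, and a slightly higher parallel-resonant (antiresonant) frequency in which the parallel capacitance participates. A minimal sketch follows; the component values are invented for illustration, as real crystal parameters vary widely with cut and frequency:

import math

def series_resonance(Ls, Cs):
    # Series-resonant frequency of the motional inductance and capacitance
    return 1.0 / (2.0 * math.pi * math.sqrt(Ls * Cs))

def parallel_resonance(Ls, Cs, Cp):
    # Antiresonance: motional Cs acts in series with the parallel capacitance Cp
    C_eff = (Cs * Cp) / (Cs + Cp)
    return 1.0 / (2.0 * math.pi * math.sqrt(Ls * C_eff))

# Hypothetical motional parameters for a quartz crystal:
Ls = 0.1      # henrys
Cs = 25e-15   # farads (25 femtofarads)
Cp = 5e-12    # farads (5 picofarads)

print(series_resonance(Ls, Cs))        # ~3.18 MHz
print(parallel_resonance(Ls, Cs, Cp))  # slightly higher than series resonance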
A full discussion on crystal dynamics is not necessary here, but what needs to be understood about crystals is this resonant circuit equivalence and how it can be exploited within an oscillator circuit to achieve an output voltage with a stable, known frequency. Crystals, as resonant elements, typically have much higher "Q" (quality) values than tank circuits built from inductors and capacitors, principally due to the relative absence of stray resistance, making their resonant frequencies very definite and precise. Because the resonant frequency is solely dependent on the physical properties of quartz (a very stable substance, mechanically), the resonant frequency variation over time with a quartz crystal is very, very low. This is how quartz movement watches obtain their high accuracy: by means of an electronic oscillator stabilized by the resonant action of a quartz crystal.

For laboratory applications, though, even greater frequency stability may be desired. To achieve this, the crystal in question may be placed in a temperature stabilized environment (usually an oven), thus eliminating frequency errors due to thermal expansion and contraction of the quartz.

For the ultimate in a frequency standard though, nothing discovered thus far surpasses the accuracy of a single resonating atom. This is the principle of the so-called atomic clock, which uses an atom of mercury (or cesium) suspended in a vacuum, excited by outside energy to resonate at its own unique frequency. The resulting frequency is detected as a radio-wave signal and that forms the basis for the most accurate clocks known to humanity. National standards laboratories around the world maintain a few of these hyper-accurate clocks, and broadcast frequency signals based on those atoms' vibrations for scientists and technicians to tune in and use for frequency calibration purposes.

Now we get to the practical part: once we have a source of accurate frequency, how do we compare that against an unknown frequency to obtain a measurement? One way is to use a CRT as a frequency-comparison device. Cathode Ray Tubes typically have means of deflecting the electron beam in the horizontal as well as the vertical axis. If metal plates are used to electrostatically deflect the electrons, there will be a pair of plates to the left and right of the beam as well as a pair of plates above and below the beam, as in Figure below.

Cathode ray tube (CRT) with vertical and horizontal deflection plates.

If we allow one AC signal to deflect the beam up and down (connect that AC voltage source to the "vertical" deflection plates) and another AC signal to deflect the beam left and right (using the other pair of deflection plates), patterns will be produced on the screen of the CRT indicative of the ratio of these two AC frequencies. These patterns are called Lissajous figures and are a common means of comparative frequency measurement in electronics.
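The shapes are easy to reproduce numerically: plot one sinusoid against the other, just as the CRT does with its two pairs of deflection plates. A minimal sketch (the frequencies and phase values are arbitrary illustrations):

import math

def lissajous_points(f_horizontal, f_vertical, phase_deg, n=1000):
    """Generate (x, y) points of a Lissajous figure: two sinusoids
    plotted against each other, as on a CRT screen."""
    phase = math.radians(phase_deg)
    points = []
    for i in range(n):
        t = i / n
        x = math.sin(2 * math.pi * f_horizontal * t)
        y = math.sin(2 * math.pi * f_vertical * t + phase)
        points.append((x, y))
    return points

# Equal frequencies, 0 degrees apart: a straight diagonal line
line = lissajous_points(1, 1, 0)
# Equal frequencies, 90 degrees apart (equal amplitudes): a circle
circle = lissajous_points(1, 1, 90)
# Horizontal frequency twice the vertical: a more complex figure
ratio_2_1 = lissajous_points(2, 1, 0)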
If the two frequencies are the same, we will obtain a simple figure on the screen of the CRT, the shape of that figure being dependent upon the phase shift between the two AC signals. Here is a sampling of Lissajous figures for two sine-wave signals of equal frequency, shown as they would appear on the face of an oscilloscope (an AC voltage-measuring instrument using a CRT as its "movement"). The first picture is of the Lissajous figure formed by two AC voltages perfectly in phase with each other: (Figure below)

Lissajous figure: same frequency, zero degrees phase shift.

If the two AC voltages are not in phase with each other, a straight line will not be formed. Rather, the Lissajous figure will take on the appearance of an oval, becoming perfectly circular if the phase shift is exactly 90° between the two signals, and if their amplitudes are equal: (Figure below)

Lissajous figure: same frequency, 90 or 270 degrees phase shift.

Finally, if the two AC signals are directly opposing one another in phase (180° shift), we will end up with a line again, only this time it will be oriented in the opposite direction: (Figure below)

Lissajous figure: same frequency, 180 degrees phase shift.

When we are faced with signal frequencies that are not the same, Lissajous figures get quite a bit more complex. Consider the following examples and their given vertical/horizontal frequency ratios: (Figure below)

Lissajous figure: Horizontal frequency is twice that of vertical.

The more complex the ratio between horizontal and vertical frequencies, the more complex the Lissajous figure. Consider the following illustration of a 3:1 frequency ratio between horizontal and vertical: (Figure below)

Lissajous figure: Horizontal frequency is three times that of vertical.

. . . and a 3:2 frequency ratio (horizontal = 3, vertical = 2) in Figure below.

Lissajous figure: Horizontal/vertical frequency ratio is 3:2.

In cases where the frequencies of the two AC signals are not exactly a simple ratio of each other (but close), the Lissajous figure will appear to "move," slowly changing orientation as the phase angle between the two waveforms rolls between 0° and 180°. If the two frequencies are locked in an exact integer ratio between each other, the Lissajous figure will be stable on the viewscreen of the CRT.

The physics of Lissajous figures limits their usefulness as a frequency-comparison technique to cases where the frequency ratios are simple integer values (1:1, 1:2, 1:3, 2:3, 3:4, etc.). Despite this limitation, Lissajous figures are a popular means of frequency comparison wherever an accessible frequency standard (signal generator) exists.

Review

• Some frequency meters work on the principle of mechanical resonance, indicating frequency by relative oscillation among a set of uniquely tuned "reeds" shaken at the measured frequency.

• Other frequency meters use electric resonant circuits (LC tank circuits, usually) to indicate frequency. One or both components are made to be adjustable, with an accurately calibrated adjustment knob, and a sensitive meter is read for maximum voltage or current at the point of resonance.

• Frequency can be measured in a comparative fashion, as is the case when using a CRT to generate Lissajous figures. Reference frequency signals can be made with a high degree of accuracy by oscillator circuits using quartz crystals as resonant devices. For ultra precision, atomic clock signal standards (based on the resonant frequencies of individual atoms) can be used.
12.03: Power Measurement
Power measurement in AC circuits can be quite a bit more complex than with DC circuits for the simple reason that phase shift complicates the matter beyond multiplying voltage by current figures obtained with meters. What is needed is an instrument able to determine the product (multiplication) of instantaneous voltage and current. Fortunately, the common electrodynamometer movement with its stationary and moving coil does a fine job of this.

Three-phase power measurement can be accomplished using two dynamometer movements with a common shaft linking the two moving coils together so that a single pointer registers power on a meter movement scale. This, obviously, makes for a rather expensive and complex movement mechanism, but it is a workable solution.

An ingenious method of deriving an electronic power meter (one that generates an electric signal representing power in the system rather than merely moving a pointer) is based on the Hall effect. The Hall effect is an unusual effect first noticed by E. H. Hall in 1879, whereby a voltage is generated along the width of a current-carrying conductor exposed to a perpendicular magnetic field: (Figure below)

Hall effect: Voltage is proportional to current and strength of the perpendicular magnetic field.

The voltage generated across the width of the flat, rectangular conductor is directly proportional to both the magnitude of the current through it and the strength of the magnetic field. Mathematically, it is a product (multiplication) of these two variables. The amount of "Hall Voltage" produced for any given set of conditions also depends on the type of material used for the flat, rectangular conductor. It has been found that specially prepared "semiconductor" materials produce a greater Hall voltage than do metals, and so modern Hall Effect devices are made of these.

It makes sense then that if we were to build a device using a Hall-effect sensor where the current through the conductor was pushed by AC voltage from an external circuit and the magnetic field was set up by a pair of wire coils energized by the current of the AC power circuit, the Hall voltage would be in direct proportion to the multiple of circuit current and voltage. Having no mass to move (unlike an electromechanical movement), this device is able to provide instantaneous power measurement: (Figure below)

Hall effect power sensor measures instantaneous power.

Not only will the output voltage of the Hall effect device be the representation of instantaneous power at any point in time, but it will also be a DC signal! This is because the Hall voltage polarity is dependent upon both the polarity of the magnetic field and the direction of current through the conductor. If both current direction and magnetic field polarity reverse, as they would every half-cycle of the AC power, the output voltage polarity will stay the same.

If voltage and current in the power circuit are 90° out of phase (a power factor of zero, meaning no real power delivered to the load), the alternate peaks of Hall device current and magnetic field will never coincide with each other: when one is at its peak, the other will be zero. At those points in time, the Hall output voltage will likewise be zero, being the product (multiplication) of current and magnetic field strength. Between those points in time, the Hall output voltage will fluctuate equally between positive and negative, generating a signal corresponding to the instantaneous absorption and release of power through the reactive load.
The net DC output voltage will be zero, indicating zero true power in the circuit.

Any phase shift between voltage and current in the power circuit less than 90° will result in a Hall output voltage that oscillates between positive and negative, but spends more time positive than negative. Consequently there will be a net DC output voltage. Conditioned through a low-pass filter circuit, this net DC voltage can be separated from the AC mixed with it, the final output signal registered on a sensitive DC meter movement.

Often it is useful to have a meter to totalize power usage over a period of time rather than indicate it instantaneously. The output of such a meter can be set in units of Joules, or total energy consumed, since power is a measure of work being done per unit time. Or, more commonly, the output of the meter can be set in units of Watt-Hours.
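The averaging behavior described above is easy to verify numerically. In this minimal sketch (peak values are arbitrary examples), the product of sinusoidal voltage and current is averaged over one full cycle; the result equals (Vpeak × Ipeak / 2) × cos φ, the true power, and falls to zero at 90° of phase shift:

import math

def average_power(v_peak, i_peak, phase_deg, samples=10000):
    """Average of instantaneous p(t) = v(t) * i(t) over one full cycle."""
    phase = math.radians(phase_deg)
    total = 0.0
    for n in range(samples):
        wt = 2 * math.pi * n / samples
        v = v_peak * math.sin(wt)
        i = i_peak * math.sin(wt - phase)
        total += v * i
    return total / samples

print(average_power(170, 10, 0))    # ~850 W: (170 * 10 / 2) * cos(0)
print(average_power(170, 10, 60))   # ~425 W: (170 * 10 / 2) * cos(60 deg)
print(average_power(170, 10, 90))   # ~0 W: purely reactive load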
12.04: Power Quality Measurement
It used to be with large AC power systems that "power quality" was an unheard-of concept, aside from power factor. Almost all loads were of the "linear" variety, meaning that they did not distort the shape of the voltage sine wave, or cause non-sinusoidal currents to flow in the circuit. This is not true anymore. Loads controlled by "nonlinear" electronic components are becoming more prevalent in both home and industry, meaning that the voltages and currents in the power system(s) feeding these loads are rich in harmonics: what should be nice, clean sine-wave voltages and currents are becoming highly distorted, which is equivalent to the presence of an infinite series of high-frequency sine waves at multiples of the fundamental power line frequency.

Excessive harmonics in an AC power system can overheat transformers, cause exceedingly high neutral conductor currents in three-phase systems, create electromagnetic "noise" in the form of radio emissions that can interfere with sensitive electronic equipment, reduce electric motor horsepower output, and can be difficult to pinpoint. With problems like these plaguing power systems, engineers and technicians require ways to precisely detect and measure these conditions.

Power Quality is the general term given to represent an AC power system's freedom from harmonic content. A "power quality" meter is one that gives some form of harmonic content indication.

A simple way for a technician to determine power quality in their system without sophisticated equipment is to compare voltage readings between two accurate voltmeters measuring the same system voltage: one meter being an "averaging" type of unit (such as an electromechanical movement meter) and the other being a "true-RMS" type of unit (such as a high-quality digital meter). Remember that "averaging" type meters are calibrated so that their scales indicate volts RMS, based on the assumption that the AC voltage being measured is sinusoidal. If the voltage is anything but sinewave-shaped, the averaging meter will not register the proper value, whereas the true-RMS meter always will, regardless of waveshape. The rule of thumb here is this: the greater the disparity between the two meters, the worse the power quality is, and the greater its harmonic content. A power system with good quality power should generate equal voltage readings between the two meters, to within the rated error tolerance of the two instruments.

Another qualitative measurement of power quality is the oscilloscope test: connect an oscilloscope (CRT) to the AC voltage and observe the shape of the wave. Anything other than a clean sine wave could be an indication of trouble: (Figure below)

This is a moderately ugly "sine" wave. Definite harmonic content here!
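The two-voltmeter comparison described above can be simulated. An averaging meter responds to the rectified average of the waveform, scaled by the sine-wave form factor (about 1.1107) so it reads RMS for a pure sine wave; a true-RMS meter computes the actual root-mean-square. A minimal sketch comparing both readings on a clean waveform versus a distorted one (amplitudes are arbitrary examples):

import math

FORM_FACTOR = math.pi / (2 * math.sqrt(2))  # ~1.1107: sine-wave RMS/average ratio

def meter_readings(waveform, samples=10000):
    """Return (averaging-meter reading, true-RMS reading) over one cycle."""
    rectified_sum = 0.0
    squared_sum = 0.0
    for n in range(samples):
        wt = 2 * math.pi * n / samples
        v = waveform(wt)
        rectified_sum += abs(v)
        squared_sum += v * v
    avg_responding = (rectified_sum / samples) * FORM_FACTOR
    true_rms = math.sqrt(squared_sum / samples)
    return avg_responding, true_rms

clean = lambda wt: 170 * math.sin(wt)
distorted = lambda wt: 170 * math.sin(wt) + 40 * math.sin(3 * wt)  # added 3rd harmonic

print(meter_readings(clean))      # both read ~120.2 V: good power quality
print(meter_readings(distorted))  # readings disagree: harmonic content present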
Still, if quantitative analysis (definite, numerical figures) is necessary, there is no substitute for an instrument specifically designed for that purpose. Such an instrument is called a power quality meter and is sometimes better known in electronic circles as a low-frequency spectrum analyzer.

What this instrument does is provide a graphical representation on a CRT or digital display screen of the AC voltage's frequency "spectrum." Just as a prism splits a beam of white light into its constituent color components (how much red, orange, yellow, green, and blue is in that light), the spectrum analyzer splits a mixed-frequency signal into its constituent frequencies, and displays the result in the form of a histogram: (Figure below)

Power quality meter is a low frequency spectrum analyzer.

Each number on the horizontal scale of this meter represents a harmonic of the fundamental frequency. For American power systems, the "1" represents 60 Hz (the 1st harmonic, or fundamental), the "3" for 180 Hz (the 3rd harmonic), the "5" for 300 Hz (the 5th harmonic), and so on. The black rectangles represent the relative magnitudes of each of these harmonic components in the measured AC voltage. A pure, 60 Hz sine wave would show only a tall black bar over the "1" with no black bars showing at all over the other frequency markers on the scale, because a pure sine wave has no harmonic content.

Power quality meters such as this might be better referred to as overtone meters, because they are designed to display only those frequencies known to be generated by the power system. In three-phase AC power systems (predominant for large power applications), even-numbered harmonics tend to be canceled out, and so the only harmonics existing in significant measure are the odd-numbered ones.
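The analysis such a meter performs amounts to a Fourier decomposition of the sampled waveform. A minimal sketch using NumPy's FFT (the waveform's harmonic amplitudes are invented for illustration) that reports the magnitude of each harmonic:

import numpy as np

# One cycle of a 60 Hz waveform with some 3rd and 5th harmonic distortion
samples = 1024
t = np.arange(samples) / samples          # one fundamental period
v = (170 * np.sin(2 * np.pi * t)
     + 30 * np.sin(2 * np.pi * 3 * t)
     + 10 * np.sin(2 * np.pi * 5 * t))

# With exactly one period sampled, FFT bin k holds the kth harmonic
spectrum = np.abs(np.fft.rfft(v)) / (samples / 2)  # peak amplitude per harmonic

for harmonic in (1, 3, 5, 7):
    print(f"harmonic {harmonic}: {spectrum[harmonic]:.1f} V peak")
# harmonic 1: 170.0, harmonic 3: 30.0, harmonic 5: 10.0, harmonic 7: ~0.0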
12.05: AC Bridge Circuits
As we saw with DC measurement circuits, the circuit configuration known as a bridge can be a very useful way to measure unknown values of resistance. This is true with AC as well, and we can apply the very same principle to the accurate measurement of unknown impedances.

To review, the bridge circuit works as a pair of two-component voltage dividers connected across the same source voltage, with a null-detector meter movement connected between them to indicate a condition of "balance" at zero volts: (Figure below)

A balanced bridge shows a "null", or minimum reading, on the indicator.

Any one of the four resistors in the above bridge can be the resistor of unknown value, and its value can be determined by a ratio of the other three, which are "calibrated," or whose resistances are known to a precise degree. When the bridge is in a balanced condition (zero voltage as indicated by the null detector), the ratio works out to be this:

R1/R2 = R3/R4

One of the advantages of using a bridge circuit to measure resistance is that the voltage of the power source is irrelevant. Practically speaking, the higher the supply voltage, the easier it is to detect a condition of imbalance between the four resistors with the null detector, and thus the more sensitive it will be. A greater supply voltage leads to the possibility of increased measurement precision. However, there will be no fundamental error introduced as a result of a lesser or greater power supply voltage, unlike other types of resistance measurement schemes.

Impedance bridges work the same, only the balance equation involves complex quantities, as both magnitude and phase across the components of the two dividers must be equal in order for the null detector to indicate "zero." The null detector, of course, must be a device capable of detecting very small AC voltages. An oscilloscope is often used for this, although very sensitive electromechanical meter movements and even headphones (small speakers) may be used if the source frequency is within audio range.

One way to maximize the effectiveness of audio headphones as a null detector is to connect them to the signal source through an impedance-matching transformer. Headphone speakers are typically low-impedance units (8 Ω), requiring substantial current to drive, and so a step-down transformer helps "match" low-current signals to the impedance of the headphone speakers. An audio output transformer works well for this purpose: (Figure below)

"Modern" low-Ohm headphones require an impedance matching transformer for use as a sensitive null detector.

Using a pair of headphones that completely surround the ears (the "closed-cup" type), I've been able to detect currents of less than 0.1 µA with this simple detector circuit. Roughly equal performance was obtained using two different step-down transformers: a small power transformer (120/6 volt ratio), and an audio output transformer (1000:8 ohm impedance ratio). With the pushbutton switch in place to interrupt current, this circuit is usable for detecting signals from DC to over 2 MHz: even if the frequency is far above or below the audio range, a "click" will be heard from the headphones each time the switch is pressed and released.

Connected to a resistive bridge, the whole circuit looks like Figure below.

Bridge with sensitive AC null detector.
Listening to the headphones as one or more of the resistor "arms" of the bridge is adjusted, a condition of balance will be realized when the headphones fail to produce "clicks" (or tones, if the bridge's power source frequency is within audio range) as the switch is actuated.

When describing general AC bridges, where impedances and not just resistances must be in proper ratio for balance, it is sometimes helpful to draw the respective bridge legs in the form of box-shaped components, each one with a certain impedance: (Figure below)

Generalized AC impedance bridge: Z = nonspecific complex impedance.

For this general form of AC bridge to balance, the impedance ratios of each branch must be equal:

Z1/Z2 = Z3/Z4

Again, it must be stressed that the impedance quantities in the above equation must be complex, accounting for both magnitude and phase angle. It is insufficient that the impedance magnitudes alone be balanced; without phase angles in balance as well, there will still be voltage across the terminals of the null detector and the bridge will not be balanced.
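Since balance requires Z1/Z2 = Z3/Z4 as complex quantities, an unknown impedance can be computed from the other three once balance is achieved. A minimal sketch using Python's built-in complex numbers (the impedance values are arbitrary illustrations):

import cmath

def unknown_impedance(z1, z2, z3):
    """Solve Z1/Z2 = Z3/Z4 for the fourth (unknown) bridge arm."""
    return z2 * z3 / z1

# Arbitrary example: three known arms
z1 = complex(1000, 0)   # 1 kOhm resistor
z2 = complex(0, 500)    # inductive reactance of 500 Ohms
z3 = complex(2000, 0)   # 2 kOhm resistor

z4 = unknown_impedance(z1, z2, z3)
print(z4)                        # 1000j Ohms: inductive
print(abs(z4), cmath.phase(z4))  # magnitude and phase angle (radians)

Note that both the magnitude and the phase of z4 are determined; this is the complex balance the text insists upon.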
Bridge circuits can be constructed to measure just about any device value desired, be it capacitance, inductance, resistance, or even "Q." As always in bridge measurement circuits, the unknown quantity is always "balanced" against a known standard, obtained from a high-quality, calibrated component that can be adjusted in value until the null detector device indicates a condition of balance. Depending on how the bridge is set up, the unknown component's value may be determined directly from the setting of the calibrated standard, or derived from that standard through a mathematical formula.

A couple of simple bridge circuits are shown below, one for inductance (Figure below) and one for capacitance: (Figure below)

Symmetrical bridge measures unknown inductor by comparison to a standard inductor.

Symmetrical bridge measures unknown capacitor by comparison to a standard capacitor.

Simple "symmetrical" bridges such as these are so named because they exhibit symmetry (mirror-image similarity) from left to right. The two bridge circuits shown above are balanced by adjusting the calibrated reactive component (Ls or Cs). They are a bit simplified from their real-life counterparts, as practical symmetrical bridge circuits often have a calibrated, variable resistor in series or parallel with the reactive component to balance out stray resistance in the unknown component. But, in the hypothetical world of perfect components, these simple bridge circuits do just fine to illustrate the basic concept.

An example of a little extra complexity added to compensate for real-world effects can be found in the so-called Wien bridge, which uses a parallel capacitor-resistor standard impedance to balance out an unknown series capacitor-resistor combination. (Figure below) All capacitors have some amount of internal resistance, be it literal or equivalent (in the form of dielectric heating losses), which tends to spoil their otherwise perfectly reactive natures. This internal resistance may be of interest to measure, and so the Wien bridge attempts to do so by providing a balancing impedance that isn't "pure" either:

Wien bridge measures both capacitive Cx and resistive Rx components of a "real" capacitor.

Being that there are two standard components to be adjusted (a resistor and a capacitor), this bridge will take a little more time to balance than the others we've seen so far.

The combined effect of Rs and Cs is to alter the magnitude and phase angle until the bridge achieves a condition of balance. Once that balance is achieved, the settings of Rs and Cs can be read from their calibrated knobs, the parallel impedance of the two determined mathematically, and the unknown capacitance and resistance determined mathematically from the balance equation (Z1/Z2 = Z3/Z4).

It is assumed in the operation of the Wien bridge that the standard capacitor has negligible internal resistance, or at least that resistance is already known so that it can be factored into the balance equation. Wien bridges are useful for determining the values of "lossy" capacitor designs like electrolytics, where the internal resistance is relatively high. They are also used as frequency meters, because the balance of the bridge is frequency-dependent. When used in this fashion, the capacitors are made fixed (and usually of equal value) and the top two resistors are made variable and are adjusted by means of the same knob.

An interesting variation on this theme is found in the next bridge circuit, used to precisely measure inductances.

Maxwell-Wien bridge measures an inductor in terms of a capacitor standard.

This ingenious bridge circuit is known as the Maxwell-Wien bridge (sometimes known plainly as the Maxwell bridge), and is used to measure unknown inductances in terms of calibrated resistance and capacitance. (Figure above) Calibration-grade inductors are more difficult to manufacture than capacitors of similar precision, and so the use of a simple "symmetrical" inductance bridge is not always practical. Because the phase shifts of inductors and capacitors are exactly opposite each other, a capacitive impedance can balance out an inductive impedance if they are located in opposite legs of a bridge, as they are here.

Another advantage of using a Maxwell bridge to measure inductance rather than a symmetrical inductance bridge is the elimination of measurement error due to mutual inductance between two inductors. Magnetic fields can be difficult to shield, and even a small amount of coupling between coils in a bridge can introduce substantial errors in certain conditions. With no second inductor to react with in the Maxwell bridge, this problem is eliminated.

For easiest operation, the standard capacitor (Cs) and the resistor in parallel with it (Rs) are made variable, and both must be adjusted to achieve balance. However, the bridge can be made to work if the capacitor is fixed (non-variable) and more than one resistor made variable (at least the resistor in parallel with the capacitor, and one of the other two). However, in the latter configuration it takes more trial-and-error adjustment to achieve balance, as the different variable resistors interact in balancing magnitude and phase.

Unlike the plain Wien bridge, the balance of the Maxwell-Wien bridge is independent of source frequency, and in some cases this bridge can be made to balance in the presence of mixed frequencies from the AC voltage source, the limiting factor being the inductor's stability over a wide frequency range.

There are more variations beyond these designs, but a full discussion is not warranted here. General-purpose impedance bridge circuits are manufactured which can be switched into more than one configuration for maximum flexibility of use.
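At balance, the Maxwell-Wien bridge yields the unknown inductance and its series resistance directly from the calibrated standards. With the usual arm labeling (two fixed resistors R2 and R3, and the parallel Rs-Cs standard opposite the unknown; the labels here are illustrative, not taken from the figures above), the balance equations work out to Lx = R2·R3·Cs and Rx = R2·R3/Rs. A minimal sketch:

def maxwell_wien_unknowns(r2, r3, rs, cs):
    """Unknown inductance and series resistance from a balanced
    Maxwell-Wien bridge (assumed arm labeling; see text)."""
    lx = r2 * r3 * cs   # henrys
    rx = r2 * r3 / rs   # ohms
    return lx, rx

# Arbitrary example settings at balance:
lx, rx = maxwell_wien_unknowns(r2=1000, r3=1000, rs=10000, cs=0.1e-6)
print(lx)  # 0.1 H
print(rx)  # 100 ohms

Notice that neither result depends on frequency, consistent with the text's observation that the Maxwell-Wien balance is independent of the source frequency.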
A potential problem in sensitive AC bridge circuits is that of stray capacitance between either end of the null detector unit and ground (earth) potential. Because capacitances can "conduct" alternating current by charging and discharging, they form stray current paths to the AC voltage source which may affect bridge balance: (Figure below)

Stray capacitance to ground may introduce errors into the bridge.

The problem is worsened if the AC voltage source is firmly grounded at one end, the total stray impedance for leakage currents made far less and any leakage currents through these stray capacitances made greater as a result: (Figure below)

Stray capacitance errors are more severe if one side of the AC supply is grounded.

One way of greatly reducing this effect is to keep the null detector at ground potential, so there will be no AC voltage between it and the ground, and thus no current through stray capacitances. However, directly connecting the null detector to ground is not an option, as it would create a direct current path for stray currents, which would be worse than any capacitive path. Instead, a special voltage divider circuit called a Wagner ground or Wagner earth may be used to maintain the null detector at ground potential without the need for a direct connection to the null detector. (Figure below)

Wagner ground for AC supply minimizes the effects of stray capacitance to ground on the bridge.

The Wagner earth circuit is nothing more than a voltage divider, designed to have the same voltage ratio and phase shift as each side of the bridge. Because the midpoint of the Wagner divider is directly grounded, any other divider circuit (including either side of the bridge) having the same voltage proportions and phases as the Wagner divider, and powered by the same AC voltage source, will be at ground potential as well. Thus, the Wagner earth divider forces the null detector to be at ground potential, without a direct connection between the detector and ground.

There is often a provision made in the null detector connection to confirm proper setting of the Wagner earth divider circuit: a two-position switch (Figure below), so that one end of the null detector may be connected to either the bridge or the Wagner earth. When the null detector registers zero signal in both switch positions, the bridge is not only guaranteed to be balanced, but the null detector is also guaranteed to be at zero potential with respect to ground, thus eliminating any errors due to leakage currents through stray detector-to-ground capacitances:

Switch-up position allows adjustment of the Wagner ground.

Review

• AC bridge circuits work on the same basic principle as DC bridge circuits: that a balanced ratio of impedances (rather than resistances) will result in a "balanced" condition as indicated by the null-detector device.
• Null detectors for AC bridges may be sensitive electromechanical meter movements, oscilloscopes (CRT's), headphones (amplified or unamplified), or any other device capable of registering very small AC voltage levels. Like DC null detectors, their only required point of calibration accuracy is at zero.

• AC bridge circuits can be of the "symmetrical" type, where an unknown impedance is balanced by a standard impedance of similar type on the same side (top or bottom) of the bridge. Or, they can be "nonsymmetrical," using parallel impedances to balance series impedances, or even capacitances balancing out inductances.

• AC bridge circuits often have more than one adjustment, since both impedance magnitude and phase angle must be properly matched to balance.

• Some impedance bridge circuits are frequency-sensitive while others are not. The frequency-sensitive types may be used as frequency measurement devices if all component values are accurately known.

• A Wagner earth or Wagner ground is a voltage divider circuit added to AC bridges to help reduce errors due to stray capacitance coupling the null detector to ground.
12.06: AC Instrumentation Transducers
Just as devices have been made to measure certain physical quantities and repeat that information in the form of DC electrical signals (thermocouples, strain gauges, pH probes, etc.), special devices have been made that do the same with AC.

It is often necessary to be able to detect and transmit the physical position of mechanical parts via electrical signals. This is especially true in the fields of automated machine tool control and robotics. A simple and easy way to do this is with a potentiometer: (Figure below)

Potentiometer tap voltage indicates position of an object slaved to the shaft.

However, potentiometers have their own unique problems. For one, they rely on physical contact between the "wiper" and the resistance strip, which means they suffer the effects of physical wear over time. As potentiometers wear, their proportional output versus shaft position becomes less and less certain. You might have already experienced this effect when adjusting the volume control on an old radio: when twisting the knob, you might hear "scratching" sounds coming out of the speakers. Those noises are the result of poor wiper contact in the volume control potentiometer.

Also, this physical contact between wiper and strip creates the possibility of arcing (sparking) between the two as the wiper is moved. With most potentiometer circuits, the current is so low that wiper arcing is negligible, but it is a possibility to be considered. If the potentiometer is to be operated in an environment where combustible vapor or dust is present, this potential for arcing translates into a potential for an explosion!

Using AC instead of DC, we are able to completely avoid sliding contact between parts if we use a variable transformer instead of a potentiometer. Devices made for this purpose are called LVDT's, which stands for Linear Variable Differential Transformers. The design of an LVDT looks like this: (Figure below)

AC output of linear variable differential transformer (LVDT) indicates core position.

Obviously, this device is a transformer: it has a single primary winding powered by an external source of AC voltage, and two secondary windings connected in series-bucking fashion. It is variable because the core is free to move between the windings. It is differential because of the way the two secondary windings are connected. Being arranged to oppose each other (180° out of phase) means that the output of this device will be the difference between the voltage output of the two secondary windings. When the core is centered and both windings are outputting the same voltage, the net result at the output terminals will be zero volts. It is called linear because the core's freedom of motion is straight-line.

The AC voltage output by an LVDT indicates the position of the movable core. Zero volts means that the core is centered. The further away the core is from center position, the greater percentage of input ("excitation") voltage will be seen at the output. The phase of the output voltage relative to the excitation voltage indicates which direction from center the core is offset.

The primary advantage of an LVDT over a potentiometer for position sensing is the absence of physical contact between the moving and stationary parts. The core does not contact the wire windings, but slides in and out within a nonconducting tube. Thus, the LVDT does not "wear" like a potentiometer, nor is there the possibility of creating an arc.
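The behavior just described, output amplitude proportional to core displacement with phase indicating direction, can be modeled very simply. A minimal sketch follows; the sensitivity constant, excitation values, and carrier frequency are invented for illustration:

import math

def lvdt_output(displacement, excitation_peak=10.0, sensitivity=2.0):
    """Return a function of time giving LVDT output voltage.
    displacement: signed core offset from center (e.g., in mm).
    Output amplitude grows with |displacement|; a negative displacement
    flips the output 180 degrees relative to the excitation phase."""
    amplitude = sensitivity * displacement * excitation_peak
    return lambda t, f=1000.0: amplitude * math.sin(2 * math.pi * f * t)

centered = lvdt_output(0.0)     # zero volts at all times: core centered
plus_1mm = lvdt_output(1.0)     # in phase with the excitation
minus_1mm = lvdt_output(-1.0)   # same amplitude, 180 degrees out of phase

print(plus_1mm(0.00025), minus_1mm(0.00025))  # equal magnitude, opposite sign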
Excitation of the LVDT is typically 10 volts RMS or less, at frequencies ranging from power line to the high audio (20 kHz) range. One potential disadvantage of the LVDT is its response time, which is mostly dependent on the frequency of the AC voltage source. If very quick response times are desired, the frequency must be higher to allow whatever voltage-sensing circuits enough cycles of AC to determine voltage level as the core is moved. To illustrate the potential problem here, imagine this exaggerated scenario: an LVDT powered by a 60 Hz voltage source, with the core being moved in and out hundreds of times per second. The output of this LVDT wouldn't even look like a sine wave, because the core would be moved throughout its range of motion before the AC source voltage could complete a single cycle! It would be almost impossible to determine instantaneous core position if it moves faster than the instantaneous source voltage does.

A variation on the LVDT is the RVDT, or Rotary Variable Differential Transformer. This device works on almost the same principle, except that the core revolves on a shaft instead of moving in a straight line. RVDT's can be constructed for limited rotation or for full-circle (360°) motion.

Continuing with this principle, we have what is known as a Synchro or Selsyn, which is a device constructed a lot like a wound-rotor polyphase AC motor or generator. The rotor is free to revolve a full 360°, just like a motor. On the rotor is a single winding connected to a source of AC voltage, much like the primary winding of an LVDT. The stator windings are usually in the form of a three-phase Y, although synchros with more than three phases have been built. (Figure below) A device with a two-phase stator is known as a resolver. A resolver produces sine and cosine outputs which indicate shaft position.

A synchro is wound with a three-phase stator winding, and a rotating field; a resolver has a two-phase stator.

Voltages induced in the stator windings from the rotor's AC excitation are not phase-shifted by 120° as in a real three-phase generator. If the rotor were energized with DC current rather than AC and the shaft spun continuously, then the voltages would be true three-phase. But this is not how a synchro is designed to be operated. Rather, this is a position-sensing device much like an RVDT, except that its output signal is much more definite. With the rotor energized by AC, the stator winding voltages will be proportional in magnitude to the angular position of the rotor, phase either 0° or 180° shifted, like a regular LVDT or RVDT. You could think of it as a transformer with one primary winding and three secondary windings, each secondary winding oriented at a unique angle. As the rotor is slowly turned, each winding in turn will line up directly with the rotor, producing full voltage, while the other windings will produce something less than full voltage.
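Given the resolver's two stator outputs, one proportional to the sine of the shaft angle and the other to the cosine, the angle itself can be recovered with a two-argument arctangent. A minimal sketch (the signal amplitudes are illustrative):

import math

def resolver_angle(sine_out, cosine_out):
    """Recover shaft angle (degrees, 0-360) from demodulated
    sine and cosine stator amplitudes of equal scale."""
    angle = math.degrees(math.atan2(sine_out, cosine_out))
    return angle % 360.0

# Simulate a shaft at 30 degrees with a 1 V reference amplitude:
shaft = math.radians(30.0)
sine_out = math.sin(shaft)
cosine_out = math.cos(shaft)
print(resolver_angle(sine_out, cosine_out))  # ~30.0 degrees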
Synchros are often used in pairs. With their rotors connected in parallel and energized by the same AC voltage source, their shafts will match position to a high degree of accuracy: (Figure below)

Synchro shafts are slaved to each other. Rotating one moves the other.

Such "transmitter/receiver" pairs have been used on ships to relay rudder position, or to relay navigational gyro position over fairly long distances. The only difference between the "transmitter" and the "receiver" is which one gets turned by an outside force. The "receiver" can just as easily be used as the "transmitter" by forcing its shaft to turn and letting the synchro on the left match position.

If the receiver's rotor is left unpowered, it will act as a position-error detector, generating an AC voltage at the rotor if the shaft is anything other than 90° or 270° shifted from the shaft position of the transmitter. The receiver rotor will no longer generate any torque and consequently will no longer automatically match position with the transmitter's: (Figure below)

AC voltmeter registers voltage if the receiver rotor is not rotated exactly 90 or 270 degrees from the transmitter rotor.

This can be thought of almost as a sort of bridge circuit that achieves balance only if the receiver shaft is brought to one of two (matching) positions with the transmitter shaft.

One rather ingenious application of the synchro is in the creation of a phase-shifting device, provided that the stator is energized by three-phase AC: (Figure below)

Full rotation of the rotor will smoothly shift the phase from 0° all the way to 360° (back to 0°).

As the synchro's rotor is turned, the rotor coil will progressively align with each stator coil, their respective magnetic fields being 120° phase-shifted from one another. In between those positions, these phase-shifted fields will mix to produce a rotor voltage somewhere between 0°, 120°, or 240° shift. The practical result is a device capable of providing an infinitely variable-phase AC voltage with the twist of a knob (attached to the rotor shaft).

A synchro or a resolver may measure linear motion if geared with a rack and pinion mechanism. A linear movement of a few inches (or cm) resulting in multiple revolutions of the synchro (resolver) generates a train of sinewaves.

An Inductosyn® is a linear version of the resolver. It outputs signals like a resolver, though it bears little physical resemblance to one. The Inductosyn consists of two parts: a fixed serpentine winding having a 0.1 in or 2 mm pitch, and a movable winding known as a slider. (Figure below) The slider has a pair of windings having the same pitch as the fixed winding. The slider windings are offset by a quarter pitch so both sine and cosine waves are produced by movement. One slider winding is adequate for counting pulses, but provides no direction information. The 2-phase windings provide direction information in the phasing of the sine and cosine waves. Movement by one pitch produces a cycle of sine and cosine waves; multiple pitches produce a train of waves.

Inductosyn: (a) Fixed serpentine winding, (b) movable slider 2-phase windings. Adapted from Figure 6.16

When we say sine and cosine waves are produced as a function of linear movement, we really mean a high frequency carrier is amplitude modulated as the slider moves. The two slider AC signals must be measured to determine position within a pitch, the fine position. How many pitches has the slider moved? The sine and cosine signals' relationship does not reveal that. However, the number of pitches (number of waves) may be counted from a known starting point, yielding coarse position. This is an incremental encoder. If absolute position must be known regardless of the starting point, an auxiliary resolver geared for one revolution per length gives a coarse position. This constitutes an absolute encoder.

A linear Inductosyn has a transformer ratio of 100:1. Compare this to the 1:1 ratio for a resolver. A few volts AC excitation into an Inductosyn yields a few millivolts out.
This low signal level is converted to a 12-bit digital format by a resolver-to-digital converter (RDC). Resolution of 25 microinches is achievable.

There is also a rotary version of the Inductosyn having 360 pattern pitches per revolution. When used with a 12-bit resolver-to-digital converter, better than 1 arc second resolution is achievable. This is an incremental encoder. Counting of pitches from a known starting point is necessary to determine absolute position. Alternatively, a resolver may determine coarse absolute position.

So far the transducers discussed have all been of the inductive variety. However, it is possible to make transducers which operate on variable capacitance as well, AC being used to sense the change in capacitance and generate a variable output voltage.

Remember that the capacitance between two conductive surfaces varies with three major factors: the overlapping area of those two surfaces, the distance between them, and the dielectric constant of the material in between the surfaces. If two out of three of these variables can be fixed (stabilized) and the third allowed to vary, then any measurement of capacitance between the surfaces will be solely indicative of changes in that third variable.
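The parallel-plate relationship behind such sensors is C = ε0·εr·A/d. A minimal sketch showing how capacitance responds when just one variable (plate spacing) is allowed to change; all values are arbitrary examples:

EPSILON_0 = 8.854e-12  # F/m, permittivity of free space

def plate_capacitance(area_m2, distance_m, relative_permittivity=1.0):
    """Capacitance of an ideal parallel-plate capacitor, in farads."""
    return EPSILON_0 * relative_permittivity * area_m2 / distance_m

# Fix the area and dielectric; vary the spacing as a displacement sensor would:
area = 1e-4  # 1 square centimeter
for gap_mm in (0.5, 1.0, 2.0):
    c = plate_capacitance(area, gap_mm * 1e-3)
    print(f"{gap_mm} mm gap: {c * 1e12:.2f} pF")
# Capacitance halves each time the gap doubles: ~1.77, ~0.89, ~0.44 pF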
Medical researchers have long made use of capacitive sensing to detect physiological changes in living bodies. As early as 1907, a German researcher named H. Cremer placed two metal plates on either side of a beating frog heart and measured the capacitance changes resulting from the heart alternately filling and emptying itself of blood. Similar measurements have been performed on human beings with metal plates placed on the chest and back, recording respiratory and cardiac action by means of capacitance changes. For more precise capacitive measurements of organ activity, metal probes have been inserted into organs (especially the heart) on the tips of catheter tubes, capacitance being measured between the metal probe and the body of the subject. With a sufficiently high AC excitation frequency and sensitive enough voltage detector, not just the pumping action but also the sounds of the active heart may be readily interpreted.

Like inductive transducers, capacitive transducers can also be made to be self-contained units, unlike the direct physiological examples described above. Some transducers work by making one of the capacitor plates movable, either in such a way as to vary the overlapping area or the distance between the plates. Other transducers work by moving a dielectric material in and out between two fixed plates: (Figure below)

Variable capacitive transducer varies: (a) area of overlap, (b) distance between plates, (c) amount of dielectric between plates.

Transducers with greater sensitivity and immunity to changes in other variables can be obtained by way of differential design, much like the concept behind the LVDT (Linear Variable Differential Transformer). Here are a few examples of differential capacitive transducers: (Figure below)

Differential capacitive transducer varies capacitance ratio by changing: (a) area of overlap, (b) distance between plates, (c) dielectric between plates.

As you can see, all of the differential devices shown in the above illustration have three wire connections rather than two: one wire for each of the "end" plates and one for the "common" plate. As the capacitance between one of the "end" plates and the "common" plate changes, the capacitance between the other "end" plate and the "common" plate changes in the opposite direction. This kind of transducer lends itself very well to implementation in a bridge circuit: (Figure below)

Differential capacitive transducer bridge measurement circuit.

Capacitive transducers provide relatively small capacitances for a measurement circuit to operate with, typically in the picofarad range. Because of this, high power supply frequencies (in the megahertz range!) are usually required to reduce these capacitive reactances to reasonable levels. Given the small capacitances provided by typical capacitive transducers, stray capacitances have the potential of being major sources of measurement error. Good conductor shielding is essential for reliable and accurate capacitive transducer circuitry!

The bridge circuit is not the only way to effectively interpret the differential capacitance output of such a transducer, but it is one of the simplest to implement and understand. As with the LVDT, the voltage output of the bridge is proportional to the displacement of the transducer action from its center position, and the direction of offset will be indicated by phase shift. This kind of bridge circuit is similar in function to the kind used with strain gauges: it is not intended to be in a "balanced" condition all the time, but rather the degree of imbalance represents the magnitude of the quantity being measured.

An interesting alternative to the bridge circuit for interpreting differential capacitance is the twin-T. It requires the use of diodes, those "one-way valves" for electric current mentioned earlier in the chapter: (Figure below)

Differential capacitive transducer "Twin-T" measurement circuit.

This circuit might be better understood if re-drawn to resemble more of a bridge configuration: (Figure below)

Differential capacitive transducer "Twin-T" measurement circuit redrawn as a bridge. Output is across Rload.

Capacitor C1 is charged by the AC voltage source during every positive half-cycle (positive as measured in reference to the ground point), while C2 is charged during every negative half-cycle. While one capacitor is being charged, the other capacitor discharges (at a slower rate than it was charged) through the three-resistor network. As a consequence, C1 maintains a positive DC voltage with respect to ground, and C2 a negative DC voltage with respect to ground.

If the capacitive transducer is displaced from center position, one capacitor will increase in capacitance while the other will decrease. This has little effect on the peak voltage charge of each capacitor, as there is negligible resistance in the charging current path from source to capacitor, resulting in a very short time constant (τ). However, when it comes time to discharge through the resistors, the capacitor with the greater capacitance value will hold its charge longer, resulting in a greater average DC voltage over time than the lesser-value capacitor.

The load resistor (Rload), connected at one end to the point between the two equal-value resistors (R) and at the other end to ground, will drop no DC voltage if the two capacitors' DC voltage charges are equal in magnitude. If, on the other hand, one capacitor maintains a greater DC voltage charge than the other due to a difference in capacitance, the load resistor will drop a voltage proportional to the difference between these voltages.
Thus, differential capacitance is translated into a DC voltage across the load resistor. Across the load resistor, there is both AC and DC voltage present, with only the DC voltage being representative of the difference in capacitance. If desired, a low-pass filter may be added to the output of this circuit to block the AC, leaving only a DC signal to be interpreted by measurement circuitry: (Figure below)

Addition of low-pass filter to "twin-T" feeds pure DC to measurement indicator.

As a measurement circuit for differential capacitive sensors, the twin-T configuration enjoys many advantages over the standard bridge configuration. First and foremost, transducer displacement is indicated by a simple DC voltage, not an AC voltage whose magnitude and phase must be interpreted to tell which capacitance is greater. Furthermore, given the proper component values and power supply output, this DC output signal may be strong enough to directly drive an electromechanical meter movement, eliminating the need for an amplifier circuit.

Another important advantage is that all important circuit elements have one terminal directly connected to ground: the source, the load resistor, and both capacitors are all ground-referenced. This helps minimize the ill effects of stray capacitance commonly plaguing bridge measurement circuits, likewise eliminating the need for compensatory measures such as the Wagner earth.

This circuit is also easy to specify parts for. Normally, a measurement circuit incorporating complementary diodes requires the selection of "matched" diodes for good accuracy. Not so with this circuit! So long as the power supply voltage is significantly greater than the deviation in voltage drop between the two diodes, the effects of mismatch are minimal and contribute little to measurement error. Furthermore, supply frequency variations have a relatively low impact on gain (how much output voltage is developed for a given amount of transducer displacement), and square-wave supply voltage works as well as sine-wave, assuming a 50% duty cycle (equal positive and negative half-cycles), of course.
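The charge-and-discharge behavior described above can be approximated with a deliberately crude model: assume each capacitor charges instantly to the source peak once per cycle, then decays exponentially through the resistor network for the rest of the period. This sketch ignores diode drops and charging resistance entirely, and every component value is invented for illustration; it only demonstrates that unequal capacitances produce a net DC output of the correct sign:

import math

def average_decay_voltage(v_peak, r, c, period):
    """Average voltage of a capacitor charged instantly to v_peak once per
    cycle, then discharging through r for the rest of the period.
    (A deliberately simplified model of one half of the twin-T.)"""
    tau = r * c
    return v_peak * (tau / period) * (1.0 - math.exp(-period / tau))

def twin_t_output(c1, c2, v_peak=10.0, r=100e3, freq=100e3):
    """Approximate DC output: difference of the two capacitors' average
    voltages (C1 held positive, C2 negative, per the text)."""
    period = 1.0 / freq
    return (average_decay_voltage(v_peak, r, c1, period)
            - average_decay_voltage(v_peak, r, c2, period))

print(twin_t_output(100e-12, 100e-12))  # ~0 V: transducer centered
print(twin_t_output(110e-12,  90e-12))  # positive DC: displaced one way
print(twin_t_output( 90e-12, 110e-12))  # equal negative DC: displaced the other way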
13.01: Introduction to AC Motors
After the introduction of the DC electrical distribution system by Edison in the United States, a gradual transition to the more economical AC system commenced. Lighting worked as well on AC as on DC. Transmission of electrical energy covered longer distances at lower loss with alternating current. However, motors were a problem with alternating current. Initially, AC motors were constructed like DC motors, and numerous problems were encountered due to changing magnetic fields, as compared to the static fields in DC motor field coils.

AC electric motor family diagram.

Charles P. Steinmetz contributed to solving these problems with his investigation of hysteresis losses in iron armatures. Nikola Tesla envisioned an entirely new type of motor when he visualized a spinning turbine, not spun by water or steam, but by a rotating magnetic field. His new type of motor, the AC induction motor, is the workhorse of industry to this day. Its ruggedness and simplicity (Figure above) make for long life, high reliability, and low maintenance. Yet small brushed AC motors, similar to the DC variety, persist in small appliances along with small Tesla induction motors. Above one horsepower (750 W), the Tesla motor reigns supreme.

Modern solid state electronic circuits drive brushless DC motors with AC waveforms generated from a DC source. The brushless DC motor, actually an AC motor, is replacing the conventional brushed DC motor in many applications. And the stepper motor, a digital version of the motor, is driven by alternating current square waves, again generated by solid state circuitry. The figure above shows the family tree of the AC motors described in this chapter.

Cruise ships and other large vessels replace reduction geared drive shafts with large multi-megawatt generators and motors. Such has been the case with diesel-electric locomotives on a smaller scale for many years.

Motor system level diagram.

At the system level, (Figure above) a motor takes in electrical energy in terms of a potential difference and a current flow, converting it to mechanical work. Alas, electric motors are not 100% efficient. Some of the electric energy is lost to heat, another form of energy, due to I²R losses in the motor windings. The heat is an undesired byproduct of the conversion. It must be removed from the motor and may adversely affect longevity. Thus, one goal is to maximize motor efficiency, reducing the heat loss. AC motors also have some losses not encountered by DC motors: hysteresis and eddy currents.

Hysteresis and Eddy Current

Early designers of AC motors encountered problems traced to losses unique to alternating current magnetics. These problems were encountered when adapting DC motors to AC operation. Though few AC motors today bear any resemblance to DC motors, these problems had to be solved before AC motors of any type could be properly designed.

Both rotor and stator cores of AC motors are composed of a stack of insulated laminations. The laminations are coated with insulating varnish before stacking and bolting into the final form. Eddy currents are minimized by breaking the potential conductive loop into smaller, less lossy segments. (Figure below) The current loops look like shorted transformer secondary turns. The thin isolated laminations break these loops. Also, the silicon (a semiconductor) added to the alloy used in the laminations increases electrical resistance, which decreases the magnitude of eddy currents.

Eddy currents in iron cores.
If the laminations are made of silicon alloy grain oriented steel, hysteresis losses are minimized. Magnetic hysteresis is a lagging of the magnetic field strength behind the magnetizing force. If a soft iron nail is temporarily magnetized by a solenoid, one would expect the nail to lose the magnetic field once the solenoid is de-energized. However, a small amount of residual magnetization, Br, remains due to hysteresis. (Figure below) An alternating current has to expend energy, -Hc, the coercive force, in overcoming this residual magnetization before it can bring the core flux back to zero, let alone magnetize it in the opposite direction. Hysteresis loss is encountered each time the polarity of the AC reverses. The loss is proportional to the area enclosed by the hysteresis loop on the B-H curve. “Soft” iron alloys have lower losses than “hard” high carbon steel alloys. Silicon grain oriented steel, 4% silicon, rolled to preferentially orient the grain or crystalline structure, has still lower losses.

Hysteresis curves for low and high loss alloys.

Once Steinmetz’s laws of hysteresis could predict iron core losses, it was possible to design AC motors which performed as designed. This was akin to being able to design a bridge ahead of time that would not collapse once it was actually built. This knowledge of eddy current and hysteresis was first applied to building AC commutator motors similar to their DC counterparts. Today this is but a minor category of AC motors. Others invented new types of AC motors bearing little resemblance to their DC kin.
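Steinmetz's empirical relations give a way to estimate these core losses numerically: hysteresis loss grows roughly as f·Bmax^n (the Steinmetz exponent n is near 1.6 for the steels of that era), while eddy current loss grows as the square of frequency, lamination thickness, and peak flux density. The following Python sketch shows how such an estimate might be tabulated; the coefficients kh, ke, and the exponent n are hypothetical placeholders, not measured values for any real alloy.

# Core-loss estimate from the classic empirical forms:
#   hysteresis: Ph = kh * f * Bmax**n        (Steinmetz exponent n ~ 1.6)
#   eddy:       Pe = ke * (f * t * Bmax)**2  (t = lamination thickness)
# kh, ke, and n below are hypothetical placeholders, not measured values.

def core_loss(f_hz, b_max_t, lam_thickness_m, kh=120.0, ke=0.8e6, n=1.6):
    """Return (hysteresis, eddy) loss in watts per cubic meter."""
    p_hyst = kh * f_hz * b_max_t ** n
    p_eddy = ke * (f_hz * lam_thickness_m * b_max_t) ** 2
    return p_hyst, p_eddy

for t in (0.5e-3, 0.35e-3):  # thinner laminations cut eddy loss sharply
    ph, pe = core_loss(f_hz=60.0, b_max_t=1.2, lam_thickness_m=t)
    print(f"t={t*1e3:.2f} mm  hysteresis={ph:.0f} W/m^3  eddy={pe:.0f} W/m^3")

Note how the eddy term, going as the square of lamination thickness, rewards thin laminations, which is exactly why the cores are stacked rather than solid.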
13.02: Synchronous Motors
Single Phase Synchronous Motors

Single phase synchronous motors are available in small sizes for applications requiring precise timing, such as timekeeping (clocks) and tape players. Though battery powered quartz regulated clocks are widely available, the AC line operated variety has better long term accuracy, over a period of months. This is due to power plant operators purposely maintaining the long term accuracy of the frequency of the AC distribution system. If it falls behind by a few cycles, they will make up the lost cycles of AC so that clocks lose no time.

Large vs. Small Synchronous Motors

Above 10 horsepower (7.5 kW), the higher efficiency and leading power factor make large synchronous motors useful in industry. Large synchronous motors are a few percent more efficient than the more common induction motors, though the synchronous motor is more complex.

Since motors and generators are similar in construction, it should be possible to use a generator as a motor and, conversely, to use a motor as a generator. A synchronous motor is similar to an alternator with a rotating field. The figure below shows small alternators with a permanent magnet rotating field. This figure could either be two paralleled and synchronized alternators driven by mechanical energy sources, or an alternator driving a synchronous motor. Or, it could be two motors, if an external power source were connected. The point is that in either case the rotors must run at the same nominal frequency, and be in phase with each other. That is, they must be synchronized. The procedure for synchronizing two alternators is to (1) open the switch, (2) drive both alternators at the same rotational rate, (3) advance or retard the phase of one unit until both AC outputs are in phase, (4) close the switch before they drift out of phase. Once synchronized, the alternators will be locked to each other, requiring considerable torque to break one unit loose (out of synchronization) from the other.

Synchronous motor running in step with alternator.

Accounting for Torque with Synchronous Motors

If more torque in the direction of rotation is applied to the rotor of one of the above rotating alternators, the angle of the rotor will advance (opposite of (3)) with respect to the magnetic field in the stator coils while still synchronized, and the rotor will deliver energy to the AC line like an alternator. The rotor will also be advanced with respect to the rotor in the other alternator. If a load such as a brake is applied to one of the above units, the angle of the rotor will lag the stator field as at (3), extracting energy from the AC line, like a motor. If excessive torque or drag is applied, the rotor will exceed the maximum torque angle, advancing or lagging so much that synchronization is lost. Torque is developed only while synchronization of the motor is maintained.

Bringing Synchronous Motors up to Speed

In the case of a small synchronous motor in place of the alternator (Figure above right), it is not necessary to go through the elaborate synchronization procedure for alternators. However, the synchronous motor is not self-starting and must still be brought up to the approximate alternator electrical speed before it will lock (synchronize) to the generator rotational rate. Once up to speed, the synchronous motor will maintain synchronism with the AC power source and develop torque.

Sinewave drives synchronous motor.
Assuming that the motor is up to synchronous speed, as the sine wave changes to positive in Figure above (1), the lower north coil pushes the north rotor pole, while the upper south coil attracts that rotor north pole. In a similar manner the rotor south pole is repelled by the upper south coil and attracted to the lower north coil. By the time that the sine wave reaches a peak at (2), the torque holding the north pole of the rotor up is at a maximum. This torque decreases as the sine wave decreases to zero at (3), with the torque at a minimum.

As the sine wave changes to negative between (3) and (4), the lower south coil pushes the south rotor pole, while attracting the rotor north pole. In a similar manner, the rotor north pole is repelled by the upper north coil and attracted to the lower south coil. At (4) the sinewave reaches a negative peak with holding torque again at a maximum. As the sine wave changes from negative through zero to positive, the process repeats for a new cycle of the sine wave.

Note, the above figure illustrates the rotor position for a no-load condition (α=0o). In actual practice, loading the rotor will cause the rotor to lag the positions shown by angle α. This angle increases with loading until the maximum motor torque is reached at α=90o electrical. Synchronization and torque are lost beyond this angle.

The current in the coils of a single phase synchronous motor pulsates while alternating polarity. If the permanent magnet rotor speed is close to the frequency of this alternation, it synchronizes to this alternation. Since the coil field pulsates and does not rotate, it is necessary to bring the permanent magnet rotor up to speed with an auxiliary motor. This is a small induction motor similar to those in the next section.

Addition of field poles decreases speed. A 2-pole (pair of N-S poles) alternator will generate a 60 Hz sine wave when rotated at 3600 rpm (revolutions per minute). The 3600 rpm corresponds to 60 revolutions per second. A similar 2-pole permanent magnet synchronous motor will also rotate at 3600 rpm. A lower speed motor may be constructed by adding more pole pairs. A 4-pole motor would rotate at 1800 rpm, a 12-pole motor at 600 rpm. The style of construction shown (Figure above) is for illustration. Higher efficiency, higher torque, multi-pole stator synchronous motors actually have multiple poles in the rotor.

One-winding 12-pole synchronous motor.

Rather than wind 12 coils for a 12-pole motor, wind a single coil with twelve interdigitated steel pole pieces as shown in Figure above. Though the polarity of the coil alternates due to the applied AC, assume that the top is temporarily north, the bottom south. Pole pieces route the south flux from the bottom and outside of the coil to the top. These 6 souths are interleaved with 6 north tabs bent up from the top of the steel pole piece of the coil. Thus, a permanent magnet rotor bar will encounter 6 pole pairs corresponding to 6 cycles of AC in one physical rotation of the bar magnet. The rotation speed will be 1/6 of the electrical speed of the AC. Rotor speed will be 1/6 of that experienced with a 2-pole synchronous motor. Example: 60 Hz would rotate a 2-pole motor at 3600 rpm, or a 12-pole motor at 600 rpm.

Reprinted by permission of Westclox History at www.clockHistory.com

The stator (Figure above) shows a 12-pole Westclox synchronous clock motor. Construction is similar to the previous figure with a single coil. The one coil style of construction is economical for low torque motors.
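All of the pole count and speed figures above, including the Westclox clock motor, follow from the standard synchronous speed relation Ns = 120·f/P rpm, where P is the number of poles. A quick Python check of the values quoted in this section:

def sync_speed_rpm(f_hz, poles):
    """Synchronous speed in rpm: Ns = 120 * f / P."""
    return 120.0 * f_hz / poles

for poles in (2, 4, 12):
    print(f"{poles:2d}-pole at 60 Hz: {sync_speed_rpm(60, poles):6.0f} rpm")
# 2-pole: 3600 rpm, 4-pole: 1800 rpm, 12-pole: 600 rpm, matching the text.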
This 600 rpm motor drives reduction gears moving clock hands. If the Westclox motor were to run at 600 rpm from a 50 Hz power source, how many poles would be required? A 10-pole motor would have 5 pairs of N-S poles. It would rotate at 50/5 = 10 rotations per second, or 600 rpm (10 rotations per second × 60 seconds per minute).

Reprinted by permission of Westclox History at www.clockHistory.com

The rotor (Figure above) consists of a permanent magnet bar and a steel induction motor cup. The synchronous motor bar rotating within the pole tabs keeps accurate time. The induction motor cup fits outside and over the tabs for self-starting. At one time, non-self-starting motors without the induction motor cup were manufactured.

3-Phase Synchronous Motors

A 3-phase synchronous motor as shown in Figure below generates an electrically rotating field in the stator. Such motors are not self-starting if started from a fixed frequency power source such as 50 or 60 Hz as found in an industrial setting. Furthermore, for the multi-horsepower (multi-kilowatt) motors used in industry, the rotor is not a permanent magnet as shown below, but an electromagnet. Large industrial synchronous motors are more efficient than induction motors. They are used when constant speed is required. Having a leading power factor, they can correct the AC line for a lagging power factor.

The three phases of stator excitation add vectorially to produce a single resultant magnetic field which rotates 2f/n times per second, where f is the power line frequency, 50 or 60 Hz for industrial power line operated motors, and n is the number of poles. For rotor speed in rpm, multiply by 60. The 3-phase, 4-pole (per phase) synchronous motor (Figure below) will rotate at 1800 rpm with 60 Hz power or 1500 rpm with 50 Hz power.

If the coils are energized one at a time in the sequence φ-1, φ-2, φ-3, the rotor should point to the corresponding poles in turn. Since the sine waves actually overlap, the resultant field will rotate, not in steps, but smoothly. For example, when the φ-1 and φ-2 sinewaves coincide, the field will be at a peak pointing between these poles. The bar magnet rotor shown is only appropriate for small motors. The rotor with multiple magnet poles (below right) is used in any efficient motor driving a substantial load. These will be slip ring fed electromagnets in large industrial motors. Large industrial synchronous motors are self-started by embedded squirrel cage conductors in the armature, acting like an induction motor. The rotor electromagnet is only energized after the rotor is brought up to near synchronous speed.

Three phase, 4-pole synchronous motor

Small Multi-Phase Synchronous Motors

Small multi-phase synchronous motors (Figure above) may be started by ramping the drive frequency from zero to the final running frequency. The multi-phase drive signals are generated by electronic circuits, and will be square waves in all but the most demanding applications. Such motors are known as brushless DC motors. True synchronous motors are driven by sine waveforms. Two or three phase drive may be used by supplying the appropriate number of windings in the stator. Only 3-phase is shown above.

Electronic synchronous motor

The block diagram (Figure above) shows the drive electronics associated with a low voltage (12 VDC) synchronous motor. These motors have a position sensor integrated within the motor, which provides a low level signal with a frequency proportional to the speed of rotation of the motor.
The position sensor could be as simple as solid state magnetic field sensors such as Hall effect devices, providing commutation (armature current direction) timing to the drive electronics. The position sensor could also be a high resolution angular sensor such as a resolver, an inductosyn (magnetic encoder), or an optical encoder.

If constant and accurate speed of rotation is required (as for a disk drive), a tachometer and phase locked loop may be included. (Figure below) This tachometer signal, a pulse train proportional to motor speed, is fed back to a phase locked loop, which compares the tachometer frequency and phase to a stable reference frequency source such as a crystal oscillator.

Phase locked loop controls synchronous motor speed.

Brushless DC Motor

A motor driven by square waves of current, as provided by simple Hall effect sensors, is known as a brushless DC motor. This type of motor has higher ripple torque, torque variation through a shaft revolution, than a sine wave driven motor. This is not a problem for many applications. Though, we are primarily interested in synchronous motors in this section.

Motor ripple torque and mechanical analog.

Ripple torque, or cogging, is caused by magnetic attraction of the rotor poles to the stator pole pieces. (Figure above) Note that there are no stator coils; this is not even a motor. The PM rotor may be rotated by hand but will encounter attraction to the pole pieces when near them. This is analogous to the mechanical situation. Would ripple torque be a problem for a motor used in a tape player? Yes, we do not want the motor to alternately speed up and slow down as it moves audio tape past a tape playback head. Would ripple torque be a problem for a fan motor? No.

Windings distributed in a belt produce a more sinusoidal field. If a motor is driven by sinewaves of current synchronous with the motor back emf, it is classified as a synchronous AC motor, regardless of whether the drive waveforms are generated by electronic means. A synchronous motor will generate a sinusoidal back emf if the stator magnetic field has a sinusoidal distribution. It will be more sinusoidal if pole windings are distributed in a belt (Figure above) across many slots instead of concentrated on one large pole (as drawn in most of our simplified illustrations). This arrangement cancels many of the stator field odd harmonics. Slots having fewer windings at the edge of the phase winding may share the space with other phases. Winding belts may take on an alternate concentric form as shown in Figure below.

Concentric belts.

For a 2-phase motor, driven by a sinewave, the torque is constant throughout a revolution by the trigonometric identity:

sin²(θ) + cos²(θ) = 1

The generation and synchronization of the drive waveform requires a more precise rotor position indication than provided by the Hall effect sensors used in brushless DC motors. A resolver, or an optical or magnetic encoder, provides resolution of hundreds to thousands of parts (pulses) per revolution. A resolver provides analog angular position signals in the form of signals proportional to the sine and cosine of shaft angle. Encoders provide a digital angular position indication in either serial or parallel format. The sine wave drive may actually be from a PWM, Pulse Width Modulator, a high efficiency method of approximating a sinewave with a digital waveform. (Figure below) Each phase requires drive electronics for this waveform, phase-shifted by the appropriate amount per phase.

PWM approximates a sinewave.
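As a rough illustration of the PWM idea, the short Python sketch below computes a duty cycle proportional to the instantaneous sine value for each carrier period across one electrical cycle. The 16-period carrier ratio and the text-bar display are arbitrary choices for the example, not values from the figure.

import math

# Approximate one cycle of a sinewave with PWM: for each carrier period,
# the duty cycle is proportional to the instantaneous sine value.
CARRIER_PERIODS = 16      # PWM periods per sine cycle (arbitrary choice)

for k in range(CARRIER_PERIODS):
    angle = 2 * math.pi * k / CARRIER_PERIODS
    duty = 0.5 * (1 + math.sin(angle))   # map -1..+1 to 0..100% duty
    bar = "#" * round(duty * 20)
    print(f"{math.degrees(angle):6.1f} deg  duty={duty*100:5.1f}%  {bar}")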
Benefits of Synchronous Motors

Synchronous motor efficiency is higher than that of induction motors. The synchronous motor can also be smaller, especially if high energy permanent magnets are used in the rotor. The advent of modern solid state electronics makes it possible to drive these motors at variable speed. Induction motors are mostly used in railway traction. However, a small synchronous motor, which mounts inside a drive wheel, makes it attractive for such applications. The high temperature superconducting version of this motor is one fifth to one third the weight of a copper wound motor.[1] The largest experimental superconducting synchronous motor is capable of driving a naval destroyer class ship. In all these applications the electronic variable speed drive is essential.

The variable speed drive must also reduce the drive voltage at low speed due to decreased inductive reactance at lower frequency. To develop maximum torque, the rotor needs to lag the stator field direction by 90o. Any more and it loses synchronization; much less results in reduced torque. Thus, the position of the rotor needs to be known accurately. And the position of the rotor with respect to the stator field needs to be calculated and controlled. This type of control is known as vector phase control. It is implemented with a fast microprocessor driving a pulse width modulator for the stator phases.

The stator of a synchronous motor is the same as that of the more popular induction motor. As a result, the industrial grade electronic speed control used with induction motors is also applicable to large industrial synchronous motors.

If the rotor and stator of a conventional rotary synchronous motor are unrolled, a synchronous linear motor results. This type of motor is applied to precise high speed linear positioning.[2]
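The low-speed voltage reduction mentioned above is commonly implemented as a constant volts-per-hertz profile: since inductive reactance XL = 2πfL falls with frequency, the drive voltage is scaled down in proportion to hold the winding current (and flux) roughly constant. A minimal sketch under that assumption, with made-up ratings:

# Constant volts-per-hertz scaling: V/f held constant below rated speed,
# so motor current and magnetic flux stay roughly constant.
# The 230 V / 60 Hz rating here is a hypothetical example.
V_RATED, F_RATED = 230.0, 60.0
V_PER_HZ = V_RATED / F_RATED

def drive_voltage(f_hz):
    """Drive voltage for a commanded frequency (capped at the rating)."""
    return min(V_PER_HZ * f_hz, V_RATED)

for f in (6, 15, 30, 60):
    print(f"{f:3d} Hz -> {drive_voltage(f):5.1f} V")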
13.03: Synchronous Condenser
Synchronous motors load the power line with a leading power factor. This is often useful in cancelling out the more commonly encountered lagging power factor caused by induction motors and other inductive loads. Originally, large industrial synchronous motors came into wide use because of this ability to correct the lagging power factor of induction motors.

This leading power factor can be exaggerated by removing the mechanical load and over-exciting the field of the synchronous motor. Such a device is known as a synchronous condenser. Furthermore, the leading power factor can be adjusted by varying the field excitation. This makes it possible to nearly cancel an arbitrary lagging power factor to unity by paralleling the lagging load with a synchronous motor (a numeric sketch follows at the end of this section). A synchronous condenser is operated in a borderline condition between a motor and a generator with no mechanical load to fulfill this function. It can compensate either a leading or lagging power factor, by absorbing or supplying reactive power to the line. This enhances power line voltage regulation.

Since a synchronous condenser does not supply a torque, the output shaft may be dispensed with and the unit easily enclosed in a gas tight shell. The synchronous condenser may then be filled with hydrogen to aid cooling and reduce windage losses. Since the density of hydrogen is 7% of that of air, the windage loss for a hydrogen filled unit is 7% of that encountered in air. Furthermore, the thermal conductivity of hydrogen is ten times that of air. Thus, heat removal is ten times more efficient. As a result, a hydrogen filled synchronous condenser can be driven harder than an air cooled unit, or it may be physically smaller for a given capacity. There is no explosion hazard as long as the hydrogen concentration is maintained above 70%, typically above 91%.

The efficiency of long power transmission lines may be increased by placing synchronous condensers along the line to compensate lagging currents caused by line inductance. More real power may be transmitted through a fixed size line if the power factor is brought closer to unity by synchronous condensers absorbing reactive power.

The ability of synchronous condensers to absorb or produce reactive power on a transient basis stabilizes the power grid against short circuits and other transient fault conditions. Transient sags and dips of milliseconds duration are stabilized. This supplements the longer response times of quick acting voltage regulation and excitation of generating equipment. The synchronous condenser aids voltage regulation by drawing leading current when the line voltage sags, which increases generator excitation, thereby restoring line voltage. (Figure below) A capacitor bank does not have this ability.

Synchronous condenser improves power line voltage regulation.
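Here is the numeric sketch promised above. The figures are invented for illustration: a 100 kW induction motor load at 0.8 power factor lagging draws 75 kVAR of lagging reactive power; a synchronous condenser supplying that much leading reactive power brings the line power factor to unity.

import math

# Hypothetical plant load: 100 kW of induction motors at 0.8 PF lagging.
p_kw, pf = 100.0, 0.80

s_kva = p_kw / pf                          # apparent power
q_kvar = math.sqrt(s_kva**2 - p_kw**2)     # lagging reactive power

print(f"Load draws {s_kva:.0f} kVA, {q_kvar:.0f} kVAR lagging")

# A synchronous condenser supplying that much leading kVAR cancels it:
q_condenser = q_kvar
q_net = q_kvar - q_condenser
pf_corrected = p_kw / math.hypot(p_kw, q_net)
print(f"With condenser: net Q = {q_net:.0f} kVAR, line PF = {pf_corrected:.2f}")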
13.04: Reluctance Motor

The variable reluctance motor is based on the principle that an unrestrained piece of iron will move to complete a magnetic flux path with minimum reluctance, the magnetic analog of electrical resistance. (Figure below)

Synchronous reluctance

If the rotating field of a large synchronous motor with salient poles is de-energized, it will still develop 10 or 15% of synchronous torque. This is due to variable reluctance throughout a rotor revolution. There is no practical application for a large synchronous reluctance motor. However, it is practical in small sizes.

If slots are cut into the conductorless rotor of an induction motor, corresponding to the stator slots, a synchronous reluctance motor results. It starts like an induction motor but runs with a small amount of synchronous torque. The synchronous torque is due to changes in reluctance of the magnetic path from the stator through the rotor as the slots align. This motor is an inexpensive means of developing a moderate synchronous torque. Low power factor, low pull-out torque, and low efficiency are characteristics of the direct power line driven variable reluctance motor. Such was the status of the variable reluctance motor for a century before the development of semiconductor power control.

Switched reluctance

If an iron rotor with poles, but without any conductors, is fitted to a multi-phase stator, a switched reluctance motor results, capable of synchronizing with the stator field. When a stator coil pole pair is energized, the rotor will move to the lowest magnetic reluctance path. (Figure below) A switched reluctance motor is also known as a variable reluctance motor. The reluctance of the rotor-to-stator flux path varies with the position of the rotor.

Reluctance is a function of rotor position in a variable reluctance motor.

Sequential switching (Figure below) of the stator phases moves the rotor from one position to the next. The magnetic flux seeks the path of least reluctance, the magnetic analog of electric resistance. The rotor and waveforms shown are oversimplified to illustrate operation.

Variable reluctance motor, over-simplified operation.

If one end of each 3-phase winding of the switched reluctance motor is brought out via a common lead wire, we can explain operation as if it were a stepper motor. (Figure above) The other coil connections are successively pulled to ground, one at a time, in a wave drive pattern. This attracts the rotor to the clockwise rotating magnetic field in 60o increments.

Various waveforms may drive variable reluctance motors. (Figure below) Wave drive (a) is simple, requiring only a single ended unipolar switch, that is, one which only switches in one direction. More torque is provided by the bipolar drive (b), but it requires a bipolar switch. The power driver must pull alternately high and low. Waveforms (a & b) are applicable to the stepper motor version of the variable reluctance motor. For smooth, vibration free operation, the 6-step approximation of a sine wave (c) is desirable and easy to generate. Sine wave drive (d) may be generated by a pulse width modulator (PWM), or drawn from the power line.

Variable reluctance motor drive waveforms: (a) unipolar wave drive, (b) bipolar full step (c) sinewave (d) bipolar 6-step.

Doubling the number of stator poles decreases the rotating speed and increases torque. This might eliminate a gear reduction drive. A variable reluctance motor intended to move in discrete steps, stop, and start is a variable reluctance stepper motor, covered in another section. If smooth rotation is the goal, there is an electronic driven version of the switched reluctance motor. Variable reluctance motors or steppers actually use rotors like those in Figure below.

Electronic driven variable reluctance motor

Variable reluctance motors are poor performers when direct power line driven. However, microprocessors and solid state power drives make this motor an economical high performance solution in some high volume applications. Though difficult to control, this motor is easy to spin.
Sequential switching of the field coils creates a rotating magnetic field which drags the irregularly shaped rotor around with it as it seeks out the lowest magnetic reluctance path. The relationship between torque and stator current is highly nonlinear, making it difficult to control.

Electronic driven variable reluctance motor.

An electronic driven variable reluctance motor (Figure below) resembles a brushless DC motor without a permanent magnet rotor. This makes the motor simple and inexpensive. However, this is offset by the cost of the electronic control, which is not nearly as simple as that for a brushless DC motor. While the variable reluctance motor is simple, even more so than an induction motor, it is difficult to control. Electronic control solves this problem and makes it practical to drive the motor well above and below the power line frequency. A variable reluctance motor driven by a servo, an electronic feedback system, controls torque and speed, minimizing ripple torque. (Figure below)

Electronic driven variable reluctance motor.

This is the opposite of the high ripple torque desired in stepper motors. Rather than a stepper, a variable reluctance motor is optimized for continuous high speed rotation with minimum ripple torque. It is necessary to measure the rotor position with a rotary position sensor like an optical or magnetic encoder, or derive this from monitoring the stator back EMF. A microprocessor performs complex calculations for switching the windings at the proper time with solid state devices. This must be done precisely to minimize audible noise and ripple torque. For lowest ripple torque, winding current must be monitored and controlled. The strict drive requirements make this motor only practical for high volume applications like energy efficient vacuum cleaner motors, fan motors, or pump motors. One such vacuum cleaner uses a compact high efficiency electronic driven 100,000 rpm fan motor. The simplicity of the motor compensates for the drive electronics cost. The absence of brushes, commutator, rotor windings, and permanent magnets simplifies motor manufacture. The efficiency of this electronic driven motor can be high. But it requires considerable optimization, using specialized design techniques, which is only justified for large manufacturing volumes.

Advantages
• Simple construction: no brushes, commutator, or permanent magnets, no Cu or Al in the rotor.
• High efficiency and reliability compared to conventional AC or DC motors.
• High starting torque.
• Cost effective compared to brushless DC motors in high volumes.
• Adaptable to very high ambient temperature.
• Low cost accurate speed control possible if volume is high enough.

Disadvantages
• Current versus torque is highly nonlinear.
• Phase switching must be precise to minimize ripple torque.
• Phase current must be controlled to minimize ripple torque.
• Acoustic and electrical noise.
• Not applicable to low volumes due to complex control issues.
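The sequential phase switching described above can be sketched in a few lines of Python. This follows the simplified 3-φ wave drive pattern of the text, one phase energized at a time, stepping the field (and the rotor) around in 60o increments; a real controller would also shape the winding current.

from itertools import cycle

# 3-phase wave drive: energize one stator phase at a time.
# Each switching event advances the rotor one step (60 degrees in the
# simplified figure of the text).
PHASES = ("phi-1", "phi-2", "phi-3")
STEP_DEG = 60

angle = 0
for step, phase in zip(range(6), cycle(PHASES)):
    print(f"step {step}: energize {phase:5s} -> rotor at {angle} deg")
    angle = (angle + STEP_DEG) % 360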
13.05: Stepper Motors
A stepper motor is a “digital” version of the electric motor. The rotor moves in discrete steps as commanded, rather than rotating continuously like a conventional motor. When stopped but energized, a stepper (short for stepper motor) holds its load steady with a holding torque. Widespread acceptance of the stepper motor within the last two decades was driven by the ascendancy of digital electronics. Modern solid state driver electronics was a key to its success. And, microprocessors readily interface to stepper motor driver circuits.

Application-wise, the predecessor of the stepper motor was the servo motor. Today this is a higher cost solution to high performance motion control applications. The expense and complexity of a servomotor is due to the additional system components: position sensor and error amplifier. (Figure below) It is still the way to position heavy loads beyond the grasp of lower power steppers. High acceleration or unusually high accuracy still requires a servo motor. Otherwise, the default is the stepper due to low cost, simple drive electronics, good accuracy, good torque, and moderate speed.

Stepper motor vs servo motor.

A stepper motor positions the read-write heads in a floppy drive. They were once used for the same purpose in hard drives. However, the high speed and accuracy required of modern hard drive head positioning dictates the use of a linear servomotor (voice coil). The servo amplifier is a linear amplifier with some difficult to integrate discrete components. A considerable design effort is required to optimize the servo amplifier gain vs phase response to the mechanical components. The stepper motor drivers are less complex solid state switches, being either “on” or “off”. Thus, a stepper motor controller is less complex and costly than a servo motor controller.

Slo-syn synchronous motors can run from AC line voltage like a single-phase permanent-capacitor induction motor. The capacitor generates a 90o second phase. With the direct line voltage, we have a 2-phase drive. Drive waveforms of bipolar (±) square waves of 2–24 V are more common these days. The bipolar magnetic fields may also be generated from unipolar (one polarity) voltages applied to alternate ends of a center tapped winding. (Figure below) In other words, DC can be switched to the motor so that it sees AC. As the windings are energized in sequence, the rotor synchronizes with the consequent stator magnetic field. Thus, we treat stepper motors as a class of AC synchronous motor.

Unipolar drive of center tapped coil at (b), emulates AC current in single coil at (a).

Characteristics

Stepper motors are rugged and inexpensive because the rotor contains no windings, slip rings, or commutator. The rotor is a cylindrical solid, which may also have either salient poles or fine teeth. More often than not the rotor is a permanent magnet. A permanent magnet rotor can be identified by rotating the unpowered shaft by hand: it will show detent torque, felt as torque pulsations. Stepper motor coils are wound within a laminated stator, except for can stack construction. There may be as few as two winding phases or as many as five. These phases are frequently split into pairs. Thus, a 4-pole stepper motor may have two phases composed of in-line pairs of poles spaced 90o apart. There may also be multiple pole pairs per phase. For example, a 12-pole stepper has 6 pairs of poles, three pairs per phase. Since stepper motors do not necessarily rotate continuously, there is no horsepower rating.
If they do rotate continuously, they do not even approach a sub-fractional hp rated capability. They are truly small low power devices compared to other motors. They have torque ratings to a thousand in-oz (inch-ounces) or ten N-m (newton-meters) for a 4 kg size unit. A small “dime” size stepper has a torque of a hundredth of a newton-meter or a few inch-ounces. Most steppers are a few inches in diameter with a fraction of a N-m or a few in-oz of torque. The torque available is a function of motor speed, load inertia, load torque, and drive electronics as illustrated on the speed vs torque curve. (Figure below) An energized, holding stepper has a relatively high holding torque rating. There is less torque available for a running motor, decreasing to zero at some high speed. This speed is frequently not attainable due to mechanical resonance of the motor load combination.

Stepper speed characteristics.

Stepper motors move one step at a time, the step angle, when the drive waveforms are changed. The step angle is related to motor construction details: number of coils, number of poles, number of teeth. It can be from 90o to 0.75o, corresponding to 4 to 500 steps per revolution. Drive electronics may halve the step angle by moving the rotor in half-steps.

Steppers cannot achieve the speeds on the speed torque curve instantaneously. The maximum start frequency is the highest rate at which a stopped and unloaded stepper can be started. Any load will make this parameter unattainable. In practice, the step rate is ramped up during starting from well below the maximum start frequency. When stopping a stepper motor, the step rate may be decreased before stopping. The maximum torque at which a stepper can start and stop is the pull-in torque. This torque load on the stepper is due to frictional (brake) and inertial (flywheel) loads on the motor shaft. Once the motor is up to speed, pull-out torque is the maximum sustainable torque without losing steps.

There are three types of stepper motors in order of increasing complexity: variable reluctance, permanent magnet, and hybrid. The variable reluctance stepper has a solid soft steel rotor with salient poles. The permanent magnet stepper has a cylindrical permanent magnet rotor. The hybrid stepper has soft steel teeth added to the permanent magnet rotor for a smaller step angle.

Variable reluctance stepper

A variable reluctance stepper motor relies upon magnetic flux seeking the lowest reluctance path through a magnetic circuit. This means that an irregularly shaped soft magnetic rotor will move to complete a magnetic circuit, minimizing the length of any high reluctance air gap. The stator typically has three windings distributed between pole pairs, the rotor four salient poles, yielding a 30o step angle. (Figure below) A de-energized stepper with no detent torque when hand rotated is identifiable as a variable reluctance type stepper.

Three phase and four phase variable reluctance stepper motors.

The drive waveforms for the 3-φ stepper can be seen in the “Reluctance motor” section. The drive for a 4-φ stepper is shown in Figure below. Sequentially switching the stator phases produces a rotating magnetic field which the rotor follows. However, due to the lesser number of rotor poles, the rotor moves less than the stator angle for each step. For a variable reluctance stepper motor, the step angle is given by:

ΘS = 360o/NS, the stator angle, where NS is the number of stator poles
ΘR = 360o/NR, the rotor angle, where NR is the number of rotor poles
ΘST = ΘR - ΘS, the step angle

Stepping sequence for variable reluctance stepper.

In Figure above, moving from φ1 to φ2, etc., the stator magnetic field rotates clockwise.
The rotor moves counterclockwise (CCW). Note what does not happen! The dotted rotor tooth does not move to the next stator tooth. Instead, the φ2 stator field attracts a different tooth in moving the rotor CCW, which is a smaller angle (15o) than the stator angle of 30o. The rotor tooth angle of 45o enters into the calculation by the above equation. The rotor moved CCW to the next rotor tooth at 45o, but it aligns with a stator tooth 30o CW. Thus, the actual step angle is the difference between a rotor angle of 45o and a stator angle of 30o. How far would the stepper rotate if the rotor and stator had the same number of teeth? Zero; no rotation.

Starting at rest with phase φ1 energized, three pulses are required (φ2, φ3, φ4) to align the “dotted” rotor tooth to the next CCW stator tooth, which is 45o away. With 3 pulses per stator tooth, and 8 stator teeth, 24 pulses or steps move the rotor through 360o. By reversing the sequence of pulses, the direction of rotation is reversed (above right).

The direction, step rate, and number of steps are controlled by a stepper motor controller feeding a driver or amplifier. This could be combined into a single circuit board. The controller could be a microprocessor or a specialized integrated circuit. The driver is not a linear amplifier, but a simple on-off switch capable of high enough current to energize the stepper. In principle, the driver could be a relay or even a toggle switch for each phase. In practice, the driver is either discrete transistor switches or an integrated circuit. Both driver and controller may be combined into a single integrated circuit accepting a direction command and step pulse. It outputs current to the proper phases in sequence.

Variable reluctance stepper motor.

Disassemble a reluctance stepper to view the internal components. Otherwise, we show the internal construction of a variable reluctance stepper motor in Figure above. The rotor has protruding poles so that they may be attracted to the rotating stator field as it is switched. An actual motor is much longer than our simplified illustration.

Variable reluctance stepper drives lead screw.

The shaft is frequently fitted with a drive screw. (Figure above) This may move the heads of a floppy drive upon command by the floppy drive controller.

Variable reluctance stepper motors are applied when only a moderate level of torque is required and a coarse step angle is adequate. A screw drive, as used in a floppy disk drive, is such an application. When the controller powers up, it does not know the position of the carriage. However, it can drive the carriage toward the optical interrupter, calibrating the position at which the knife edge cuts the interrupter as “home”. The controller counts step pulses from this position. As long as the load torque does not exceed the motor torque, the controller will know the carriage position.

Summary: variable reluctance stepper motor
• The rotor is a soft iron cylinder with salient (protruding) poles.
• This is the least complex, most inexpensive stepper motor.
• The only type of stepper with no detent torque in hand rotation of a de-energized motor shaft.
• Large step angle.
• A lead screw is often mounted to the shaft for linear stepping motion.

Permanent magnet stepper

A permanent magnet stepper motor has a cylindrical permanent magnet rotor. The stator usually has two windings.
The windings could be center tapped to allow for a unipolar driver circuit, where the polarity of the magnetic field is changed by switching a voltage from one end to the other of the winding. A bipolar drive of alternating polarity is required to power windings without the center tap. A pure permanent magnet stepper usually has a large step angle. Rotation of the shaft of a de-energized motor exhibits detent torque. If the detent angle is large, say 7.5o to 90o, it is likely a permanent magnet stepper rather than a hybrid stepper (next subsection).

Permanent magnet stepper motors require phased alternating currents applied to the two (or more) windings. In practice, this is almost always square waves generated from DC by solid state electronics. Bipolar drive is square waves alternating between (+) and (-) polarities, say, +2.5 V to -2.5 V. Unipolar drive supplies a (+) and (-) alternating magnetic flux to the coils, developed from a pair of positive square waves applied to opposite ends of a center tapped coil. The timing of the bipolar or unipolar wave is wave drive, full step, or half step.

Wave drive

PM wave drive sequence (a) φ1+, (b) φ2+, (c) φ1-, (d) φ2-.

Conceptually, the simplest drive is wave drive. (Figure above) The rotation sequence, left to right, is: positive φ-1 points the rotor north pole up, (+) φ-2 points rotor north right, negative φ-1 attracts rotor north down, (-) φ-2 points rotor north left. The wave drive waveforms below show that only one coil is energized at a time. While simple, this does not produce as much torque as other drive techniques.

Waveforms: bipolar wave drive.

The waveforms (Figure above) are bipolar because both polarities, (+) and (-), drive the stepper. The coil magnetic field reverses because the polarity of the drive current reverses.

Waveforms: unipolar wave drive.

The waveforms (Figure above) are unipolar because only one polarity is required. This simplifies the drive electronics, but requires twice as many drivers. There are twice as many waveforms because a pair of (+) waves is required to produce an alternating magnetic field by application to opposite ends of a center tapped coil. The motor requires alternating magnetic fields. These may be produced by either unipolar or bipolar waves. However, motor coils must have center taps for unipolar drive.

Permanent magnet stepper motors are manufactured with various lead-wire configurations. (Figure below)

Stepper motor wiring diagrams.

The 4-wire motor can only be driven by bipolar waveforms. The 6-wire motor, the most common arrangement, is intended for unipolar drive because of the center taps. Though, it may be driven by bipolar waves if the center taps are ignored. The 5-wire motor can only be driven by unipolar waves, as the common center tap interferes if both windings are energized simultaneously. The 8-wire configuration is rare, but provides maximum flexibility. It may be wired for unipolar drive as for the 6-wire or 5-wire motor. A pair of coils may be connected in series for high voltage, low current bipolar drive, or in parallel for low voltage, high current drive.

A bifilar winding is produced by winding the coils with two wires in parallel, often a red and a green enamelled wire. This method produces exact 1:1 turns ratios for center tapped windings. This winding method is applicable to all but the 4-wire arrangement above.

Full step drive

Full step drive provides more torque than wave drive because both coils are energized at the same time.
This attracts the rotor poles midway between the two field poles. (Figure below)

Full step, bipolar drive.

Full step bipolar drive as shown in Figure above has the same step angle as wave drive. Unipolar drive (not shown) would require a pair of unipolar waveforms for each of the above bipolar waveforms applied to the ends of a center tapped winding. Unipolar drive uses a less complex, less expensive driver circuit. The additional cost of bipolar drive is justified when more torque is required.

Half step drive

The step angle for a given stepper motor geometry is cut in half with half step drive. This corresponds to twice as many step pulses per revolution. (Figure below) Half stepping provides greater resolution in positioning of the motor shaft. For example, half stepping the motor moving the print head across the paper of an inkjet printer would double the dot density.

Half step, bipolar drive.

Half step drive is a combination of wave drive and full step drive, with one winding energized, followed by both windings energized, yielding twice as many steps. The waveforms for half step drive are shown above. The rotor aligns with the field poles as for wave drive and between the poles as for full step drive.

Microstepping is possible with specialized controllers. By varying the currents to the windings sinusoidally, many microsteps can be interpolated between the normal positions.

Construction

The construction of a permanent magnet stepper motor is considerably different from the drawings above. It is desirable to increase the number of poles beyond that illustrated to produce a smaller step angle. It is also desirable to reduce the number of windings, or at least not increase the number of windings, for ease of manufacture.

Permanent magnet stepper motor, 24-pole can-stack construction.

The permanent magnet stepper (Figure above) only has two windings, yet has 24 poles in each of two phases. This style of construction is known as can stack. A phase winding is wrapped with a mild steel shell, with fingers brought to the center. One phase, on a transient basis, will have a north side and a south side. Each side wraps around to the center of the doughnut with twelve interdigitated fingers for a total of 24 poles. These alternating north-south fingers will attract the permanent magnet rotor. If the polarity of the phase were reversed, the rotor would jump 360o/24 = 15o. We would not know in which direction, which is not useful. However, if we energize φ-1 followed by φ-2, the rotor will move 7.5o because φ-2 is offset (rotated) by 7.5o from φ-1. See below for the offset. And, it will rotate in a reproducible direction if the phases are alternated. Application of any of the above waveforms will rotate the permanent magnet rotor.

Note that the rotor is a gray ferrite ceramic cylinder magnetized in the 24-pole pattern shown. This can be viewed with magnet viewer film or iron filings applied to a paper wrapping. Though, the colors will be green for both north and south poles with the film.

(a) External view of can stack, (b) field offset detail.

Can-stack style construction of a PM stepper is distinctive and easy to identify by the stacked “cans”. (Figure above) Note the rotational offset between the two phase sections. This is key to making the rotor follow the switching of the fields between the two phases.

Summary: permanent magnet stepper motor
• The rotor is a permanent magnet, often a ferrite sleeve magnetized with numerous poles.
• Can-stack construction provides numerous poles from a single coil with interleaved fingers of soft iron.
• Large to moderate step angle.
• Often used in computer printers to advance paper.

Hybrid stepper motor

The hybrid stepper motor combines features of both the variable reluctance stepper and the permanent magnet stepper to produce a smaller step angle. The rotor is a cylindrical permanent magnet, magnetized along the axis, with radial soft iron teeth (Figure below). The stator coils are wound on alternating poles with corresponding teeth. There are typically two winding phases distributed between pole pairs. This winding may be center tapped for unipolar drive. The center tap is achieved by a bifilar winding, a pair of wires wound physically in parallel, but wired in series. The north-south poles of a phase swap polarity when the phase drive current is reversed. Bipolar drive is required for un-tapped windings.

Hybrid stepper motor.

Note that the 48 teeth on one rotor section are offset by half a pitch from the other. See rotor pole detail above. This rotor tooth offset is also shown below. Due to this offset, the rotor effectively has 96 interleaved poles of opposite polarity. This offset allows for rotation in 1/96th of a revolution steps by reversing the field polarity of one phase. Two phase windings are common as shown above and below. Though, there could be as many as five phases.

The stator teeth on the 8 poles correspond to the 48 rotor teeth, except for missing teeth in the space between the poles. Thus, one pole of the rotor, say the south pole, may align with the stator in 48 distinct positions. However, the teeth of the south pole are offset from the north teeth by half a tooth. Therefore, the rotor may align with the stator in 96 distinct positions. This half tooth offset shows in the rotor pole detail above, or Figure below.

As if this were not complicated enough, the stator main poles are divided into two phases (φ-1, φ-2). These stator phases are offset from one another by one-quarter of a tooth. This detail is only discernible on the schematic diagrams below. The result is that the rotor moves in steps of a quarter of a tooth when the phases are alternately energized. In other words, the rotor moves in 2×96 = 192 steps per revolution for the above stepper.

The above drawing is representative of an actual hybrid stepper motor. However, we provide a simplified pictorial and schematic representation (Figure below) to illustrate details not obvious above. Note the reduced number of coils and teeth in rotor and stator for simplicity. In the next two figures, we attempt to illustrate the quarter tooth rotation produced by the two stator phases offset by a quarter tooth, and the rotor half tooth offset. The quarter tooth stator offset, in conjunction with drive current timing, also defines the direction of rotation.

Hybrid stepper motor schematic diagram.

Features of hybrid stepper schematic (Figure above)
• The top of the permanent magnet rotor is the south pole, the bottom north.
• The rotor north-south teeth are offset by half a tooth.
• If the φ-1 stator is temporarily energized north top, south bottom.
• The top φ-1 stator teeth align north to rotor top south teeth.
• The bottom φ-1’ stator teeth align south to rotor bottom north teeth.
• Enough torque applied to the shaft to overcome the hold-in torque would move the rotor by one tooth.
• If the polarity of φ-1 were reversed, the rotor would move by one-half tooth, direction unknown.
The alignment would be south stator top to north rotor bottom, north stator bottom to south rotor top.
• The φ-2 stator teeth are not aligned with the rotor teeth when φ-1 is energized. In fact, the φ-2 stator teeth are offset by one-quarter tooth. This will allow for rotation by that amount if φ-1 is de-energized and φ-2 energized. Polarity of the φ-1 and φ-2 drive determines direction of rotation.

Hybrid stepper motor rotation sequence.

Hybrid stepper motor rotation (Figure above)
• Rotor top is permanent magnet south, bottom north. Fields φ-1, φ-2 are switchable: on, off, reverse.
• (a) φ-1=on=north-top, φ-2=off. Align (top to bottom): φ-1 stator-N:rotor-top-S, φ-1’ stator-S:rotor-bottom-N. Start position, rotation=0.
• (b) φ-1=off, φ-2=on. Align (right to left): φ-2 stator-N-right:rotor-top-S, φ-2’ stator-S:rotor-bottom-N. Rotate 1/4 tooth, total rotation=1/4 tooth.
• (c) φ-1=reverse(on), φ-2=off. Align (bottom to top): φ-1 stator-S:rotor-bottom-N, φ-1’ stator-N:rotor-top-S. Rotate 1/4 tooth from last position. Total rotation from start: 1/2 tooth.
• Not shown: φ-1=off, φ-2=reverse(on). Align (left to right): Total rotation: 3/4 tooth.
• Not shown: φ-1=on, φ-2=off (same as (a)). Align (top to bottom): Total rotation: 1 tooth.

An unpowered stepper motor with detent torque is either a permanent magnet stepper or a hybrid stepper. The hybrid stepper will have a small step angle, much less than the 7.5o of permanent magnet steppers. The step angle could be a fraction of a degree, corresponding to a few hundred steps per revolution.

Summary: hybrid stepper motor
• The step angle is smaller than that of variable reluctance or permanent magnet steppers.
• The rotor is a permanent magnet with fine teeth. North and south teeth are offset by half a tooth for a smaller step angle.
• The stator poles have matching fine teeth of the same pitch as the rotor.
• The stator windings are divided into no less than two phases.
• The poles of one stator winding are offset by a quarter tooth for an even smaller step angle.
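A small Python calculator tying together the step angle arithmetic used in this section: the variable reluctance difference formula, and the steps per revolution of the hybrid example. The tooth and pole counts are the ones quoted in the text.

def vr_step_angle(stator_poles, rotor_poles):
    """Variable reluctance stepper: step = rotor angle - stator angle."""
    return 360.0 / rotor_poles - 360.0 / stator_poles

# 3-phase VR stepper from the text: 6 stator poles, 4 rotor poles.
print(f"3-phase VR step angle: {vr_step_angle(6, 4):.1f} deg")   # 30.0

# Hybrid stepper from the text: 48 rotor teeth, 2 phases, quarter-tooth
# steps give 2 x 96 = 192 steps per revolution.
steps_per_rev = 192
print(f"hybrid step angle: {360.0 / steps_per_rev:.3f} deg")     # 1.875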
13.06: Brushless DC Motor
Brushless DC motors were developed from conventional brushed DC motors with the availability of solid state power semiconductors. So, why do we discuss brushless DC motors in a chapter on AC motors? Brushless DC motors are similar to AC synchronous motors. The major difference is that synchronous motors develop a sinusoidal back EMF, as compared to a rectangular, or trapezoidal, back EMF for brushless DC motors. Both have stator created rotating magnetic fields producing torque in a magnetic rotor.

Synchronous motors are usually large multi-kilowatt size, often with electromagnet rotors. True synchronous motors are considered to be single speed, a submultiple of the power line frequency. Brushless DC motors tend to be small, a few watts to tens of watts, with permanent magnet rotors. The speed of a brushless DC motor is not fixed unless driven by a phase locked loop slaved to a reference frequency. The style of construction is either cylindrical or pancake. (Figures below)

Cylindrical construction: (a) outside rotor, (b) inside rotor.

The most usual construction, cylindrical, can take on two forms (Figure above). The most common cylindrical style is with the rotor on the inside, above right. This style of motor is used in hard disk drives. It is also possible to put the rotor on the outside surrounding the stator. Such is the case with brushless DC fan motors, sans the shaft. This style of construction may be short and fat. However, the direction of the magnetic flux is radial with respect to the rotational axis.

Pancake motor construction: (a) single stator, (b) double stator.

High torque pancake motors may have stator coils on both sides of the rotor (Figure above-b). Lower torque applications like floppy disk drive motors suffice with a stator coil on one side of the rotor (Figure above-a). The direction of the magnetic flux is axial, that is, parallel to the axis of rotation.

The commutation function may be performed by various shaft position sensors: optical encoder, magnetic encoder (resolver, synchro, etc.), or Hall effect magnetic sensors. Small inexpensive motors use Hall effect sensors. (Figure below) A Hall effect sensor is a semiconductor device where the electron flow is affected by a magnetic field perpendicular to the direction of current flow. It looks like a four terminal variable resistor network. The voltages at the two outputs are complementary. Application of a magnetic field to the sensor causes a small voltage change at the output. The Hall output may drive a comparator to provide for more stable drive to the power device. Or, it may drive a compound transistor stage if properly biased. More modern Hall effect sensors may contain an integrated amplifier and digital circuitry. This 3-lead device may directly drive the power transistor feeding a phase winding. The sensor must be mounted close to the permanent magnet rotor to sense its position.

Hall effect sensors commutate 3-φ brushless DC motor.

The simple cylindrical 3-φ motor (Figure above) is commutated by a Hall effect device for each of the three stator phases. The changing position of the permanent magnet rotor is sensed by the Hall device as the polarity of the passing rotor pole changes. This Hall signal is amplified so that the stator coils are driven with the proper current. Not shown here, the Hall signals may be processed by combinatorial logic for more efficient drive waveforms.
The above cylindrical motor could drive a hard drive if it were equipped with a phase locked loop (PLL) to maintain constant speed. Similar circuitry could drive the pancake floppy disk drive motor (Figure below). Again, it would need a PLL to maintain constant speed.

Brushless pancake motor

The 3-φ pancake motor (Figure above) has 6 stator poles and 8 rotor poles. The rotor is a flat ferrite ring magnetized with eight axially magnetized alternating poles. We do not show that the rotor is capped by a mild steel plate for mounting to the bearing in the middle of the stator. The steel plate also helps complete the magnetic circuit. The stator poles are also mounted atop a steel plate, helping to close the magnetic circuit. The flat stator coils are trapezoidal to more closely fit the coils, and approximate the rotor poles. The 6 stator coils comprise three winding phases.

If the three stator phases were successively energized, a rotating magnetic field would be generated. The permanent magnet rotor would follow as in the case of a synchronous motor. A two pole rotor would follow this field at the same rotation rate as the rotating field. However, our 8-pole rotor will rotate at a submultiple of this rate due to the extra poles in the rotor.

The brushless DC fan motor (Figure below) has these features:
• The stator has 2 phases distributed between 4 poles.
• There are 4 salient poles with no windings to eliminate zero torque points.
• The rotor has four main drive poles.
• The rotor has 8 poles superimposed to help eliminate zero torque points.
• The Hall effect sensors are spaced at 45o physical.
• The fan housing is placed atop the rotor, which is placed over the stator.

The goal of a brushless fan motor is to minimize the cost of manufacture. This is an incentive to move lower performance products from a 3-φ to a 2-φ configuration. Depending on how it is driven, it may be called a 4-φ motor.

You may recall that conventional DC motors cannot have an even number of armature poles (2, 4, etc.) if they are to be self-starting, 3, 5, and 7 being common. Thus, it is possible for a hypothetical 4-pole motor to come to rest at a torque minimum, where it cannot be started from rest. The addition of the four small salient poles with no windings superimposes a ripple torque upon the torque vs position curve. When this ripple torque is added to the normal energized-torque curve, the result is that the torque minima are partially removed. This makes it possible to start the motor for all possible stopping positions. The addition of eight permanent magnet poles to the normal 4-pole permanent magnet rotor superimposes a small second harmonic ripple torque upon the normal 4-pole ripple torque. This further removes the torque minima. As long as the torque minima do not drop to zero, we should be able to start the motor. The more successful we are in removing the torque minima, the easier the motor starting.

The 2-φ stator requires that the Hall sensors be spaced apart by 90o electrical. If the rotor were a 2-pole rotor, the Hall sensors would be placed 90o physical. Since we have a 4-pole permanent magnet rotor, the sensors must be placed 45o physical to achieve the 90o electrical spacing. Note the Hall spacing above. The majority of the torque is due to the interaction of the inside stator 2-φ coils with the 4-pole section of the rotor. Moreover, the 4-pole section of the rotor must be on the bottom so that the Hall sensors will sense the proper commutation signals.
The 8-pole rotor section is only for improving motor starting.

Brushless DC motor 2-φ push-pull drive.

In the figure above, the 2-φ push-pull drive (also known as 4-φ drive) uses two Hall effect sensors to drive four windings. The sensors are spaced 90o electrical apart, which would be 90o physical for a 2-pole rotor. Since the Hall sensor has two complementary outputs, one sensor provides commutation for two opposing windings.
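A sketch of the push-pull commutation described above, with illustrative winding names W1-W4; the pairing of sensors to windings is an assumption for demonstration, not taken from the figure.

# 2-phase (4-winding) push-pull commutation sketch: each Hall sensor's
# complementary outputs steer one opposing winding pair.
def active_windings(hall_1, hall_2):
    """Each sensor enables one winding of its pair per half-cycle."""
    w_a = 'W1' if hall_1 else 'W3'   # sensor 1 drives the W1/W3 pair
    w_b = 'W2' if hall_2 else 'W4'   # sensor 2 drives the W2/W4 pair
    return (w_a, w_b)

# Stepping the two sensors 90 degrees (electrical) apart walks the
# stator field through four positions per electrical cycle:
for h1, h2 in [(1, 1), (0, 1), (0, 0), (1, 0)]:
    print(h1, h2, active_windings(h1, h2))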
Most AC motors are induction motors. Induction motors are favored due to their ruggedness and simplicity. In fact, 90% of industrial motors are induction motors. Nikola Tesla conceived the basic principles of the polyphase induction motor in 1883, and had a half horsepower (400 watt) model by 1888. Tesla sold the manufacturing rights to George Westinghouse for $65,000. Most large (>1 hp or 1 kW) industrial motors are poly-phase induction motors. By poly-phase, we mean that the stator contains multiple distinct windings per motor pole, driven by corresponding time shifted sine waves. In practice, this is two or three phases. Large industrial motors are 3-phase. While we include numerous illustrations of two-phase motors for simplicity, we must emphasize that nearly all poly-phase motors are three-phase. By induction motor, we mean that the stator windings induce a current flow in the rotor conductors, like a transformer, unlike a brushed DC commutator motor.

AC Induction Motor Construction

An induction motor is composed of a rotor, known as an armature, and a stator containing windings connected to a poly-phase energy source as shown in the figure below. The simple 2-phase induction motor below is similar to the 1/2 horsepower motor which Nikola Tesla introduced in 1888.

Tesla polyphase induction motor.

The stator in the figure above is wound with pairs of coils corresponding to the phases of electrical energy available. The 2-phase induction motor stator above has 2 pairs of coils, one pair for each of the two phases of AC. The individual coils of a pair are connected in series and correspond to the opposite poles of an electromagnet. That is, one coil corresponds to a N-pole, the other to a S-pole until the phase of AC changes polarity. The other pair of coils is oriented 90o in space to the first pair. This pair of coils is connected to AC shifted in time by 90o in the case of a 2-phase motor. In Tesla’s time, the source of the two phases of AC was a 2-phase alternator.

The stator in the figure above has salient, obvious protruding poles, as used on Tesla’s early induction motor. This design is used to this day for sub-fractional horsepower motors (<50 watts). However, for larger motors, less torque pulsation and higher efficiency result if the coils are embedded into slots cut into the stator laminations. (Figure below)

Stator frame showing slots for windings.

The stator laminations are thin insulated rings with slots punched from sheets of electrical grade steel. A stack of these is secured by end screws, which may also hold the end housings.

Stator with (a) 2-φ and (b) 3-φ windings.

In the figure above, the windings for both a two-phase motor and a three-phase motor have been installed in the stator slots. The coils are wound on an external fixture, then worked into the slots. Insulation wedged between the coil periphery and the slot protects against abrasion. Actual stator windings are more complex than the single windings per pole in the figure above. Comparing the 2-φ motor to Tesla’s 2-φ motor with salient poles, the number of coils is the same. In actual large motors, a pole winding is divided into identical coils inserted into many smaller slots than above. This group is called a phase belt. See the figure below. The distributed coils of the phase belt cancel some of the odd harmonics, producing a more sinusoidal magnetic field distribution across the pole. This is shown in the synchronous motor section.
The slots at the edge of the pole may have fewer turns than the other slots. Edge slots may contain windings from two phases. That is, the phase belts overlap. The key to the popularity of the AC induction motor is simplicity, as evidenced by the simple rotor (see the figure below). The rotor consists of a shaft, a steel laminated rotor, and an embedded copper or aluminum squirrel cage, shown at (b) removed from the rotor. As compared to a DC motor armature, there is no commutator. This eliminates the brushes, arcing, sparking, graphite dust, brush adjustment and replacement, and re-machining of the commutator.

Laminated rotor with (a) embedded squirrel cage, (b) conductive cage removed from rotor.

The squirrel cage conductors may be skewed, or twisted, with respect to the shaft. The misalignment with the stator slots reduces torque pulsations. Both rotor and stator cores are composed of a stack of insulated laminations. The laminations are coated with insulating oxide or varnish to minimize eddy current losses. The alloy used in the laminations is selected for low hysteresis losses.

Theory of Operation of Induction Motors

A short explanation of operation is that the stator creates a rotating magnetic field which drags the rotor around. The theory of operation of induction motors is based on a rotating magnetic field. One means of creating a rotating magnetic field is to rotate a permanent magnet, as shown in the figure below. If the moving magnetic lines of flux cut a conductive disk, it will follow the motion of the magnet. The lines of flux cutting the conductor will induce a voltage, and consequent current flow, in the conductive disk. This current flow creates an electromagnet whose polarity opposes the motion of the permanent magnet– Lenz’s Law. The polarity of the electromagnet is such that it pulls against the permanent magnet. The disk follows with a little less speed than the permanent magnet.

Rotating magnetic field produces torque in conductive disk.

The torque developed by the disk is proportional to the number of flux lines cutting the disk and the rate at which it cuts the disk. If the disk were to spin at the same rate as the permanent magnet, there would be no flux cutting the disk, no induced current flow, no electromagnet field, no torque. Thus, the disk speed will always fall behind that of the rotating permanent magnet, so that lines of flux cutting the disk induce a current and create an electromagnetic field in the disk, which follows the permanent magnet. If a load is applied to the disk, slowing it, more torque will be developed as more lines of flux cut the disk. Torque is proportional to slip, the degree to which the disk falls behind the rotating magnet. More slip corresponds to more flux cutting the conductive disk, developing more torque. An analog automotive eddy current speedometer is based on the principle illustrated above. With the disk restrained by a spring, disk and needle deflection is proportional to magnet rotation rate.

A rotating magnetic field is created by two coils placed at right angles to each other, driven by currents which are 90o out of phase. This should not be surprising if you are familiar with oscilloscope Lissajous patterns.

Out of phase (90o) sine waves produce circular Lissajous pattern.

In the figure above, a circular Lissajous is produced by driving the horizontal and vertical oscilloscope inputs with 90o out of phase sine waves. Starting at (a) with maximum “X” and minimum “Y” deflection, the trace moves up and left toward (b).
Between (a) and (b) the two waveforms are equal to 0.707 Vpk at 45o. This point (0.707, 0.707) falls on the radius of the circle between (a) and (b). The trace moves to (b) with minimum “X” and maximum “Y” deflection. With maximum negative “X” and minimum “Y” deflection, the trace moves to (c). Then with minimum “X” and maximum negative “Y”, it moves to (d), and on back to (a), completing one cycle.

X-axis sine and Y-axis cosine trace circle.

The figure above shows the two 90o phase shifted sine waves applied to oscilloscope deflection plates which are at right angles in space. If this were not the case, a one-dimensional line would display. The combination of 90o phased sine waves and right angle deflection results in a two-dimensional pattern– a circle. This circle is traced out by a counterclockwise rotating electron beam. For reference, the figure below shows why in-phase sine waves will not produce a circular pattern. Equal “X” and “Y” deflection moves the illuminated spot from the origin at (a), up to the right (1, 1) at (b), back down left to the origin at (c), down left to (-1, -1) at (d), and back up right to the origin. The line is produced by equal deflections along both axes; y = x is a straight line.

No circular motion from in-phase waveforms.

If a pair of 90o out of phase sine waves produces a circular Lissajous, a similar pair of currents should be able to produce a circular rotating magnetic field. Such is the case for a 2-phase motor. By analogy, three windings placed 120o apart in space, and fed with corresponding 120o phased currents, will also produce a rotating magnetic field.

Rotating magnetic field from 90o phased sinewaves.

As the 90o phased sinewaves, Figure above, progress from points (a) through (d), the magnetic field rotates counterclockwise (figures a-d) as follows:

(a) φ-1 maximum, φ-2 zero
(a’) φ-1 70%, φ-2 70%
(b) φ-1 zero, φ-2 maximum
(c) φ-1 maximum negative, φ-2 zero
(d) φ-1 zero, φ-2 maximum negative

Full Motor Speed and Synchronous Motor Speed

The rotation rate of a stator rotating magnetic field is related to the number of pole pairs per stator phase. The “full speed” figure below has a total of six poles or three pole-pairs and three phases. However, there is but one pole pair per phase– the number we need. The magnetic field will rotate once per sine wave cycle. In the case of 60 Hz power, the field rotates at 60 times per second or 3600 revolutions per minute (rpm). For 50 Hz power, it rotates at 50 rotations per second, or 3000 rpm. The 3600 and 3000 rpm figures are the synchronous speed of the motor. Though the rotor of an induction motor never achieves this speed, it certainly is an upper limit. If we double the number of motor poles, the synchronous speed is cut in half because the magnetic field rotates 180o in space for 360o of electrical sine wave.

Doubling the stator poles halves the synchronous speed.

The synchronous speed is given by:

Ns = 120·f / P

where Ns = synchronous speed in rpm, f = frequency of applied power in Hz, and P = total number of poles per phase (a multiple of 2).

The short explanation of the induction motor is that the rotating magnetic field produced by the stator drags the rotor around with it. The longer, more correct explanation is that the stator’s magnetic field induces an alternating current into the rotor squirrel cage conductors, which constitute a transformer secondary. This induced rotor current in turn creates a magnetic field. The rotating stator magnetic field interacts with this rotor field. The rotor field attempts to align with the rotating stator field. The result is rotation of the squirrel cage rotor.
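The synchronous speed formula above is easy to tabulate. A short calculation (in Python, not part of the original text) for the common cases; note it also answers the 50 Hz question posed later in this section:

def synchronous_speed(f_hz, poles):
    """Ns = 120 * f / P, with P the number of poles per phase."""
    return 120.0 * f_hz / poles

print(synchronous_speed(60, 2))  # 3600.0 rpm
print(synchronous_speed(60, 4))  # 1800.0 rpm
print(synchronous_speed(50, 4))  # 1500.0 rpm (the 50 Hz, 4-pole case)
print(synchronous_speed(50, 2))  # 3000.0 rpm (the 50 Hz, 2-pole case)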
If there were no mechanical motor torque load, no bearing, windage, or other losses, the rotor would rotate at the synchronous speed. However, the slip between the rotor and the synchronous speed stator field develops torque. It is the magnetic flux cutting the rotor conductors as it slips which develops torque. Thus, a loaded motor will slip in proportion to the mechanical load. If the rotor were to run at synchronous speed, there would be no stator flux cutting the rotor, no current induced in the rotor, no torque.

Torque in Induction Motors

When power is first applied to the motor, the rotor is at rest, while the stator magnetic field rotates at the synchronous speed Ns. The stator field is cutting the rotor at the synchronous speed Ns. The current induced in the rotor shorted turns is maximum, as is the frequency of the current, the line frequency. As the rotor speeds up, the rate at which stator flux cuts the rotor is the difference between synchronous speed Ns and actual rotor speed N, or (Ns - N). The ratio of this difference to the synchronous speed is defined as slip:

s = (Ns - N) / Ns

where Ns = synchronous speed and N = rotor speed.

The frequency of the current induced into the rotor conductors is only as high as the line frequency at motor start, decreasing as the rotor approaches synchronous speed. Rotor frequency is given by:

fr = s·f

where s = slip and f = stator power line frequency.

Slip at 100% torque is typically 5% or less in induction motors. Thus, for f = 50 Hz line frequency, the frequency of the induced current in the rotor is fr = 0.05·50 = 2.5 Hz. Why is it so low? The stator magnetic field rotates at 50 Hz. The rotor speed is 5% less. The rotating magnetic field is only cutting the rotor at 2.5 Hz. The 2.5 Hz is the difference between the synchronous speed and the actual rotor speed. If the rotor spins a little faster, at the synchronous speed, no flux will cut the rotor at all, fr = 0.

Torque and speed vs %Slip. %Ns = %Synchronous Speed.

The graph in the figure above shows that starting torque, known as locked rotor torque (LRT), is higher than 100% of the full load torque (FLT), the safe continuous torque rating. The locked rotor torque is about 175% of FLT for the example motor graphed above. Starting current, known as locked rotor current (LRC), is 500% of full load current (FLC), the safe running current. The current is high because this is analogous to a shorted secondary on a transformer. As the rotor starts to rotate, the torque may decrease a bit for certain classes of motors to a value known as the pull up torque. This is the lowest value of torque ever encountered by the starting motor. As the rotor reaches 80% of synchronous speed, torque increases from 175% up to 300% of the full load torque. This breakdown torque is due to the larger than normal 20% slip. The current has decreased only slightly at this point, but will decrease rapidly beyond this point. As the rotor accelerates to within a few percent of synchronous speed, both torque and current will decrease substantially. Slip will be only a few percent during normal operation. For a running motor, any portion of the torque curve below 100% rated torque is normal. The motor load determines the operating point on the torque curve. While the motor torque and current may exceed 100% for a few seconds during starting, continuous operation above 100% can damage the motor. Any motor torque load above the breakdown torque will stall the motor. The torque, slip, and current will approach zero for a “no mechanical torque” load condition. This condition is analogous to an open secondary transformer.
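The slip and rotor frequency definitions above reduce to two one-line functions; the 50 Hz, 5% slip numbers from the text check out:

def slip(ns_rpm, n_rpm):
    """s = (Ns - N) / Ns"""
    return (ns_rpm - n_rpm) / ns_rpm

def rotor_frequency(f_hz, s):
    """fr = s * f"""
    return s * f_hz

ns = 3000.0                    # 50 Hz, 2-pole synchronous speed, rpm
n = 2850.0                     # rotor running at 5% slip
s = slip(ns, n)
print(s)                       # 0.05
print(rotor_frequency(50, s))  # 2.5 Hz, matching the text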
There are several basic induction motor designs (Figure below) showing considerable variation from the torque curve above. The different designs are optimized for starting and running different types of loads. The locked rotor torque (LRT) for various motor designs and sizes ranges from 60% to 350% of full load torque (FLT). Starting current or locked rotor current (LRC) can range from 500% to 1400% of full load current (FLC). This current draw can present a starting problem for large induction motors.

NEMA and IEC Motor Classes

Various standard classes (or designs) for motors, corresponding to the torque curves (Figure below), have been developed to better drive various types of loads. The National Electrical Manufacturers Association (NEMA) has specified motor classes A, B, C, and D to meet these drive requirements. The similar International Electrotechnical Commission (IEC) classes N and H correspond to NEMA B and C designs respectively.

Characteristics for NEMA designs.

All motors, except class D, operate at 5% slip or less at full load.

• Class B (IEC Class N) motors are the default motor to use in most applications. With a starting torque of LRT = 150% to 170% of FLT, it can start most loads, without excessive starting current (LRC). Efficiency and power factor are high. It typically drives pumps, fans, and machine tools.
• Class A starting torque is the same as class B. Drop out torque and starting current (LRC) are higher. This motor handles transient overloads as encountered in injection molding machines.
• Class C (IEC Class H) has higher starting torque than classes A and B, at LRT = 200% of FLT. This motor is applied to hard-starting loads which need to be driven at constant speed, like conveyors, crushers, and reciprocating pumps and compressors.
• Class D motors have the highest starting torque (LRT), coupled with low starting current due to high slip (5% to 13% at FLT). The high slip results in lower speed. Speed regulation is poor. However, the motor excels at driving highly variable speed loads like those requiring an energy storage flywheel. Applications include punch presses, shears, and elevators.
• Class E motors are a higher efficiency version of class B.
• Class F motors have much lower LRC, LRT, and breakdown torque than class B. They drive constant, easily started loads.

Power Factor in Induction Motors

Induction motors present a lagging (inductive) power factor to the power line. The power factor in large, fully loaded, high speed motors can be as favorable as 90%. At 3/4 full load, the largest high speed motor power factor can be 92%. The power factor for small low speed motors can be as low as 50%. At starting, the power factor can be in the range of 10% to 25%, rising as the rotor achieves speed. Power factor (PF) varies considerably with the motor mechanical load (Figure below). An unloaded motor is analogous to a transformer with no resistive load on the secondary. Little resistance is reflected from the secondary (rotor) to the primary (stator). Thus the power line sees a reactive load, as low as 10% PF. As the rotor is loaded, an increasing resistive component is reflected from rotor to stator, increasing the power factor.

Induction motor power factor and efficiency.

Efficiency in Induction Motors

Large three phase motors are more efficient than smaller 3-phase motors, and nearly all single phase motors. Large induction motor efficiency can be as high as 95% at full load, though 90% is more common.
Efficiency for a lightly loaded or unloaded induction motor is poor because most of the current is involved with maintaining magnetizing flux. As the torque load is increased, more current is consumed in generating torque, while the current associated with magnetizing remains fixed. Efficiency at 75% FLT can be slightly higher than that at 100% FLT. Efficiency is decreased a few percent at 50% FLT, and decreased a few more percent at 25% FLT. Efficiency only becomes poor below 25% FLT. The variation of efficiency with loading is shown in the figure above. Induction motors are typically oversized to guarantee that their mechanical load can be started and driven under all operating conditions. If a polyphase motor is loaded at less than 75% of rated torque, where efficiency peaks, efficiency suffers only slightly down to 25% FLT.

Nola Power Factor Corrector

Frank Nola of NASA proposed a power factor corrector (PFC) as an energy saving device for single phase induction motors in the late 1970’s. It is based on the premise that a less than fully loaded induction motor is less efficient and has a lower power factor than a fully loaded motor. Thus, there is energy to be saved in partially loaded motors, 1-φ motors in particular. The energy consumed in maintaining the stator magnetic field is relatively fixed with respect to load changes. While there is nothing to be saved in a fully loaded motor, the voltage to a partially loaded motor may be reduced to decrease the energy required to maintain the magnetic field. This will increase power factor and efficiency. This was a good concept for the notoriously inefficient single phase motors for which it was intended. This concept is not very applicable to large 3-phase motors. Because of their high efficiency (90%+), there is not much energy to be saved. Moreover, a 95% efficient motor is still 94% efficient at 50% full load torque (FLT) and 90% efficient at 25% FLT. The potential energy savings in going from 100% FLT to 25% FLT is the difference in efficiency, 95% - 90% = 5%. This is not 5% of the full load wattage, but 5% of the wattage at the reduced load. The Nola power factor corrector might be applicable to a 3-phase motor which idles most of the time (below 25% FLT), like a punch press. The pay-back period for the expensive electronic controller has been estimated to be unattractive for most applications. Though, it might be economical as part of an electronic motor starter or speed control. [7]

Induction Motors as Alternators

An induction motor may function as an alternator if it is driven by a torque at greater than 100% of the synchronous speed. (Figure below) This corresponds to a few percent of “negative” slip, say -1% slip. This means that as we are rotating the motor faster than the synchronous speed, the rotor is advancing 1% faster than the stator rotating magnetic field. It normally lags by 1% in a motor. Since the rotor is cutting the stator magnetic field in the opposite direction (leading), the rotor induces a voltage into the stator, feeding electrical energy back into the power line.

Negative torque makes induction motor into generator.

Such an induction generator must be excited by a “live” source of 50 or 60 Hz power. No power can be generated in the event of a power company power failure. This type of alternator appears to be unsuited as a standby power source. As an auxiliary power wind turbine generator, it has the advantage of not requiring an automatic power failure disconnect switch to protect repair crews. It is fail-safe.
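A quick numeric check of the negative slip claim above, using the slip definition from earlier and an assumed 4-pole, 60 Hz machine:

ns = 1800.0          # 4-pole, 60 Hz synchronous speed, rpm
n_gen = 1818.0       # driven 1% above synchronous speed
s = (ns - n_gen) / ns
print(s)             # -0.01: negative slip, power flows back to the line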
Small installations remote from the power grid may be made self-exciting by placing capacitors in parallel with the stator phases. If the load is removed, residual magnetism may generate a small amount of current flow. The capacitors allow this current to flow without dissipating power. As the generator is brought up to full speed, the current flow increases to supply a magnetizing current to the stator. The load may be applied at this point. Voltage regulation is poor. An induction motor may be converted to a self-excited generator by the addition of capacitors. [6]

The start-up procedure is to bring the wind turbine up to speed in motor mode by application of normal power line voltage to the stator. Any wind induced turbine speed in excess of synchronous speed will develop negative torque, feeding power back into the power line, reversing the normal direction of the electric kilowatt-hour meter. Whereas an induction motor presents a lagging power factor to the power line, an induction alternator presents a leading power factor. Induction generators are not widely used in conventional power plants. The speed of the steam turbine drive is steady and controllable, as required by synchronous alternators. Synchronous alternators are also more efficient. The speed of a wind turbine is difficult to control and is subject to wind speed variation by gusts. An induction alternator is better able to cope with these variations due to the inherent slip. This stresses the gear train and mechanical components less than a synchronous generator. However, this allowable speed variation only amounts to about 1%. Thus, a direct line connected induction generator is considered to be fixed-speed in a wind turbine. See Doubly-fed induction generator for a true variable speed alternator. Multiple generators or multiple windings on a common shaft may be switched to provide a high and low speed to accommodate variable wind conditions.

Motor Starting and Speed Control

Some induction motors can draw over 1000% of full load current during starting; though, a few hundred percent is more common. Small motors of a few kilowatts or smaller can be started by direct connection to the power line. Starting larger motors can cause line voltage sag, affecting other loads. Motor-start rated circuit breakers (analogous to slow blow fuses) should replace standard circuit breakers for starting motors of a few kilowatts. This breaker accepts high over-current for the duration of starting.

Autotransformer induction motor starter.

Motors over 50 kW use motor starters to reduce line current from several hundred to a few hundred percent of full load current. An intermittent duty autotransformer may reduce the stator voltage for a fraction of a minute during the start interval, followed by application of full line voltage as in the figure above. Closure of the S contacts applies reduced voltage during the start interval. The S contacts open and the R contacts close after starting. This reduces starting current to, say, 200% of full load current. Since the autotransformer is only used for the short start interval, it may be sized considerably smaller than a continuous duty unit.

Running Three-Phase Motors on Single-Phase Provisions

Three-phase motors will run on single phase as readily as single phase motors. The only problem for either motor is starting. Sometimes 3-phase motors are purchased for use on single phase if three-phase provisioning is anticipated.
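For the autotransformer starter described above, the standard ideal-transformer result (not stated in the text) is that a tap fraction k reduces the motor current by k and the line current by k squared. A sketch with an assumed 600% direct-on-line locked rotor current shows why a tap near 58% yields the 200% figure quoted:

import math

def autotransformer_line_current(lrc_pct, tap):
    """Line current with an autotransformer starter at tap fraction k.
    Motor current falls by k; line current by k**2 (ideal transformer)."""
    return lrc_pct * tap ** 2

lrc = 600.0                                     # assumed direct-on-line LRC, % FLC
print(autotransformer_line_current(lrc, 0.58))  # ~202% of full load current
print(math.sqrt(200.0 / lrc))                   # tap needed for 200%: ~0.58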
The power rating needs to be 50% larger than for a comparable single phase motor to make up for one unused winding. Single phase is applied to a pair of windings simultaneously, with a start capacitor in series with the third winding. The start switch is opened in the figure below upon motor start. Sometimes a smaller capacitor than the start capacitor is retained while running.

Starting a three-phase motor on single phase.

The circuit in the figure above for running a three-phase motor on single phase is known as a static phase converter if the motor shaft is not loaded. Moreover, the motor acts as a 3-phase generator. Three phase power may be tapped off from the three stator windings for powering other 3-phase equipment. The capacitor supplies a synthetic phase approximately midway ∠90o between the ∠180o single phase power source terminals for starting. While running, the motor generates approximately standard 3-φ, as shown in the figure above. Matt Isserstedt shows a complete design for powering a home machine shop. [8]

Self-starting static phase converter. Run capacitor = 25-30 µF per HP. Adapted from Figure 7, Hanrahan. [9]

Since a static phase converter has no torque load, it may be started with a capacitor considerably smaller than a normal start capacitor. If it is small enough, it may be left in circuit as a run-capacitor. See the figure above. However, smaller run-capacitors result in better 3-phase power output, as in the figure below. Moreover, adjustment of these capacitors to equalize the currents as measured in the three phases results in the most efficient machine. [9] However, a large start capacitor is required for about a second to quickly start the converter. Hanrahan provides construction details. [9]

More efficient static phase converter. Start capacitor = 50-100 µF/HP. Run capacitors = 12-16 µF/HP. Adapted from Figure 1, Hanrahan. [9]

Induction Motors with Multiple Fields

Induction motors may contain multiple field windings, for example a 4-pole and an 8-pole winding corresponding to 1800 and 900 rpm synchronous speeds. Energizing one field or the other is less complex than rewiring the stator coils in the figure below.

Multiple fields allow speed change.

If the field is segmented with leads brought out, it may be rewired (or switched) from 4-pole to 2-pole as shown above for a 2-phase motor. The 22.5o segments are switchable to 45o segments. Only the wiring for one phase is shown above for clarity. Thus, our induction motor may run at multiple speeds. When switching the above 60 Hz motor from 4 poles to 2 poles, the synchronous speed increases from 1800 rpm to 3600 rpm. If the motor is driven by 50 Hz, what would be the corresponding 4-pole and 2-pole synchronous speeds?

Induction Motors with Variable Voltage

The speed of small squirrel cage induction motors for applications such as driving fans may be changed by reducing the line voltage. This reduces the torque available to the load, which reduces the speed (see figure below).

Variable voltage controls induction motor speed.

Electronic Speed Control in Induction Motors

Modern solid-state electronics increase the options for speed control. By changing the 50 or 60 Hz line frequency to higher or lower values, the synchronous speed of the motor may be changed. However, decreasing the frequency of the current fed to the motor also decreases reactance XL, which increases the stator current. This may cause the stator magnetic circuit to saturate with disastrous results.
In practice, the voltage to the motor needs to be decreased when frequency is decreased.

Electronic variable speed drive.

Conversely, the drive frequency may be increased to increase the synchronous speed of the motor. However, the voltage needs to be increased to overcome increasing reactance to keep current up to a normal value and maintain torque. The inverter (Figure above) approximates sinewaves to the motor with pulse width modulation outputs. This is a chopped waveform which is either on or off, high or low; the percentage of “on” time corresponds to the instantaneous sine wave voltage. Once electronics is applied to induction motor control, many control methods are available, varying from the simple to the complex:

• Scalar Control: A low cost method, described above, controlling only voltage and frequency, without feedback. (A numeric sketch of the volts-per-hertz rule appears at the end of this section.)
• Vector Control: Also known as vector phase control. The flux and torque producing components of stator current are measured or estimated on a real-time basis to enhance the motor torque-speed curve. This is computation intensive.
• Direct Torque Control: An elaborate adaptive motor model allows more direct control of flux and torque without feedback. This method quickly responds to load changes.

Tesla Polyphase Induction Motors Summary

• A polyphase induction motor consists of a polyphase winding embedded in a laminated stator and a conductive squirrel cage embedded in a laminated rotor.
• Three phase currents flowing within the stator create a rotating magnetic field which induces a current, and consequent magnetic field, in the rotor. Rotor torque is developed as the rotor slips a little behind the rotating stator field.
• Unlike single phase motors, polyphase induction motors are self-starting.
• Motor starters minimize loading of the power line while providing a larger starting torque than required during running. Line current reducing starters are only required for large motors.
• Three phase motors will run on single phase, if started.
• A static phase converter is a three phase motor running on single phase with no shaft load, generating a 3-phase output.
• Multiple field windings can be rewired for multiple discrete motor speeds by changing the number of poles.

Linear Induction Motors

The wound stator and the squirrel cage rotor of an induction motor may be cut at the circumference and unrolled into a linear induction motor. The direction of linear travel is controlled by the sequence of the drive to the stator phases. The linear induction motor has been proposed as a drive for high-speed passenger trains. Up to this point, the linear induction motor with the accompanying magnetic repulsion levitation system required for a smooth ride has been too costly for all but experimental installations. However, the linear induction motor is scheduled to replace steam driven catapult aircraft launch systems on the next generation of naval aircraft carrier, CVNX-1, in 2013. This will increase efficiency and reduce maintenance. [4][5]
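The volts-per-hertz rule referenced in the Scalar Control bullet above can be sketched in a few lines. The nameplate values are illustrative assumptions, and capping the voltage at its rated value above base frequency reflects the practical supply limit:

V_RATED = 460.0    # illustrative motor nameplate voltage
F_RATED = 60.0     # illustrative base frequency, Hz

def scalar_drive_voltage(f_cmd):
    """Hold volts-per-hertz constant below base speed to avoid
    saturating the stator; the available voltage tops out at rated."""
    return min(V_RATED, V_RATED * f_cmd / F_RATED)

for f in (15, 30, 60, 90):
    print(f, "Hz ->", scalar_drive_voltage(f), "V")
# 15 Hz -> 115 V, 30 Hz -> 230 V, 60 Hz -> 460 V, 90 Hz -> 460 V (capped)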
A wound rotor induction motor has a stator like the squirrel cage induction motor, but a rotor with insulated windings brought out via slip rings and brushes. However, no power is applied to the slip rings. Their sole purpose is to allow resistance to be placed in series with the rotor windings while starting. (Figure below) This resistance is shorted out once the motor is started to make the rotor look electrically like the squirrel cage counterpart.

Wound rotor induction motor.

Why put resistance in series with the rotor? Squirrel cage induction motors draw 500% to over 1000% of full load current (FLC) during starting. While this is not a severe problem for small motors, it is for large (10’s of kW) motors. Placing resistance in series with the rotor windings not only decreases start current, locked rotor current (LRC), but also increases the starting torque, locked rotor torque (LRT). The figure below shows that by increasing the rotor resistance from R0 to R1 to R2, the breakdown torque peak is shifted left to zero speed. Note that this torque peak is much higher than the starting torque available with no rotor resistance (R0). Slip is proportional to rotor resistance, and pullout torque is proportional to slip. Thus, high torque is produced while starting.

Breakdown torque peak is shifted to zero speed by increasing rotor resistance.

The resistance decreases the torque available at full running speed. But that resistance is shorted out by the time the rotor is started. A shorted rotor operates like a squirrel cage rotor. Heat generated during starting is mostly dissipated external to the motor in the starting resistance. The complication and maintenance associated with brushes and slip rings is a disadvantage of the wound rotor as compared to the simple squirrel cage rotor. This motor is suited for starting high inertia loads. A high starting resistance makes the high pull out torque available at zero speed. For comparison, a squirrel cage rotor only exhibits pull out (peak) torque at 80% of its synchronous speed.

Speed control

Motor speed may be varied by putting variable resistance back into the rotor circuit. This reduces rotor current and speed. The high starting torque available at zero speed, the down shifted breakdown torque, is not available at high speed. See the R2 curve at 90% Ns, in the figure below. Resistors R0, R1, R2, and R3 increase in value from zero. A higher resistance at R3 reduces the speed further. Speed regulation is poor with respect to changing torque loads. This speed control technique is only useful over a range of 50% to 100% of full speed. Speed control works well with variable speed loads like elevators and printing presses.

Rotor resistance controls speed of wound rotor induction motor.

Doubly-fed induction generator

We previously described a squirrel cage induction motor acting like a generator if driven faster than the synchronous speed. (See Induction motor alternator) This is a singly-fed induction generator, having electrical connections only to the stator windings. A wound rotor induction motor may also act as a generator when driven above the synchronous speed. Since there are connections to both the stator and rotor, such a machine is known as a doubly-fed induction generator (DFIG).

Rotor resistance allows over-speed of doubly-fed induction generator.

The singly-fed induction generator only had a usable slip range of 1% when driven by troublesome wind torque.
Since the speed of a wound rotor induction motor may be controlled over a range of 50-100% by inserting resistance in the rotor, we may expect the same of the doubly-fed induction generator. Not only can we slow the rotor by 50%, we can also overspeed it by 50%. That is, we can vary the speed of a doubly-fed induction generator by ±50% from the synchronous speed. In actual practice, ±30% is more practical. If the generator over-speeds, resistance placed in the rotor circuit will absorb excess energy while the stator feeds constant 60 Hz to the power line. (Figure above) In the case of under-speed, negative resistance inserted into the rotor circuit can make up the energy deficit, still allowing the stator to feed the power line with 60 Hz power.

Converter recovers energy from rotor of doubly-fed induction generator.

In actual practice, the rotor resistance may be replaced by a converter (Figure above) absorbing power from the rotor, and feeding power into the power line instead of dissipating it. This improves the efficiency of the generator.

Converter borrows energy from power line for rotor of doubly-fed induction generator, allowing it to function well under synchronous speed.

The converter may “borrow” power from the line for the under-speed rotor, which passes it on to the stator. (Figure above) The borrowed power, along with the larger shaft energy, passes to the stator, which is connected to the power line. The stator appears to be supplying 130% of power to the line. Keep in mind that the rotor “borrows” 30%, leaving the line with 100% for the theoretical lossless DFIG.

Wound rotor induction motor qualities.

• Excellent starting torque for high inertia loads.
• Low starting current compared to squirrel cage induction motor.
• Speed is resistance variable over 50% to 100% full speed.
• Higher maintenance of brushes and slip rings compared to squirrel cage motor.
• The generator version of the wound rotor machine is known as a doubly-fed induction generator, a variable speed machine.
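To keep the DFIG power bookkeeping above straight, here is a minimal per-unit sketch. It is lossless, signs are chosen so positive means power delivered to the line, and the 30% figures follow the discussion above rather than any general formula:

def dfig_net_to_line(p_stator, p_rotor):
    """Net line power = stator output plus (or minus) converter flow."""
    return p_stator + p_rotor

print(dfig_net_to_line(1.0, 0.3))   # over-speed: converter also exports, 1.3 total
print(dfig_net_to_line(1.3, -0.3))  # under-speed: rotor borrows 0.3, net 1.0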
A three phase motor may be run from a single phase power source. (Figure below) However, it will not self-start. It may be hand started in either direction, coming up to speed in a few seconds. It will only develop 2/3 of the 3-φ power rating because one winding is not used.

3-φ motor runs from 1-φ power, but does not start.

Single Coil of a Single Phase Motor

The single coil of a single phase induction motor does not produce a rotating magnetic field, but a pulsating field reaching maximum intensity at 0o and 180o electrical. (Figure below)

Single phase stator produces a nonrotating, pulsating magnetic field.

Another view is that the single coil excited by a single phase current produces two counter rotating magnetic field phasors, coinciding twice per revolution at 0o (Figure above-a) and 180o (figure e). When the phasors rotate to 90o and -90o, they cancel in figure b. At 45o and -45o (figure c) they are partially additive along the +x axis and cancel along the y axis. An analogous situation exists in figure d. The sum of these two phasors is a phasor stationary in space, but alternating polarity in time. Thus, no starting torque is developed.

However, if the rotor is rotated forward at a bit less than the synchronous speed, it will develop maximum torque at 10% slip with respect to the forward rotating phasor. Less torque will be developed above or below 10% slip. The rotor will see 200% - 10% = 190% slip with respect to the counter rotating magnetic field phasor. Little torque (see torque vs slip curve) other than a double frequency ripple is developed from the counter rotating phasor. Thus, the single phase coil will develop torque, once the rotor is started. If the rotor is started in the reverse direction, it will develop a similar large torque as it nears the speed of the backward rotating phasor. Single phase induction motors have a copper or aluminum squirrel cage embedded in a cylinder of steel laminations, typical of poly-phase induction motors.

Permanent-Split Capacitor Motor

One way to solve the single phase problem is to build a 2-phase motor, deriving 2-phase power from single phase. This requires a motor with two windings spaced apart 90o electrical, fed with two phases of current displaced 90o in time. This is called a permanent-split capacitor motor, shown in the figure below.

Permanent-split capacitor induction motor.

This type of motor suffers increased current magnitude and backward time shift as the motor comes up to speed, with torque pulsations at full speed. The solution is to keep the capacitor (impedance) small to minimize losses. The losses are less than for a shaded pole motor. This motor configuration works well up to 1/4 horsepower (200 watt), though it is usually applied to smaller motors. The direction of the motor is easily reversed by switching the capacitor in series with the other winding. This type of motor can be adapted for use as a servo motor, described elsewhere in this chapter.

Single phase induction motor with embedded stator coils.

Single phase induction motors may have coils embedded into the stator, as shown in the figure above, for larger size motors. Though, the smaller sizes use concentrated windings with salient poles, which are less complex to build.

Capacitor-Start Induction Motor

In the figure below, a larger capacitor may be used to start a single phase induction motor via the auxiliary winding, provided it is switched out by a centrifugal switch once the motor is up to speed.
Moreover, the auxiliary winding may be many more turns of heavier wire than used in a resistance split-phase motor to mitigate excessive temperature rise. The result is that more starting torque is available for heavy loads like air conditioning compressors. This motor configuration works so well that it is available in multi-horsepower (multi-kilowatt) sizes.

Capacitor-start induction motor.

Capacitor-Run Induction Motor

A variation of the capacitor-start motor (Figure below) is to start the motor with a relatively large capacitor for high starting torque, but leave a smaller value capacitor in place after starting to improve running characteristics while not drawing excessive current. The additional complexity of the capacitor-run motor is justified for larger size motors.

Capacitor-run induction motor.

A motor starting capacitor may be a double-anode non-polar electrolytic capacitor, which could be two + to + (or - to -) series connected polarized electrolytic capacitors. Such AC rated electrolytic capacitors have such high losses that they can only be used for intermittent duty (1 second on, 60 seconds off) like motor starting. A capacitor for motor running must not be of electrolytic construction, but a lower loss polymer type.

Resistance Split-Phase Motor

If an auxiliary winding of much fewer turns of smaller wire is placed at 90o electrical to the main winding, it can start a single phase induction motor. (Figure below) With lower inductance and higher resistance, the current will experience less phase shift than the main winding. About 30o of phase difference may be obtained. This coil produces a moderate starting torque, which is disconnected by a centrifugal switch at 3/4 of synchronous speed. This simple (no capacitor) arrangement serves well for motors up to 1/3 horsepower (250 watts) driving easily started loads.

Resistance split-phase induction motor.

This motor has more starting torque than a shaded pole motor (next section), but not as much as a two phase motor built from the same parts. The current density in the auxiliary winding is so high during starting that the consequent rapid temperature rise precludes frequent restarting or slow starting loads.

Nola Power Factor Corrector

Frank Nola of NASA proposed a power factor corrector for improving the efficiency of AC induction motors in the mid-1970’s. It is based on the premise that induction motors are inefficient at less than full load. This inefficiency correlates with a low power factor. The less than unity power factor is due to the magnetizing current required by the stator. This fixed current is a larger proportion of total motor current as motor load is decreased. At light load, the full magnetizing current is not required. It could be reduced by decreasing the applied voltage, improving the power factor and efficiency. The power factor corrector senses power factor, and decreases motor voltage, thus restoring a higher power factor and decreasing losses. Since single-phase motors are about 2 to 4 times as inefficient as three-phase motors, there are potential energy savings for 1-φ motors. There are no savings for a fully loaded motor, since all the stator magnetizing current is required. The voltage cannot be reduced. But there are potential savings from a less than fully loaded motor. A nominal 117 VAC motor is designed to work at as high as 127 VAC, and as low as 104 VAC. That means that it is not fully loaded when operated at greater than 104 VAC, for example, a 117 VAC refrigerator.
It is safe for the power factor controller to lower the line voltage to 104-110 VAC. The higher the initial line voltage, the greater the potential savings. Of course, if the power company delivers closer to 110 VAC, the motor will operate more efficiently without any add-on device. Any substantially idle (25% FLC or less) single phase induction motor is a candidate for a PFC. Though, it needs to operate a large number of hours per year. And the more time it idles, as in a lumber saw, punch press, or conveyor, the greater the possibility of paying for the controller in a few years’ operation. It should be easier to pay for it, by a factor of three, as compared to the more efficient 3-φ motor. The cost of a PFC cannot be recovered for a motor operating only a few hours per day. [7]

Summary: Single-phase induction motors

• Single-phase induction motors are not self-starting without an auxiliary stator winding driven by an out of phase current of near 90o. Once started, the auxiliary winding is optional.
• The auxiliary winding of a permanent-split capacitor motor has a capacitor in series with it during starting and running.
• A capacitor-start induction motor only has a capacitor in series with the auxiliary winding during starting.
• A capacitor-run motor typically has a large non-polarized electrolytic capacitor in series with the auxiliary winding for starting, then a smaller non-electrolytic capacitor during running.
• The auxiliary winding of a resistance split-phase motor develops a phase difference versus the main winding during starting by virtue of the difference in resistance.
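The counter-rotating phasor decomposition described earlier in this section is easy to verify numerically: a field pulsating as cos(ωt) along one axis is exactly the sum of two half-amplitude phasors rotating in opposite directions. A short demonstration (values illustrative):

import cmath, math

for deg in (0, 45, 90, 135, 180):
    wt = math.radians(deg)
    forward = 0.5 * cmath.exp(1j * wt)     # rotates counterclockwise
    backward = 0.5 * cmath.exp(-1j * wt)   # rotates clockwise
    total = forward + backward             # always real: cos(wt) on the x axis
    print(deg, round(total.real, 3), round(total.imag, 3))
# The imaginary part stays 0: the field only pulsates, so no starting torque.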
Shaded pole induction motor

An easy way to provide starting torque to a single phase motor is to embed a shorted turn in each pole at 30o to 60o to the main winding. (Figure below) Typically, 1/3 of the pole is enclosed by a bare copper strap. These shading coils produce a time lagging damped flux spaced 30o to 60o from the main field. This lagging flux, with the undamped main component, produces a rotating field with a small torque to start the rotor.

Shaded pole induction motor, (a) dual coil design, (b) smaller single coil version.

Starting torque is so low that shaded pole motors are only manufactured in smaller sizes, below 50 watts. Low cost and simplicity suit this motor to small fans, air circulators, and other low torque applications. Motor speed can be lowered by switching reactance in series to limit current and torque, or by switching motor coil taps as in the figure below.

Speed control of shaded pole motor.

2-phase servo motor

A servo motor is typically part of a feedback loop containing electronic, mechanical, and electrical components. The servo loop is a means of controlling the motion of an object via the motor. A requirement of many such systems is fast response. To reduce acceleration-robbing inertia, the iron core is removed from the rotor, leaving only a shaft mounted aluminum cup to rotate. (Figure below) The iron core is reinserted within the cup as a static (non-rotating) component to complete the magnetic circuit. Otherwise, the construction is typical of a two phase motor. The low mass rotor can accelerate more rapidly than a squirrel cage rotor.

High acceleration 2-φ AC servo motor.

One phase is connected to the single phase line; the other is driven by an amplifier. One of the windings is driven by a 90o phase shifted waveform. In the above figure, this is accomplished by a series capacitor in the power line winding. The other winding is driven by a variable amplitude sine wave to control motor speed. The phase of the waveform may invert (180o phase shift) to reverse the direction of the motor. This variable sine wave is the output of an error amplifier. See the synchro CT section for an example. Aircraft control surfaces may be positioned by 400 Hz 2-φ servo motors.

Hysteresis motor

If the low hysteresis Si-steel laminated rotor of an induction motor is replaced by a slotless, windingless cylinder of hardened magnet steel, hysteresis, or lagging behind of rotor magnetization, is greatly accentuated. The resulting low torque synchronous motor develops constant torque from stall to synchronous speed. Because of the low torque, the hysteresis motor is only available in very small sizes, and is only used for constant speed applications like clock drives and, formerly, phonograph turntables.

Eddy current clutch

If the stator of an induction motor or a synchronous motor is mounted to rotate independently of the rotor, an eddy current clutch results. The coils are excited with DC and attached to the mechanical load. The squirrel cage rotor is attached to the driving motor. The drive motor is started with no DC excitation to the clutch. The DC excitation is adjusted from zero to the desired final value, providing a continuously and smoothly variable torque. The operation of the eddy current clutch is similar to an analog eddy current automotive speedometer.

Summary: Other specialized motors

• The shaded pole induction motor, used in under 50 watt low torque applications, develops a second phase from shorted turns in the stator.
• Hysteresis motors are small, low torque synchronous motors once used in clocks and phonographs.
• The eddy current clutch provides an adjustable torque.
Normally, the rotor windings of a wound rotor induction motor are shorted out after starting. During starting, resistance may be placed in series with the rotor windings to limit starting current. If these windings are connected to a common starting resistance, the two rotors will remain synchronized during starting. (Figure below) This is useful for printing presses and drawbridges, where two motors need to be synchronized during starting. Once started, and the rotors are shorted, the synchronizing torque is absent. The higher the resistance during starting, the higher the synchronizing torque for a pair of motors. If the starting resistors are removed, but the rotors are still paralleled, there is no starting torque. However, there is a substantial synchronizing torque. This is a selsyn, which is an abbreviation for “self synchronous”.

Starting wound rotor induction motors from common resistors.

The rotors may be stationary. If one rotor is moved through an angle θ, the other selsyn shaft will move through an angle θ. If drag is applied to one selsyn, this will be felt when attempting to rotate the other shaft. While multi-horsepower (multi-kilowatt) selsyns exist, the main application is small units of a few watts for instrumentation applications– remote position indication.

Selsyns without starting resistance.

Instrumentation selsyns have no use for starting resistors. (Figure above) They are not intended to be self rotating. Since the rotors are not shorted out nor resistor loaded, no starting torque is developed. However, manual rotation of one shaft will produce an unbalance in the rotor currents until the parallel unit’s shaft follows. Note that a common source of three phase power is applied to both stators. Though we show three phase rotors above, a single phase powered rotor is sufficient, as shown in the figure below.

Transmitter - receiver

Small instrumentation selsyns, also known as synchros, use single phase paralleled, AC energized rotors, retaining the 3-phase paralleled stators, which are not externally energized. (Figure below) Synchros function as rotary transformers. If the rotors of both the torque transmitter (TX) and torque receiver (RX) are at the same angle, the phases of the induced stator voltages will be identical for both, and no current will flow. Should one rotor be displaced from the other, the stator phase voltages will differ between transmitter and receiver. Stator current will flow, developing torque. The receiver shaft is electrically slaved to the transmitter shaft. Either the transmitter or receiver shaft may be rotated to turn the opposite unit.

Synchros have single phase powered rotors.

Synchro stators are wound with 3-phase windings brought out to external terminals. The single rotor winding of a torque transmitter or receiver is brought out by brushed slip rings. Synchro transmitters and receivers are electrically identical. However, a synchro receiver has inertial damping built in. A synchro torque transmitter may be substituted for a torque receiver.

Remote position sensing is the main synchro application. (Figure below) For example, a synchro transmitter coupled to a radar antenna indicates antenna position on an indicator in a control room. A synchro transmitter coupled to a weather vane indicates wind direction at a remote console. Synchros are available for use with 240 Vac 50 Hz, 115 Vac 60 Hz, 115 Vac 400 Hz, and 26 Vac 400 Hz power.

Synchro application: remote position indication.
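As a rough numeric illustration of the matched stator voltage patterns just described, the sketch below assumes a simple cosine-coupling model with unit rotor excitation; the winding angles (0o, 120o, 240o) are the standard synchro spacing, but the normalization is illustrative, not taken from the text:

import math

def synchro_stator_voltages(theta_deg, v_rotor=1.0):
    """Relative stator voltages of a TX for shaft angle theta; each
    winding couples as the cosine of its angle to the rotor axis."""
    th = math.radians(theta_deg)
    return tuple(round(v_rotor * math.cos(th - math.radians(a)), 3)
                 for a in (0, 120, 240))

print(synchro_stator_voltages(0))    # (1.0, -0.5, -0.5)
print(synchro_stator_voltages(30))   # rotating the shaft shifts the pattern

When a TX and RX are at the same shaft angle, their stator voltage tuples match winding for winding, so no equalizing current, and hence no torque, flows.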
Differential transmitter - receiver

A synchro differential transmitter (TDX) has both a three phase rotor and stator. (Figure below) A synchro differential transmitter adds a shaft angle input to an electrical angle input on the rotor inputs, outputting the sum on the stator outputs. This stator electrical angle may be displayed by sending it to an RX. For example, a synchro receiver displays the position of a radar antenna relative to a ship’s bow. The addition of a ship’s compass heading by a synchro differential transmitter displays antenna position on an RX relative to true north, regardless of ship’s heading. Reversing the S1-S3 pair of stator leads between a TX and TDX subtracts angular positions.

Torque differential transmitter (TDX).

A shipboard radar antenna coupled to a synchro transmitter encodes the antenna angle with respect to ship’s bow. (Figure below) It is desired to display the antenna position with respect to true north. We need to add the ship’s heading from a gyrocompass to the bow-relative antenna position to display the antenna angle with respect to true north: ∠antenna-N = ∠antenna + ∠gyro.

Torque differential transmitter application: angular addition.

In synchro terms, ∠rx = ∠tx + ∠gy. For example, if the ship’s heading is ∠30o and the antenna position relative to the ship’s bow is ∠0o, ∠antenna-N is: ∠30o = ∠30o + ∠0o. If the ship’s heading is ∠30o and the antenna position relative to the ship’s bow is ∠15o, ∠antenna-N is: ∠45o = ∠30o + ∠15o.

Addition vs subtraction

For reference, we show the wiring diagrams for subtraction and addition of shaft angles using both TDX’s (Torque Differential Transmitter) and TDR’s (Torque Differential Receiver). The TDX has a torque angle input on the shaft, an electrical angle input on the three stator connections, and an electrical angle output on the three rotor connections. The TDR has electrical angle inputs on both the stator and rotor. The angle output is a torque on the TDR shaft. The difference between a TDX and a TDR is that the TDX is a torque transmitter and the TDR a torque receiver.

TDX subtraction.

The torque inputs in the figure above are TX and TDX. The torque output angular difference is TR.

TDX addition.

The torque inputs in the figure above are TX and TDX. The torque output angular sum is TR.

TDR subtraction.

The torque inputs in the figure above are TX1 and TX2. The torque output angular difference is TDR.

TDR addition.

The torque inputs in the figure above are TX1 and TX2. The torque output angular sum is TDR.

Control transformer

A variation of the synchro transmitter is the control transformer. It has three equally spaced stator windings like a TX. Its rotor is wound with more turns than a transmitter or receiver to make it more sensitive at detecting a null as it is rotated, typically, by a servo system. The CT (Control Transformer) rotor output is zero when it is oriented at a right angle to the stator magnetic field vector. Unlike a TX or RX, the CT neither transmits nor receives torque. It is simply a sensitive angular position detector.

Control transformer (CT) detects servo null.

In the figure above, the shaft of the TX is set to the desired position of the radar antenna. The servo system will cause the servo motor to drive the antenna to the commanded position. The CT compares the commanded to actual position and signals the servo amplifier to drive the motor until that commanded angle is achieved.
Servo uses CT to sense antenna position null. When the control transformer rotor detects a null at 90o to the axis of the stator field, there is no rotor output. Any rotor displacement produces an AC error voltage proportional to displacement. A servo (Figure above) seeks to minimize the error between a commanded and measured variable due to negative feedback. The control transformer compares the shaft angle to the stator magnetic field angle, sent by the TX stator. When it measures a minimum, or null, the servo has driven the antenna and control transformer rotor to the commanded position. There is no error between measured and commanded position, and no CT (control transformer) output to be amplified. The servo motor, a 2-phase motor, stops rotating. However, any CT detected error drives the amplifier which drives the motor until the error is minimized. This corresponds to the servo system having driven the antenna coupled CT to match the angle commanded by the TX. The servo motor may drive a reduction gear train and be large compared to the TX and CT synchros. However, the poor efficiency of AC servo motors limits them to smaller loads. They are also difficult to control since they are constant speed devices. However, they can be controlled to some extent by varying the voltage to one phase with line voltage on the other phase. Heavy loads are more efficiently driven by large DC servo motors. Airborne applications use 400 Hz components– TX, CT, and servo motor. Size and weight of the AC magnetic components are inversely proportional to frequency. Therefore, use of 400 Hz components for aircraft applications, like moving control surfaces, saves size and weight. Resolver A resolver (Figure below) has two stator windings placed at 90o to each other, and a single rotor winding driven by alternating current. A resolver is used for polar to rectangular conversion. An angle input at the rotor shaft produces rectangular co-ordinates sinθ and cosθ proportional voltages on the stator windings. Resolver converts shaft angle to sine and cosine of angle. For example, a black-box within a radar encodes the distance to a target as a sine wave proportional voltage V, with the bearing angle as a shaft angle. Convert to X and Y co-ordinates. The sine wave is fed to the rotor of a resolver. The bearing angle shaft is coupled to the resolver shaft. The coordinates (X, Y) are available on the resolver stator coils: X = V cosθ and Y = V sinθ. The Cartesian coordinates (X, Y) may be plotted on a map display. A TX (torque transmitter) may be adapted for service as a resolver. (Figure below) Scott-T converts 3-φ to 2-φ enabling TX to perform resolver function. It is possible to derive resolver-like quadrature angular components from a synchro transmitter by using a Scott-T transformer. The three TX outputs, 3-phases, are processed by a Scott-T transformer into a pair of quadrature components. See the Scott-T section in chapter 9 for details. There is also a linear version of the resolver known as an inductosyn. The rotary version of the inductosyn has a finer resolution than a resolver. Summary: Selsyn (synchro) motors • A synchro, also known as a selsyn, is a rotary transformer used to transmit shaft torque. • A TX, torque transmitter, accepts a torque input at its shaft for transmission on three-phase electrical outputs. • An RX, torque receiver, accepts a three-phase electrical representation of an angular input for conversion to a torque output at its shaft. Thus, a TX transmits a torque from an input shaft to a remote RX output shaft.
• A TDX, torque differential transmitter, sums an electrical angle input with a shaft angle input, producing an electrical angle output. • A TDR, torque differential receiver, sums two electrical angle inputs, producing a shaft angle output. • A CT, control transformer, detects a null when its rotor is positioned at a right angle to the stator angle input. A CT is typically a component of a servo (feedback) system. • A resolver outputs a quadrature sinθ and cosθ representation of the shaft angle input instead of a three-phase output. • The three-phase output of a TX is converted to a resolver-style output by a Scott-T transformer.
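As a numeric footnote to the resolver summary above, this is the polar-to-rectangular conversion a resolver performs, done in a few lines of Python. The 10-unit, 30-degree target is an invented example value, not from the text.

import math

def resolver(v, bearing_deg):
    """Polar (V, theta) in; quadrature stator voltages (X, Y) out."""
    theta = math.radians(bearing_deg)
    return v * math.cos(theta), v * math.sin(theta)

x, y = resolver(v=10.0, bearing_deg=30.0)
print("X = %.3f, Y = %.3f" % (x, y))   # X = 8.660, Y = 5.000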
Charles Proteus Steinmetz’s first job after arriving in America was to investigate problems encountered in the design of the alternating current version of the brushed commutator motor. The situation was so bad that motors could not be designed ahead of the actual construction. The success or failure of a motor design was not known until after it was actually built at great expense and tested. He formulated the laws of magnetic hysteresis in finding a solution. Hysteresis is a lagging behind of the magnetic field strength as compared to the magnetizing force. This produces a loss not present in DC magnetics. Low hysteresis alloys and breaking the alloy into thin insulated laminations made it possible to accurately design AC commutator motors before building. AC commutator motors, like comparable DC motors, have higher starting torque and higher speed than AC induction motors. The series motor operates well above the synchronous speed of a conventional AC motor. AC commutator motors may be either single-phase or poly-phase. The single-phase AC version suffers a double line frequency torque pulsation, not present in poly-phase motors. Since a commutator motor can operate at a much higher speed than an induction motor, it can output more power than a similar size induction motor. However, commutator motors are not as maintenance free as induction motors, due to brush and commutator wear. Single phase series motor If a DC series motor equipped with a laminated field is connected to AC, the lagging reactance of the field coil will considerably reduce the field current. While such a motor will rotate, operation is marginal. While starting, armature windings connected to commutator segments shorted by the brushes look like shorted transformer turns to the field. This results in considerable arcing and sparking at the brushes as the armature begins to turn. This is less of a problem as speed increases, which shares the arcing and sparking between commutator segments. The lagging reactance and arcing brushes are only tolerable in very small uncompensated series AC motors operated at high speed. Series AC motors smaller than hand drills and kitchen mixers may be uncompensated. (Figure below) Uncompensated series AC motor. Compensated series motor The arcing and sparking is mitigated by placing a compensating winding in the stator in series with the armature, positioned so that its magnetomotive force (mmf) cancels out the armature AC mmf. (Figure below) A smaller motor air gap and fewer field turns reduce the lagging reactance in series with the armature, improving the power factor. All but very small AC commutator motors employ compensating windings. Motors as large as those employed in a kitchen mixer, or larger, use compensated stator windings. Compensated series AC motor. Universal motor It is possible to design small (under 300 watts) universal motors which run from either DC or AC. Very small universal motors may be uncompensated. Larger higher speed universal motors use a compensating winding. A motor will run slower on AC than DC due to the reactance encountered with AC. However, the peaks of the sine waves saturate the magnetic path, reducing total flux below the DC value, which increases the speed of the “series” motor. Thus, the offsetting effects result in a nearly constant speed from DC to 60 Hz. Small line operated appliances, such as drills, vacuum cleaners, and mixers, requiring 3000 to 10,000 rpm use universal motors.
However, the development of solid state rectifiers and inexpensive permanent magnets is making the DC permanent magnet motor a viable alternative. Repulsion motor A repulsion motor (Figure below) consists of a field directly connected to the AC line voltage and a pair of shorted brushes offset by 15o to 25o from the field axis. The field induces a current flow into the shorted armature whose magnetic field opposes that of the field coils. Speed can be controlled by rotating the brushes with respect to the field axis. This motor has superior commutation below synchronous speed, inferior commutation above synchronous speed. Low starting current produces high starting torque. Repulsion AC motor. Repulsion start induction motor When an induction motor drives a hard starting load like a compressor, the high starting torque of the repulsion motor may be put to use. The induction motor rotor windings are brought out to commutator segments for starting by a pair of shorted brushes. At near running speed, a centrifugal switch shorts out all commutator segments, giving the effect of a squirrel cage rotor. The brushes may also be lifted to prolong brush life. Starting torque is 300% to 600% of the full speed value as compared to under 200% for a pure induction motor. Summary: AC commutator motors • The single phase series motor is an attempt to build a motor like a DC commutator motor. The resulting motor is only practical in the smallest sizes. • The addition of a compensating winding yields the compensated series motor, overcoming excessive commutator sparking. Most AC commutator motors are this type. At high speed this motor provides more power than a same-size induction motor, but is not maintenance free. • It is possible to produce small appliance motors powered by either AC or DC. This is known as a universal motor. • The AC line is directly connected to the stator of a repulsion motor with the commutator shorted by the brushes. • Retractable shorted brushes may start a wound rotor induction motor. This is known as a repulsion start induction motor.
Early in my explorations of electricity, I came across a length of coaxial cable with the label “50 ohms” printed along its outer sheath. (Figure below) Now, coaxial cable is a two-conductor cable made of a single conductor surrounded by a braided wire jacket, with a plastic insulating material separating the two. As such, the outer (braided) conductor completely surrounds the inner (single wire) conductor, the two conductors insulated from each other for the entire length of the cable. This type of cabling is often used to conduct weak (low-amplitude) voltage signals, due to its excellent ability to shield such signals from external interference. Coaxial cable construction. I was mystified by the “50 ohms” label on this coaxial cable. How could two conductors, insulated from each other by a relatively thick layer of plastic, have 50 ohms of resistance between them? Measuring resistance between the outer and inner conductors with my ohmmeter, I found it to be infinite (open-circuit), just as I would have expected from two insulated conductors. Measuring each of the two conductors’ resistances from one end of the cable to the other indicated nearly zero ohms of resistance: again, exactly what I would have expected from continuous, unbroken lengths of wire. Nowhere was I able to measure 50 Ω of resistance on this cable, regardless of which points I connected my ohmmeter between. What I didn’t understand at the time was the cable’s response to short-duration voltage “pulses” and high-frequency AC signals. Continuous direct current (DC)—such as that used by my ohmmeter to check the cable’s resistance—shows the two conductors to be completely insulated from each other, with nearly infinite resistance between the two. However, due to the effects of capacitance and inductance distributed along the length of the cable, the cable’s response to rapidly-changing voltages is such that it acts as a finite impedance, drawing current proportional to an applied voltage. What we would normally dismiss as being just a pair of wires becomes an important circuit element in the presence of transient and high-frequency AC signals, with characteristic properties all its own. When expressing such properties, we refer to the wire pair as a transmission line. This chapter explores transmission line behavior. Many transmission line effects do not appear in significant measure in AC circuits of powerline frequency (50 or 60 Hz), or in continuous DC circuits, and so we haven’t had to concern ourselves with them in our study of electric circuits thus far. However, in circuits involving high frequencies and/or extremely long cable lengths, the effects are very significant. Practical applications of transmission line effects abound in radio-frequency (“RF”) communication circuitry, including computer networks, and in low-frequency circuits subject to voltage transients (“surges”) such as lightning strikes on power lines. 14.02: Circuits and the Speed of Light Suppose we had a simple one-battery, one-lamp circuit controlled by a switch. When the switch is closed, the lamp immediately lights. When the switch is opened, the lamp immediately darkens: (Figure below) Lamp appears to immediately respond to switch. Actually, an incandescent lamp takes a short time for its filament to warm up and emit light after receiving an electric current of sufficient magnitude to power it, so the effect is not instant.
However, what I’d like to focus on is the immediacy of the electric current itself, not the response time of the lamp filament. For all practical purposes, the effect of switch action is instant at the lamp’s location. Although electrons move through wires very slowly, the overall effect of electrons pushing against each other happens at the speed of light (approximately 186,000 miles per second!). What would happen, though, if the wires carrying power to the lamp were 186,000 miles long? Since we know the effects of electricity do have a finite speed (albeit very fast), a set of very long wires should introduce a time delay into the circuit, delaying the switch’s action on the lamp: (Figure below) At the speed of light, lamp responds after 1 second. Assuming no warm-up time for the lamp filament, and no resistance along the 372,000 mile length of both wires, the lamp would light up approximately one second after the switch closure. Although the construction and operation of superconducting wires 372,000 miles in length would pose enormous practical problems, it is theoretically possible, and so this “thought experiment” is valid. When the switch is opened again, the lamp will continue to receive power for one second of time after the switch opens, then it will de-energize. One way of envisioning this is to imagine the electrons within a conductor as rail cars in a train: linked together with a small amount of “slack” or “play” in the couplings. When one rail car (electron) begins to move, it pushes on the one ahead of it and pulls on the one behind it, but not before the slack is relieved from the couplings. Thus, motion is transferred from car to car (from electron to electron) at a maximum velocity limited by the coupling slack, resulting in a much faster transfer of motion from the left end of the train (circuit) to the right end than the actual speed of the cars (electrons): (Figure below) Motion is transmitted successively from one car to the next. Another analogy, perhaps more fitting for the subject of transmission lines, is that of waves in water. Suppose a flat, wall-shaped object is suddenly moved horizontally along the surface of water, so as to produce a wave ahead of it. The wave will travel as water molecules bump into each other, transferring wave motion along the water’s surface far faster than the water molecules themselves are actually traveling: (Figure below) Wave motion in water. Likewise, electron motion “coupling” travels approximately at the speed of light, although the electrons themselves don’t move that quickly. In a very long circuit, this “coupling” speed would become noticeable to a human observer in the form of a short time delay between switch action and lamp action. Review • In an electric circuit, the effects of electron motion travel approximately at the speed of light, although electrons within the conductors do not travel anywhere near that velocity.
The Parallel Wires of Infinite Length Suppose, though, that we had a set of parallel wires of infinite length, with no lamp at the end. What would happen when we close the switch? Since there is no load at the end of the wires, this circuit is an open circuit. Would there be no current at all? (Figure below) Driving an infinite transmission line. Despite being able to avoid wire resistance through the use of superconductors in this “thought experiment,” we cannot eliminate capacitance along the wires’ lengths. Any pair of conductors separated by an insulating medium creates capacitance between those conductors: (Figure below) Equivalent circuit showing stray capacitance between conductors. Voltage applied between two conductors creates an electric field between those conductors. Energy is stored in this electric field, and this storage of energy results in an opposition to change in voltage. The reaction of a capacitance against changes in voltage is described by the equation i = C(de/dt), which tells us that current will be drawn proportional to the voltage’s rate of change over time. Thus, when the switch is closed, the capacitance between conductors will react against the sudden voltage increase by charging up and drawing current from the source. According to the equation, an instant rise in applied voltage (as produced by perfect switch closure) gives rise to an infinite charging current. Capacitance and Inductance However, the current drawn by a pair of parallel wires will not be infinite, because there exists series impedance along the wires due to inductance. (Figure below) Remember that current through any conductor develops a magnetic field of proportional magnitude. Energy is stored in this magnetic field, (Figure below) and this storage of energy results in an opposition to change in current. Each wire develops a magnetic field as it carries charging current for the capacitance between the wires, and in so doing drops voltage according to the inductance equation e = L(di/dt). This voltage drop limits the voltage rate-of-change across the distributed capacitance, preventing the current from ever reaching an infinite magnitude: Equivalent circuit showing stray capacitance and inductance. Voltage charges capacitance, current charges inductance. Because the electrons in the two wires transfer motion to and from each other at nearly the speed of light, the “wave front” of voltage and current change will propagate down the length of the wires at that same velocity, resulting in the distributed capacitance and inductance progressively charging to full voltage and current, respectively, like this: (Figures below, below, below, below) Uncharged transmission line. Begin wave propagation. Continue wave propagation. Propagate at speed of light. The Transmission Line The end result of these interactions is a constant current of limited magnitude through the battery source. Since the wires are infinitely long, their distributed capacitance will never fully charge to the source voltage, and their distributed inductance will never allow unlimited charging current. In other words, this pair of wires will draw current from the source so long as the switch is closed, behaving as a constant load. No longer are the wires merely conductors of electrical current and carriers of voltage, but now constitute a circuit component in themselves, with unique characteristics. No longer are the two wires merely a pair of conductors, but rather a transmission line.
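The progressive charging just described can be imitated numerically by modeling the line as a ladder of small lumped inductors and capacitors. The Python sketch below is a rough illustration under stated assumptions — the section values, section count, and time step are all invented, chosen so that √(L/C) = 50 Ω — not a rigorous field simulation.

N = 200              # number of lumped LC sections (assumed)
L = 250e-9           # series inductance per section, henries (assumed)
C = 100e-12          # shunt capacitance per section, farads (assumed)
V_SRC = 1.0          # ideal step source applied at t = 0
DT = 1e-10           # time step, seconds (well under sqrt(L*C))

v = [0.0] * N        # capacitor (node) voltages along the line
i = [0.0] * (N + 1)  # inductor (branch) currents between nodes

for _ in range(5000):                  # about 500 ns: wavefront still in flight
    i[0] += DT / L * (V_SRC - v[0])    # current into the line from the source
    for k in range(1, N):
        i[k] += DT / L * (v[k - 1] - v[k])
    # i[N] stays zero: nothing is connected at the far end.
    for k in range(N):
        v[k] += DT / C * (i[k] - i[k + 1])

# While the wavefront travels, the source sees a steady current near
# V_SRC / Z0: the open-ended line behaves like a 50-ohm resistor.
z0 = (L / C) ** 0.5
print("source current: %.4f A (V/Z0 = %.4f A)" % (i[0], V_SRC / z0))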
As a constant load, the transmission line’s response to applied voltage is resistive rather than reactive, despite being composed purely of inductance and capacitance (assuming superconducting wires with zero resistance). We can say this because there is no difference from the battery’s perspective between a resistor eternally dissipating energy and an infinite transmission line eternally absorbing energy. The impedance (resistance) of this line in ohms is called the characteristic impedance, and it is fixed by the geometry of the two conductors. For a parallel-wire line with air insulation, the characteristic impedance may be calculated as such: Z0 = 276 log10(d/r), where d is the center-to-center distance between the conductors and r is the conductor radius. If the transmission line is coaxial in construction, the characteristic impedance follows a different equation: Z0 = 138 log10(d1/d2), where d1 is the inside diameter of the outer conductor and d2 is the outside diameter of the inner conductor. In both equations, identical units of measurement must be used in both terms of the fraction. If the insulating material is other than air (or a vacuum), both the characteristic impedance and the propagation velocity will be affected. The ratio of a transmission line’s true propagation velocity and the speed of light in a vacuum is called the velocity factor of that line. Velocity factor is purely a factor of the insulating material’s relative permittivity (otherwise known as its dielectric constant), defined as the ratio of a material’s electric field permittivity to that of a pure vacuum. The velocity factor of any cable type—coaxial or otherwise—may be calculated quite simply by the following formula: velocity factor = 1/√k, where k is the relative permittivity of the insulation. The Natural Impedance Characteristic impedance is also known as natural impedance, and it refers to the equivalent resistance of a transmission line if it were infinitely long, owing to distributed capacitance and inductance as the voltage and current “waves” propagate along its length at a propagation velocity equal to some large fraction of light speed. It can be seen in either of the first two equations that a transmission line’s characteristic impedance (Z0) increases as the conductor spacing increases. If the conductors are moved away from each other, the distributed capacitance will decrease (greater spacing between capacitor “plates”), and the distributed inductance will increase (less cancellation of the two opposing magnetic fields). Less parallel capacitance and more series inductance results in a smaller current drawn by the line for any given amount of applied voltage, which by definition is a greater impedance. Conversely, bringing the two conductors closer together increases the parallel capacitance and decreases the series inductance. Both changes result in a larger current drawn for a given applied voltage, equating to a lesser impedance. Barring any dissipative effects such as dielectric “leakage” and conductor resistance, the characteristic impedance of a transmission line is equal to the square root of the ratio of the line’s inductance per unit length divided by the line’s capacitance per unit length: Z0 = √(L/C). Review • A transmission line is a pair of parallel conductors exhibiting certain characteristics due to distributed capacitance and inductance along its length. • When a voltage is suddenly applied to one end of a transmission line, both a voltage “wave” and a current “wave” propagate along the line at nearly light speed. • If a DC voltage is applied to one end of an infinitely long transmission line, the line will draw current from the DC source as though it were a constant resistance. • The characteristic impedance (Z0) of a transmission line is the resistance it would exhibit if it were infinite in length.
This is entirely different from leakage resistance of the dielectric separating the two conductors, and the metallic resistance of the wires themselves. Characteristic impedance is purely a function of the capacitance and inductance distributed along the line’s length, and would exist even if the dielectric were perfect (infinite parallel resistance) and the wires superconducting (zero series resistance). • Velocity factor is a fractional value relating a transmission line’s propagation speed to the speed of light in a vacuum. Values range between 0.66 and 0.80 for typical two-wire lines and coaxial cables. For any cable type, it is equal to the reciprocal (1/x) of the square root of the relative permittivity of the cable’s insulation.
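To make the review concrete, here is a short Python check of the Z0 = √(L/C) and velocity-factor relationships. The per-meter inductance and capacitance and the permittivity figure are assumed, typical-order values for a polyethylene-insulated 50 Ω coax, not data for any particular cable.

import math

L_per_m = 250e-9    # series inductance per meter, henries (assumed)
C_per_m = 100e-12   # shunt capacitance per meter, farads (assumed)
print("Z0 = sqrt(L/C) = %.1f ohms" % math.sqrt(L_per_m / C_per_m))  # 50.0

k = 2.3             # relative permittivity of the dielectric (assumed)
print("velocity factor = 1/sqrt(k) = %.2f" % (1 / math.sqrt(k)))    # ~0.66

# Air-insulated parallel-wire line, Z0 = 276 log10(d/r):
d, r = 0.30, 0.001  # conductor spacing and radius in meters (assumed)
print("parallel-wire Z0 = %.0f ohms" % (276 * math.log10(d / r)))   # ~684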
A transmission line of infinite length is an interesting abstraction, but physically impossible. All transmission lines have some finite length, and as such do not behave precisely the same as an infinite line. If that piece of 50 Ω “RG-58/U” cable I measured with an ohmmeter years ago had been infinitely long, I actually would have been able to measure 50 Ω worth of resistance between the inner and outer conductors. But it was not infinite in length, and so it measured as “open” (infinite resistance). Nonetheless, the characteristic impedance rating of a transmission line is important even when dealing with limited lengths. An older term for characteristic impedance, which I like for its descriptive value, is surge impedance. If a transient voltage (a “surge”) is applied to the end of a transmission line, the line will draw a current proportional to the surge voltage magnitude divided by the line’s surge impedance (I=E/Z). This simple, Ohm’s Law relationship between current and voltage will hold true for a limited period of time, but not indefinitely. If the end of a transmission line is open-circuited—that is, left unconnected—the current “wave” propagating down the line’s length will have to stop at the end, since electrons cannot flow where there is no continuing path. This abrupt cessation of current at the line’s end causes a “pile-up” to occur along the length of the transmission line, as the electrons successively find no place to go. Imagine a train traveling down the track with slack between the rail car couplings: if the lead car suddenly crashes into an immovable barricade, it will come to a stop, causing the one behind it to come to a stop as soon as the first coupling slack is taken up, which causes the next rail car to stop as soon as the next coupling’s slack is taken up, and so on until the last rail car stops. The train does not come to a halt together, but rather in sequence from first car to last: (Figure below) Reflected wave. A signal propagating from the source-end of a transmission line to the load-end is called an incident wave. The propagation of a signal from load-end to source-end (such as what happened in this example with current encountering the end of an open-circuited transmission line) is called a reflected wave. When this electron “pile-up” propagates back to the battery, current at the battery ceases, and the line acts as a simple open circuit. All this happens very quickly for transmission lines of reasonable length, and so an ohmmeter measurement of the line never reveals the brief time period where the line actually behaves as a resistor. For a mile-long cable with a velocity factor of 0.66 (signal propagation velocity is 66% of light speed, or 122,760 miles per second), it takes only 1/122,760 of a second (8.146 microseconds) for a signal to travel from one end to the other. For the current signal to reach the line’s end and “reflect” back to the source, the round-trip time is twice this figure, or 16.292 µs. High-speed measurement instruments are able to detect this transit time from source to line-end and back to source again, and may be used for the purpose of determining a cable’s length. This technique may also be used for determining the presence and location of a break in one or both of the cable’s conductors, since a current will “reflect” off the wire break just as it will off the end of an open-circuited cable. Instruments designed for such purposes are called time-domain reflectometers (TDRs). 
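The transit-time arithmetic above is simple enough to check directly. A small Python sketch, using the same mile-long, 0.66-velocity-factor cable (the variable names are just for illustration):

C_MILES_PER_S = 186_000          # speed of light, miles per second
vf = 0.66
velocity = vf * C_MILES_PER_S    # 122,760 miles per second
length_miles = 1.0

one_way = length_miles / velocity
print("one-way transit: %.3f us" % (one_way * 1e6))       # ~8.146 us
print("round trip:      %.3f us" % (2 * one_way * 1e6))   # ~16.292 us

# A TDR works the arithmetic backward: measure the echo delay,
# then solve for the distance to the reflection (cable end or break).
echo_s = 16.292e-6
print("distance to fault: %.3f miles" % (echo_s * velocity / 2))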
The basic principle is identical to that of sonar range-finding: generating a sound pulse and measuring the time it takes for the echo to return. A similar phenomenon takes place if the end of a transmission line is short-circuited: when the voltage wave-front reaches the end of the line, it is reflected back to the source, because voltage cannot exist between two electrically common points. When this reflected wave reaches the source, the source sees the entire transmission line as a short-circuit. Again, this happens as quickly as the signal can propagate round-trip down and up the transmission line at whatever velocity allowed by the dielectric material between the line’s conductors. A simple experiment illustrates the phenomenon of wave reflection in transmission lines. Take a length of rope by one end and “whip” it with a rapid up-and-down motion of the wrist. A wave may be seen traveling down the rope’s length until it dissipates entirely due to friction: (Figure below) Lossy transmission line. This is analogous to a long transmission line with internal loss: the signal steadily grows weaker as it propagates down the line’s length, never reflecting back to the source. However, if the far end of the rope is secured to a solid object at a point prior to the incident wave’s total dissipation, a second wave will be reflected back to your hand: (Figure below) Reflected wave. Usually, the purpose of a transmission line is to convey electrical energy from one point to another. Even if the signals are intended for information only, and not to power some significant load device, the ideal situation would be for all of the original signal energy to travel from the source to the load, and then be completely absorbed or dissipated by the load for maximum signal-to-noise ratio. Thus, “loss” along the length of a transmission line is undesirable, as are reflected waves, since reflected energy is energy not delivered to the end device. Reflections may be eliminated from the transmission line if the load’s impedance exactly equals the characteristic (“surge”) impedance of the line. For example, a 50 Ω coaxial cable that is either open-circuited or short-circuited will reflect all of the incident energy back to the source. However, if a 50 Ω resistor is connected at the end of the cable, there will be no reflected energy, all signal energy being dissipated by the resistor. This makes perfect sense if we return to our hypothetical, infinite-length transmission line example. A transmission line of 50 Ω characteristic impedance and infinite length behaves exactly like a 50 Ω resistance as measured from one end. (Figure below) If we cut this line to some finite length, it will behave as a 50 Ω resistor to a constant source of DC voltage for a brief time, but then behave like an open- or a short-circuit, depending on what condition we leave the cut end of the line: open (Figure below) or shorted. (Figure below) However, if we terminate the line with a 50 Ω resistor, the line will once again behave as a 50 Ω resistor, indefinitely: the same as if it were of infinite length again: (Figure below) Infinite transmission line looks like resistor. One mile transmission. Shorted transmission line. Line terminated in characteristic impedance. 
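How strongly a given termination reflects can be put on a number line. The standard voltage reflection-coefficient formula, Γ = (Zload − Z0)/(Zload + Z0), is not derived in this chapter, but a short sketch shows the open, shorted, and matched cases as the extremes:

def reflection_coefficient(z_load, z0):
    """Voltage reflection coefficient at the termination."""
    return (z_load - z0) / (z_load + z0)

z0 = 50.0
cases = [("open", float("inf")), ("short", 0.0),
         ("matched", 50.0), ("mismatched", 100.0)]
for label, z_load in cases:
    gamma = 1.0 if z_load == float("inf") else reflection_coefficient(z_load, z0)
    print("%10s: Gamma = %+.3f" % (label, gamma))
# +1 and -1 mean total reflection, 0 means none; values in between,
# such as +0.333 for 100 ohms on a 50-ohm line, mean partial reflection.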
In essence, a terminating resistor matching the natural impedance of the transmission line makes the line “appear” infinitely long from the perspective of the source, because a resistor has the ability to eternally dissipate energy in the same way a transmission line of infinite length is able to eternally absorb energy. Reflected waves will also manifest if the terminating resistance isn’t precisely equal to the characteristic impedance of the transmission line, not just if the line is left unconnected (open) or jumpered (shorted). With a slightly mismatched terminating impedance, the energy reflection will be partial rather than total. This happens whether the terminating resistance is greater or less than the line’s characteristic impedance. Re-reflections of a reflected wave may also occur at the source end of a transmission line, if the source’s internal impedance (Thevenin equivalent impedance) is not exactly equal to the line’s characteristic impedance. A reflected wave returning back to the source will be dissipated entirely if the source impedance matches the line’s, but will be reflected back toward the line end like another incident wave, at least partially, if the source impedance does not match the line. This type of reflection may be particularly troublesome, as it makes it appear that the source has transmitted another pulse. Review • Characteristic impedance is also known as surge impedance, due to the temporarily resistive behavior of any transmission line, regardless of its length. • A finite-length transmission line will appear to a DC voltage source as a constant resistance for some short time, then as whatever impedance the line is terminated with. Therefore, an open-ended cable simply reads “open” when measured with an ohmmeter, and “shorted” when its end is short-circuited. • A transient (“surge”) signal applied to one end of an open-ended or short-circuited transmission line will “reflect” off the far end of the line as a secondary wave. A signal traveling on a transmission line from source to load is called an incident wave; a signal “bounced” off the end of a transmission line, traveling from load to source, is called a reflected wave. • Reflected waves will also appear in transmission lines terminated by resistors not precisely matching the characteristic impedance. • A finite-length transmission line may be made to appear infinite in length if terminated by a resistor of equal value to the line’s characteristic impedance. This eliminates all signal reflections. • A reflected wave may become re-reflected off the source-end of a transmission line if the source’s internal impedance does not match the line’s characteristic impedance. This re-reflected wave will appear, of course, like another pulse signal transmitted from the source.
In DC and low-frequency AC circuits, the characteristic impedance of parallel wires is usually ignored. This includes the use of coaxial cables in instrument circuits, often employed to protect weak voltage signals from being corrupted by induced “noise” caused by stray electric and magnetic fields. This is due to the relatively short timespans in which reflections take place in the line, as compared to the period of the waveforms or pulses of the significant signals in the circuit. As we saw in the last section, if a transmission line is connected to a DC voltage source, it will behave as a resistor equal in value to the line’s characteristic impedance only for as long as it takes the incident pulse to reach the end of the line and return as a reflected pulse, back to the source. After that time (a brief 16.292 µs for the mile-long coaxial cable of the last example), the source “sees” only the terminating impedance, whatever that may be. If the circuit in question handles low-frequency AC power, such short time delays introduced by a transmission line between when the AC source outputs a voltage peak and when the source “sees” that peak loaded by the terminating impedance (round-trip time for the incident wave to reach the line’s end and reflect back to the source) are of little consequence. Even though we know that signal magnitudes along the line’s length are not equal at any given time due to signal propagation at (nearly) the speed of light, the actual phase difference between start-of-line and end-of-line signals is negligible, because line-length propagations occur within a very small fraction of the AC waveform’s period. For all practical purposes, we can say that voltage along all respective points on a low-frequency, two-conductor line are equal and in-phase with each other at any given point in time. In these cases, we can say that the transmission lines in question are electrically short, because their propagation effects are much quicker than the periods of the conducted signals. By contrast, an electrically long line is one where the propagation time is a large fraction or even a multiple of the signal period. A “long” line is generally considered to be one where the source’s signal waveform completes at least a quarter-cycle (90o of “rotation”) before the incident signal reaches line’s end. Up until this chapter in the Lessons In Electric Circuits book series, all connecting lines were assumed to be electrically short. To put this into perspective, we need to express the distance traveled by a voltage or current signal along a transmission line in relation to its source frequency. An AC waveform with a frequency of 60 Hz completes one cycle in 16.66 ms. At light speed (186,000 mile/s), this equates to a distance of 3100 miles that a voltage or current signal will propagate in that time. If the velocity factor of the transmission line is less than 1, the propagation velocity will be less than 186,000 miles per second, and the distance less by the same factor. But even if we used the coaxial cable’s velocity factor from the last example (0.66), the distance is still a very long 2046 miles! Whatever distance we calculate for a given frequency is called the wavelength of the signal. A simple formula for calculating wavelength is as follows: λ = v/f. The lower-case Greek letter “lambda” (λ) represents wavelength, in whatever unit of length used in the velocity figure (if miles per second, then wavelength in miles; if meters per second, then wavelength in meters); “v” is the propagation velocity, and “f” is the signal frequency.
Velocity of propagation is usually the speed of light when calculating signal wavelength in open air or in a vacuum, but will be less if the transmission line has a velocity factor less than 1. If a “long” line is considered to be one at least 1/4 wavelength in length, you can see why all connecting lines in the circuits discussed thus far have been assumed “short.” For a 60 Hz AC power system, power lines would have to exceed 775 miles in length before the effects of propagation time became significant. Cables connecting an audio amplifier to speakers would have to be over 4.65 miles in length before line reflections would significantly impact a 10 kHz audio signal! When dealing with radio-frequency systems, though, transmission line length is far from trivial. Consider a 100 MHz radio signal: its wavelength is a mere 9.8208 feet, even at the full propagation velocity of light (186,000 mile/s). A transmission line carrying this signal would not have to be more than about 2-1/2 feet in length to be considered “long!” With a cable velocity factor of 0.66, this critical length shrinks to 1.62 feet. When an electrical source is connected to a load via a “short” transmission line, the load’s impedance dominates the circuit. This is to say, when the line is short, its own characteristic impedance is of little consequence to the circuit’s behavior. We see this when testing a coaxial cable with an ohmmeter: the cable reads “open” from center conductor to outer conductor if the cable end is left unterminated. Though the line acts as a resistor for a very brief period of time after the meter is connected (about 50 Ω for an RG-58/U cable), it immediately thereafter behaves as a simple “open circuit:” the impedance of the line’s open end. Since the combined response time of an ohmmeter and the human being using it greatly exceeds the round-trip propagation time up and down the cable, it is “electrically short” for this application, and we only register the terminating (load) impedance. It is the extreme speed of the propagated signal that makes us unable to detect the cable’s 50 Ω transient impedance with an ohmmeter. If we use a coaxial cable to conduct a DC voltage or current to a load, and no component in the circuit is capable of measuring or responding quickly enough to “notice” a reflected wave, the cable is considered “electrically short” and its impedance is irrelevant to circuit function. Note how the electrical “shortness” of a cable is relative to the application: in a DC circuit where voltage and current values change slowly, nearly any physical length of cable would be considered “short” from the standpoint of characteristic impedance and reflected waves. Taking the same length of cable, though, and using it to conduct a high-frequency AC signal could result in a vastly different assessment of that cable’s “shortness!” When a source is connected to a load via a “long” transmission line, the line’s own characteristic impedance dominates over load impedance in determining circuit behavior. In other words, an electrically “long” line acts as the principal component in the circuit, its own characteristics overshadowing the load’s. With a source connected to one end of the cable and a load to the other, current drawn from the source is a function primarily of the line and not the load. This is increasingly true the longer the transmission line is.
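The quarter-wavelength rule of thumb is easy to tabulate. Here is a brief Python check of the figures quoted above, using 186,000 miles per second throughout, as the text does:

C_MILES_PER_S = 186_000            # propagation velocity in free space
FT_PER_MILE = 5_280

def wavelength_miles(freq_hz, velocity_factor=1.0):
    return velocity_factor * C_MILES_PER_S / freq_hz

for freq, vf, label in [(60.0, 1.0, "60 Hz power"),
                        (10e3, 1.0, "10 kHz audio"),
                        (100e6, 1.0, "100 MHz radio"),
                        (100e6, 0.66, "100 MHz in 0.66-vf coax")]:
    lam = wavelength_miles(freq, vf)
    quarter_ft = lam / 4 * FT_PER_MILE
    print(f"{label:>24}: lambda = {lam:12.6f} mi, 'long' beyond {quarter_ft:,.2f} ft")

The printed quarter-wavelength thresholds reproduce the 775 mile, 4.65 mile, roughly 2.46 foot, and 1.62 foot figures from the paragraphs above.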
Consider our hypothetical 50 Ω cable of infinite length, surely the ultimate example of a “long” transmission line: no matter what kind of load we connect to one end of this line, the source (connected to the other end) will only see 50 Ω of impedance, because the line’s infinite length prevents the signal from ever reaching the end where the load is connected. In this scenario, line impedance exclusively defines circuit behavior, rendering the load completely irrelevant. The most effective way to minimize the impact of transmission line length on circuit behavior is to match the line’s characteristic impedance to the load impedance. If the load impedance is equal to the line impedance, then any signal source connected to the other end of the line will “see” the exact same impedance, and will have the exact same amount of current drawn from it, regardless of line length. In this condition of perfect impedance matching, line length only affects the amount of time delay from signal departure at the source to signal arrival at the load. However, perfect matching of line and load impedances is not always practical or possible. The next section discusses the effects of “long” transmission lines, especially when line length happens to match specific fractions or multiples of signal wavelength. Review • Coaxial cabling is sometimes used in DC and low-frequency AC circuits as well as in high-frequency circuits, for the excellent immunity to induced “noise” that it provides for signals. • When the period of a transmitted voltage or current signal greatly exceeds the propagation time for a transmission line, the line is considered electrically short. Conversely, when the propagation time is a large fraction or multiple of the signal’s period, the line is considered electrically long. • A signal’s wavelength is the physical distance it will propagate in the timespan of one period. Wavelength is calculated by the formula λ=v/f, where “λ” is the wavelength, “v” is the propagation velocity, and “f” is the signal frequency. • A rule-of-thumb for transmission line “shortness” is that the line must be at least 1/4 wavelength before it is considered “long.” • In a circuit with a “short” line, the terminating (load) impedance dominates circuit behavior. The source effectively sees nothing but the load’s impedance, barring any resistive losses in the transmission line. • In a circuit with a “long” line, the line’s own characteristic impedance dominates circuit behavior. The ultimate example of this is a transmission line of infinite length: since the signal will never reach the load impedance, the source only “sees” the cable’s characteristic impedance. • When a transmission line is terminated by a load precisely matching its impedance, there are no reflected waves and thus no problems with line length.
Whenever there is a mismatch of impedance between transmission line and load, reflections will occur. If the incident signal is a continuous AC waveform, these reflections will mix with more of the oncoming incident waveform to produce stationary waveforms called standing waves. The following illustration shows how a triangle-shaped incident waveform turns into a mirror-image reflection upon reaching the line’s unterminated end. The transmission line in this illustrative sequence is shown as a single, thick line rather than a pair of wires, for simplicity’s sake. The incident wave is shown traveling from left to right, while the reflected wave travels from right to left: (Figure below) Incident wave reflects off end of unterminated transmission line. If we add the two waveforms together, we find that a third, stationary waveform is created along the line’s length: (Figure below) The sum of the incident and reflected waves is a stationary wave. This third, “standing” wave, in fact, represents the only voltage along the line, being the representative sum of incident and reflected voltage waves. It oscillates in instantaneous magnitude, but does not propagate down the cable’s length like the incident or reflected waveforms causing it. Note the dots along the line length marking the “zero” points of the standing wave (where the incident and reflected waves cancel each other), and how those points never change position: (Figure below) The standing wave does not propagate along the transmission line. Standing waves are quite abundant in the physical world. Consider a string or rope, shaken at one end, and tied down at the other (only one half-cycle of hand motion shown, moving downward): (Figure below) Standing waves on a rope. Both the nodes (points of little or no vibration) and the antinodes (points of maximum vibration) remain fixed along the length of the string or rope. The effect is most pronounced when the free end is shaken at just the right frequency. Plucked strings exhibit the same “standing wave” behavior, with “nodes” (points of minimum vibration) and “antinodes” (points of maximum vibration) along their length. The major difference between a plucked string and a shaken string is that the plucked string supplies its own “correct” frequency of vibration to maximize the standing-wave effect: (Figure below) Standing waves on a plucked string. Wind blowing across an open-ended tube also produces standing waves; this time, the waves are vibrations of air molecules (sound) within the tube rather than vibrations of a solid object. Whether the standing wave terminates in a node (minimum amplitude) or an antinode (maximum amplitude) depends on whether the other end of the tube is open or closed: (Figure below) Standing sound waves in open ended tubes. A closed tube end must be a wave node, while an open tube end must be an antinode. By analogy, the anchored end of a vibrating string must be a node, while the free end (if there is any) must be an antinode. Note how there is more than one wavelength suitable for producing standing waves of vibrating air within a tube that precisely match the tube’s end points. This is true for all standing-wave systems: standing waves will resonate with the system for any frequency (wavelength) correlating to the node/antinode points of the system. Another way of saying this is that there are multiple resonant frequencies for any system supporting standing waves. All higher frequencies are integer-multiples of the lowest (fundamental) frequency for the system.
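The way an incident wave and its reflection add to a stationary pattern can be demonstrated in a few lines of Python. This sketch assumes total, equal-amplitude reflection (as from an open line end) and simply tabulates the sum at several instants; note how the zeros stay at the same positions from row to row.

import math

WAVELENGTH = 1.0
positions = [n * WAVELENGTH / 8 for n in range(9)]   # 0 to one wavelength

for t in (0.0, 0.125, 0.25, 0.375):                  # fractions of one period
    total = []
    for x in positions:
        incident = math.sin(2 * math.pi * (t - x / WAVELENGTH))   # travels +x
        reflected = math.sin(2 * math.pi * (t + x / WAVELENGTH))  # travels -x
        total.append(incident + reflected)
    print("t = %.3f: " % t + "  ".join("%+5.2f" % v for v in total))

# The sum works out to 2 sin(2*pi*t) cos(2*pi*x/lambda): its amplitude swings
# with time, but the nodes sit permanently at x = lambda/4 and 3*lambda/4.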
The sequential progression of harmonics from one resonant frequency to the next defines the overtone frequencies for the system: (Figure below) Harmonics (overtones) in open ended pipes. The actual frequencies (measured in Hertz) for any of these harmonics or overtones depend on the physical length of the tube and the waves’ propagation velocity, which is the speed of sound in air. Because transmission lines support standing waves, and force these waves to possess nodes and antinodes according to the type of termination impedance at the load end, they also exhibit resonance at frequencies determined by physical length and propagation velocity. Transmission line resonance, though, is a bit more complex than resonance of strings or of air in tubes, because we must consider both voltage waves and current waves. This complexity is made easier to understand by way of computer simulation. To begin, let’s examine a perfectly matched source, transmission line, and load. All components have an impedance of 75 Ω: (Figure below) Perfectly matched transmission line. Using SPICE to simulate the circuit, we’ll specify the transmission line (t1) with a 75 Ω characteristic impedance (z0=75) and a propagation delay of 1 microsecond (td=1u). This is a convenient method for expressing the physical length of a transmission line: the amount of time it takes a wave to propagate down its entire length. If this were a real 75 Ω cable—perhaps a type “RG-59B/U” coaxial cable, the type commonly used for cable television distribution—with a velocity factor of 0.66, it would be about 648 feet long. Since 1 µs is the period of a 1 MHz signal, I’ll choose to sweep the frequency of the AC source from (nearly) zero to that figure, to see how the system reacts when exposed to signals ranging from DC to 1 wavelength. Here is a SPICE netlist matching the circuit described (1 volt AC source, 75 Ω source and load resistances, line swept from 1 mHz to 1 MHz):

Transmission line
v1 1 0 ac 1 sin
rsource 1 2 75
t1 2 0 3 0 z0=75 td=1u
rload 3 0 75
.ac lin 101 1m 1meg
.plot ac v(1,2) v(1) v(2) v(3)
.end

Running this simulation and plotting the source impedance drop (as an indication of current), the source voltage, the line’s source-end voltage, and the load voltage, we see that the source voltage—shown as vm(1) (voltage magnitude between node 1 and the implied ground point of node 0) on the graphic plot—registers a steady 1 volt, while every other voltage registers a steady 0.5 volts: (Figure below) No resonances on a matched transmission line. In a system where all impedances are perfectly matched, there can be no standing waves, and therefore no resonant “peaks” or “valleys” in the Bode plot. Now, let’s change the load impedance to 999 MΩ, to simulate an open-ended transmission line. (Figure below) We should definitely see some reflections on the line now as the frequency is swept from 1 mHz to 1 MHz: (Figure below) Open ended transmission line. Resonances on open transmission line. Here, both the supply voltage vm(1) and the line’s load-end voltage vm(3) remain steady at 1 volt. The other voltages dip and peak at different frequencies along the sweep range of 1 mHz to 1 MHz. There are five points of interest along the horizontal axis of the analysis: 0 Hz, 250 kHz, 500 kHz, 750 kHz, and 1 MHz. We will investigate each one with regard to voltage and current at different points of the circuit. At 0 Hz (actually 1 mHz), the signal is practically DC, and the circuit behaves much as it would given a 1-volt DC battery source.
There is no circuit current, as indicated by zero voltage drop across the source impedance (Zsource: vm(1,2)), and full source voltage present at the source-end of the transmission line (voltage measured between node 2 and node 0: vm(2)). (Figure below) At f=0: input: V=1, I=0; end: V=1, I=0. At 250 kHz, we see zero voltage and maximum current at the source-end of the transmission line, yet still full voltage at the load-end: (Figure below) At f=250 KHz: input: V=0, I=13.33 mA; end: V=1, I=0. You might be wondering, how can this be? How can we get full source voltage at the line’s open end while there is zero voltage at its entrance? The answer is found in the paradox of the standing wave. With a source frequency of 250 kHz, the line’s length is precisely right for 1/4 wavelength to fit from end to end. With the line’s load end open-circuited, there can be no current, but there will be voltage. Therefore, the load-end of an open-circuited transmission line is a current node (zero point) and a voltage antinode (maximum amplitude): (Figure below) Open end of transmission line shows current node, voltage antinode at open end. At 500 kHz, exactly one-half of a standing wave rests on the transmission line, and here we see another point in the analysis where the source current drops off to nothing and the source-end voltage of the transmission line rises again to full voltage: (Figure below) Full standing wave on half wave open transmission line. At 750 kHz, the plot looks a lot like it was at 250 kHz: zero source-end voltage (vm(2)) and maximum current (vm(1,2)). This is due to 3/4 of a wave poised along the transmission line, resulting in the source “seeing” a short-circuit where it connects to the transmission line, even though the other end of the line is open-circuited: (Figure below) 1 1/2 standing waves on 3/4 wave open transmission line. When the supply frequency sweeps up to 1 MHz, a full standing wave exists on the transmission line. At this point, the source-end of the line experiences the same voltage and current amplitudes as the load-end: full voltage and zero current. In essence, the source “sees” an open circuit at the point where it connects to the transmission line. (Figure below) Double standing waves on full wave open transmission line. In a similar fashion, a short-circuited transmission line generates standing waves, although the node and antinode assignments for voltage and current are reversed: at the shorted end of the line, there will be zero voltage (node) and maximum current (antinode). What follows is the SPICE simulation (circuit Figure below) and illustrations of what happens at the resonant frequencies (Figure 2nd-below): 0 Hz (Figure below), 250 kHz (Figure below), 500 kHz (Figure below), 750 kHz (Figure below), and 1 MHz (Figure below). The short-circuit jumper is simulated by a 1 µΩ load impedance: (Figure below) Shorted transmission line. Resonances on shorted transmission line. At f=0 Hz: input: V=0, I=13.33 mA; end: V=0, I=13.33 mA. Half wave standing wave pattern on 1/4 wave shorted transmission line. Full wave standing wave pattern on half wave shorted transmission line. 1 1/2 standing wave pattern on 3/4 wave shorted transmission line. Double standing waves on full wave shorted transmission line. In both these circuit examples, an open-circuited line and a short-circuited line, the energy reflection is total: 100% of the incident wave reaching the line’s end gets reflected back toward the source.
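In both sweeps, the resonant frequencies follow directly from the line's one-way delay. A quick Python restatement of that arithmetic, where td is the 1 µs propagation delay from the simulations above:

td = 1e-6   # one-way propagation delay of the simulated line, seconds

# Source end of an open line "looks shorted" when an odd number of
# quarter waves fits on the line:
odd_quarter = [(2 * n + 1) / (4 * td) for n in (0, 1)]

# ...and "looks open" when a whole number of half waves fits:
whole_half = [n / (2 * td) for n in (1, 2)]

print("quarter-wave points:", odd_quarter)  # [250000.0, 750000.0] Hz
print("half-wave points:   ", whole_half)   # [500000.0, 1000000.0] Hz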
If, however, the transmission line is terminated in some impedance other than an open or a short, the reflections will be less intense, as will be the difference between minimum and maximum values of voltage and current along the line. Suppose we were to terminate our example line with a 100 Ω resistor instead of a 75 Ω resistor. (Figure below) Examine the results of the corresponding SPICE analysis to see the effects of impedance mismatch at different source frequencies: (Figure below) Transmission line terminated in a mismatch. Weak resonances on a mismatched transmission line. If we run another SPICE analysis, this time printing numerical results rather than plotting them, we can discover exactly what is happening at all the interesting frequencies: (DC, Figure below; 250 kHz, Figure below; 500 kHz, Figure below; 750 kHz, Figure below; and 1 MHz, Figure below). At all frequencies, the source voltage, v(1), remains steady at 1 volt, as it should. The load voltage, v(3), also remains steady, but at a lesser voltage: 0.5714 volts. However, both the line input voltage (v(2)) and the voltage dropped across the source’s 75 Ω impedance (v(1,2), indicating current drawn from the source) vary with frequency. At f=0 Hz: input: V=0.5714, I=5.715 mA; end: V=0.5714, I=5.715 mA. At f=250 KHz: input: V=0.4286, I=7.619 mA; end: V=0.5714, I=5.715 mA. At f=500 KHz: input: V=0.5714, I=5.715 mA; end: V=0.5714, I=5.715 mA. At f=750 KHz: input: V=0.4286, I=7.619 mA; end: V=0.5714, I=5.715 mA. At f=1 MHz: input: V=0.5714, I=5.715 mA; end: V=0.5714, I=5.715 mA. At odd harmonics of the fundamental frequency (250 kHz, Figure 3rd-above and 750 kHz, Figure above) we see differing levels of voltage at each end of the transmission line, because at those frequencies the standing waves terminate at one end in a node and at the other end in an antinode. Unlike the open-circuited and short-circuited transmission line examples, the maximum and minimum voltage levels along this transmission line do not reach the same extreme values of 0% and 100% source voltage, but we still have points of “minimum” and “maximum” voltage. (Figure 6th-above) The same holds true for current: if the line’s terminating impedance is mismatched to the line’s characteristic impedance, we will have points of minimum and maximum current at certain fixed locations on the line, corresponding to the standing current wave’s nodes and antinodes, respectively. One way of expressing the severity of standing waves is as a ratio of maximum amplitude (antinode) to minimum amplitude (node), for voltage or for current. When a line is terminated by an open or a short, this standing wave ratio, or SWR, is valued at infinity, since the minimum amplitude will be zero, and any finite value divided by zero results in an infinite (actually, “undefined”) quotient. In this example, with a 75 Ω line terminated by a 100 Ω impedance, the SWR will be finite: 1.333, calculated by taking the maximum line voltage at either 250 kHz or 750 kHz (0.5714 volts) and dividing by the minimum line voltage (0.4286 volts). Standing wave ratio may also be calculated by taking the line’s terminating impedance and the line’s characteristic impedance, and dividing the larger of the two values by the smaller. In this example, the terminating impedance of 100 Ω divided by the characteristic impedance of 75 Ω yields a quotient of exactly 1.333, matching the previous calculation very closely.
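The SWR arithmetic just described, restated in Python for the 75 Ω line with its 100 Ω load:

v_max, v_min = 0.5714, 0.4286   # line voltages from the sweep above
print("SWR from voltages:   %.3f" % (v_max / v_min))                     # ~1.333

z0, z_load = 75.0, 100.0
print("SWR from impedances: %.3f" % (max(z_load, z0) / min(z_load, z0))) # 1.333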
A perfectly terminated transmission line will have an SWR of 1, since voltage at any location along the line’s length will be the same, and likewise for current. Again, this is usually considered ideal, not only because reflected waves constitute energy not delivered to the load, but because the high values of voltage and current created by the antinodes of standing waves may over-stress the transmission line’s insulation (high voltage) and conductors (high current), respectively. Also, a transmission line with a high SWR tends to act as an antenna, radiating electromagnetic energy away from the line, rather than channeling all of it to the load. This is usually undesirable, as the radiated energy may “couple” with nearby conductors, producing signal interference. An interesting footnote to this point is that antenna structures—which typically resemble open- or short-circuited transmission lines—are often designed to operate at high standing wave ratios, for the very reason of maximizing signal radiation and reception. The following photograph (Figure below) shows a set of transmission lines at a junction point in a radio transmitter system. The large, copper tubes with ceramic insulator caps at the ends are rigid coaxial transmission lines of 50 Ω characteristic impedance. These lines carry RF power from the radio transmitter circuit to a small, wooden shelter at the base of an antenna structure, and from that shelter on to other shelters with other antenna structures: Flexible coaxial cables connected to rigid lines. Flexible coaxial cable connected to the rigid lines (also of 50 Ω characteristic impedance) conduct the RF power to capacitive and inductive “phasing” networks inside the shelter. The white, plastic tube joining two of the rigid lines together carries “filling” gas from one sealed line to the other. The lines are gas-filled to avoid collecting moisture inside them, which would be a definite problem for a coaxial line. Note the flat, copper “straps” used as jumper wires to connect the conductors of the flexible coaxial cables to the conductors of the rigid lines. Why flat straps of copper and not round wires? Because of the skin effect, which renders most of the cross-sectional area of a round conductor useless at radio frequencies. Like many transmission lines, these are operated at low SWR conditions. As we will see in the next section, though, the phenomenon of standing waves in transmission lines is not always undesirable, as it may be exploited to perform a useful function: impedance transformation. Review • Standing waves are waves of voltage and current which do not propagate (i.e. they are stationary), but are the result of interference between incident and reflected waves along a transmission line. • A node is a point on a standing wave of minimum amplitude. • An antinode is a point on a standing wave of maximum amplitude. • Standing waves can only exist in a transmission line when the terminating impedance does not match the line’s characteristic impedance. In a perfectly terminated line, there are no reflected waves, and therefore no standing waves at all. • At certain frequencies, the nodes and antinodes of standing waves will correlate with the ends of a transmission line, resulting in resonance. • The lowest-frequency resonant point on a transmission line is where the line is one quarter-wavelength long. Resonant points exist at every harmonic (integer-multiple) frequency of the fundamental (quarter-wavelength). 
• Standing wave ratio, or SWR, is the ratio of maximum standing wave amplitude to minimum standing wave amplitude. It may also be calculated by dividing termination impedance by characteristic impedance, or vice versa, whichever yields the greatest quotient. A line with no standing waves (perfectly matched: Zload to Z0) has an SWR equal to 1. • Transmission lines may be damaged by the high maximum amplitudes of standing waves. Voltage antinodes may break down insulation between conductors, and current antinodes may overheat conductors.
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_II_-_Alternating_Current_(Kuphaldt)/14%3A_Transmission_Lines/14.06%3A_Standing_Waves_and_Resonance.txt
Standing waves at the resonant frequency points of an open- or short-circuited transmission line produce unusual effects. When the signal frequency is such that exactly 1/2 wave or some multiple thereof matches the line’s length, the source “sees” the load impedance as it is. The following pair of illustrations shows an open-circuited line operating at 1/2 (Figure below) and 1 wavelength (Figure below) frequencies: Source sees open, same as end of half wavelength line. Source sees open, same as end of full wavelength (2x half wavelength) line. In either case, the line has voltage antinodes at both ends, and current nodes at both ends. That is to say, there is maximum voltage and minimum current at either end of the line, which corresponds to the condition of an open circuit. The fact that this condition exists at both ends of the line tells us that the line faithfully reproduces its terminating impedance at the source end, so that the source “sees” an open circuit where it connects to the transmission line, just as if it were directly open-circuited. The same is true if the transmission line is terminated by a short: at signal frequencies corresponding to 1/2 wavelength (Figure below) or some multiple (Figure below) thereof, the source “sees” a short circuit, with minimum voltage and maximum current present at the connection points between source and transmission line: Source sees short, same as end of half wavelength line. Source sees short, same as end of full wavelength line (2x half wavelength). However, if the signal frequency is such that the line resonates at 1/4 wavelength or some multiple thereof, the source will “see” the exact opposite of the termination impedance. That is, if the line is open-circuited, the source will “see” a short-circuit at the point where it connects to the line; and if the line is short-circuited, the source will “see” an open circuit: (Figure below) Line open-circuited; source “sees” a short circuit: at quarter wavelength line (Figure below), at three-quarter wavelength line (Figure below) Source sees short, reflected from open at end of quarter wavelength line. Source sees short, reflected from open at end of three-quarter wavelength line. Line short-circuited; source “sees” an open circuit: at quarter wavelength line (Figure below), at three-quarter wavelength line (Figure below) Source sees open, reflected from short at end of quarter wavelength line. Source sees open, reflected from short at end of three-quarter wavelength line. At these frequencies, the transmission line is actually functioning as an impedance transformer, transforming an infinite impedance into zero impedance, or vice versa. Of course, this only occurs at resonant points resulting in a standing wave of 1/4 cycle (the line’s fundamental, resonant frequency) or some odd multiple (3/4, 5/4, 7/4, 9/4 . . .), but if the signal frequency is known and unchanging, this phenomenon may be used to match otherwise unmatched impedances to each other. Take for instance the example circuit from the last section where a 75 Ω source connects to a 75 Ω transmission line, terminating in a 100 Ω load impedance. From the numerical figures obtained via SPICE, let’s determine what impedance the source “sees” at its end of the transmission line at the line’s resonant frequencies: quarter wavelength (Figure below), half wavelength (Figure below), three-quarter wavelength (Figure below), full wavelength (Figure below) Source sees 56.25 Ω reflected from 100 Ω load at end of quarter wavelength line.
Source sees 100 Ω reflected from 100 Ω load at end of half wavelength line. Source sees 56.25 Ω reflected from 100 Ω load at end of three-quarter wavelength line (same as quarter wavelength). Source sees 100 Ω reflected from 100 Ω load at end of full-wavelength line (same as half-wavelength). A simple equation relates line impedance (Z0), load impedance (Zload), and input impedance (Zinput) for an unmatched transmission line operating at an odd harmonic of its fundamental frequency: Zinput = Z0² / Zload. In our example, Zinput = (75 Ω)² / 100 Ω = 56.25 Ω, matching the SPICE figures. One practical application of this principle would be to match a 300 Ω load to a 75 Ω signal source at a frequency of 50 MHz. All we need to do is calculate the proper transmission line impedance (Z0), and length so that exactly 1/4 of a wave will “stand” on the line at a frequency of 50 MHz. First, calculating the line impedance: taking the 75 Ω we desire the source to “see” at the source-end of the transmission line, and multiplying by the 300 Ω load resistance, we obtain a figure of 22,500. Taking the square root of 22,500 yields 150 Ω for a characteristic line impedance. Now, to calculate the necessary line length: assuming that our cable has a velocity factor of 0.85, and using a speed-of-light figure of 186,000 miles per second, the velocity of propagation will be 158,100 miles per second. Taking this velocity and dividing by the signal frequency gives us a wavelength of 0.003162 miles, or 16.695 feet. Since we only need one-quarter of this length for the cable to support a quarter-wave, the requisite cable length is 4.1738 feet. Here is a schematic diagram for the circuit, showing node numbers for the SPICE analysis we’re about to run: (Figure below) Quarter wave section of 150 Ω transmission line matches 75 Ω source to 300 Ω load. We can specify the cable length in SPICE in terms of time delay from beginning to end. Since the frequency is 50 MHz, the signal period will be the reciprocal of that, or 20 nanoseconds (20 ns). One-quarter of that time (5 ns) will be the time delay of a transmission line one-quarter wavelength long: At a frequency of 50 MHz, our 1-volt signal source drops half of its voltage across the series 75 Ω impedance (v(1,2)) and the other half of its voltage across the input terminals of the transmission line (v(2)). This means the source “thinks” it is powering a 75 Ω load. The actual load impedance, however, receives a full 1 volt, as indicated by the 1.000 figure at v(3). With 0.5 volt dropped across the 75 Ω source impedance, the source is dissipating 3.333 mW of power: the same as dissipated by 1 volt across the 300 Ω load, indicating a perfect match of impedance, according to the Maximum Power Transfer Theorem. The 1/4-wavelength, 150 Ω, transmission line segment has successfully matched the 300 Ω load to the 75 Ω source. Bear in mind, of course, that this only works for 50 MHz and its odd-numbered harmonics. For any other signal frequency to receive the same benefit of matched impedances, the 150 Ω line would have to be lengthened or shortened accordingly so that it was exactly 1/4 wavelength long. Strangely enough, the exact same line can also match a 75 Ω load to a 300 Ω source, demonstrating how this phenomenon of impedance transformation is fundamentally different in principle from that of a conventional, two-winding transformer: Here, we see the 1-volt source voltage equally split between the 300 Ω source impedance (v(1,2)) and the line’s input (v(2)), indicating that the load “appears” as a 300 Ω impedance from the source’s perspective where it connects to the transmission line.
This 0.5 volt drop across the source’s 300 Ω internal impedance yields a power figure of 833.33 µW, the same as the power dissipated by the 0.25 volts across the 75 Ω load (voltage figure v(3)). Once again, the impedance values of source and load have been matched by the transmission line segment. This technique of impedance matching is often used to match the differing impedance values of transmission line and antenna in radio transmitter systems, because the transmitter’s frequency is generally well-known and unchanging. The use of an impedance “transformer” 1/4 wavelength in length provides impedance matching using the shortest conductor length possible. (Figure below) Quarter wave 150 Ω transmission line section matches 75 Ω line to 300 Ω antenna. REVIEW: • A transmission line with standing waves may be used to match different impedance values if operated at the correct frequency(ies). • When operated at a frequency corresponding to a standing wave of 1/4-wavelength along the transmission line, the line’s characteristic impedance necessary for impedance transformation must be equal to the square root of the product of the source’s impedance and the load’s impedance.
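As a worked summary of this section’s example, here is a minimal Python sketch (my own, not from the original text) of the quarter-wave transformer design, using the same velocity factor and speed-of-light figures assumed above:

import math

z_source = 75.0     # ohms, the impedance the source should "see"
z_load   = 300.0    # ohms
f        = 50e6     # hertz
vf       = 0.85     # assumed cable velocity factor, as in the example
c_miles  = 186000.0 # speed of light, miles per second (the figure used in the text)

z0 = math.sqrt(z_source * z_load)           # required line impedance: 150 ohms
wavelength_ft = (c_miles * vf / f) * 5280   # one wavelength: ~16.695 feet
quarter_wave_ft = wavelength_ft / 4         # required cable length: ~4.174 feet
z_input = z0**2 / z_load                    # impedance "seen" by the source: 75 ohms

print(z0, round(quarter_wave_ft, 4), z_input)

The last line confirms the design both ways: the square-root-of-the-product rule fixes Z0 at 150 Ω, and the Zinput = Z0²/Zload relation then transforms the 300 Ω load back into the desired 75 Ω at the source end.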
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_II_-_Alternating_Current_(Kuphaldt)/14%3A_Transmission_Lines/14.07%3A_Impedance_Transformation.txt
A waveguide is a special form of transmission line consisting of a hollow, metal tube. The tube wall provides distributed inductance, while the empty space between the tube walls provides distributed capacitance: (Figure below) Waveguides conduct microwave energy at lower loss than coaxial cables. Waveguides are practical only for signals of extremely high frequency, where the wavelength approaches the cross-sectional dimensions of the waveguide. Below such frequencies, waveguides are useless as electrical transmission lines. When functioning as transmission lines, though, waveguides are considerably simpler than two-conductor cables—especially coaxial cables—in their manufacture and maintenance. With only a single conductor (the waveguide’s “shell”), there are no concerns with proper conductor-to-conductor spacing, or of the consistency of the dielectric material, since the only dielectric in a waveguide is air. Moisture is not as severe a problem in waveguides as it is within coaxial cables, either, and so waveguides are often spared the necessity of gas “filling.” Waveguides may be thought of as conduits for electromagnetic energy, the waveguide itself acting as nothing more than a “director” of the energy rather than as a signal conductor in the normal sense of the word. In a sense, all transmission lines function as conduits of electromagnetic energy when transporting pulses or high-frequency waves, directing the waves as the banks of a river direct a tidal wave. However, because waveguides are single-conductor elements, the propagation of electrical energy down a waveguide is of a very different nature than the propagation of electrical energy down a two-conductor transmission line. All electromagnetic waves consist of electric and magnetic fields propagating in the same direction of travel, but perpendicular to each other. Along the length of a normal transmission line, both electric and magnetic fields are perpendicular (transverse) to the direction of wave travel. This is known as the principal mode, or TEM (Transverse Electric and Magnetic) mode. This mode of wave propagation can exist only where there are two conductors, and it is the dominant mode of wave propagation where the cross-sectional dimensions of the transmission line are small compared to the wavelength of the signal. (Figure below) Twin lead transmission line propagation: TEM mode. At microwave signal frequencies (between 100 MHz and 300 GHz), two-conductor transmission lines of any substantial length operating in standard TEM mode become impractical. Lines small enough in cross-sectional dimension to maintain TEM mode signal propagation for microwave signals tend to have low voltage ratings, and suffer from large, parasitic power losses due to conductor “skin” and dielectric effects. Fortunately, though, at these short wavelengths there exist other modes of propagation that are not as “lossy,” if a conductive tube is used rather than two parallel conductors. It is at these high frequencies that waveguides become practical. When an electromagnetic wave propagates down a hollow tube, only one of the fields—either electric or magnetic—will actually be transverse to the wave’s direction of travel. The other field will “loop” longitudinally to the direction of travel, but still be perpendicular to the other field. Whichever field remains transverse to the direction of travel determines whether the wave propagates in TE mode (Transverse Electric) or TM (Transverse Magnetic) mode.
(Figure below) Waveguide (TE) transverse electric and (TM) transverse magnetic modes. Many variations of each mode exist for a given waveguide, and a full discussion of this is a subject well beyond the scope of this book. Signals are typically introduced to and extracted from waveguides by means of small antenna-like coupling devices inserted into the waveguide. Sometimes these coupling elements take the form of a dipole, which is nothing more than two open-ended stub wires of appropriate length. Other times, the coupler is a single stub (a half-dipole, similar in principle to a “whip” antenna, 1/4λ in physical length), or a short loop of wire terminated on the inside surface of the waveguide: (Figure below) Stub and loop coupling to waveguide. In some cases, such as a class of vacuum tube devices called inductive output tubes (the so-called klystron tube falls into this category), a “cavity” formed of conductive material may intercept electromagnetic energy from a modulated beam of electrons, having no contact with the beam itself: (Figure below) Klystron inductive output tube. Just as transmission lines are able to function as resonant elements in a circuit, especially when terminated by a short-circuit or an open-circuit, a dead-ended waveguide may also resonate at particular frequencies. When used as such, the device is called a cavity resonator. Inductive output tubes use toroid-shaped cavity resonators to maximize the power transfer efficiency between the electron beam and the output cable. A cavity’s resonant frequency may be altered by changing its physical dimensions. To this end, cavities with movable plates, screws, and other mechanical elements for tuning are manufactured to provide coarse resonant frequency adjustment. If a resonant cavity is made open on one end, it functions as a unidirectional antenna. The following photograph shows a home-made waveguide formed from a tin can, used as an antenna for a 2.4 GHz signal in an “802.11b” computer communication network. The coupling element is a quarter-wave stub: nothing more than a piece of solid copper wire about 1-1/4 inches in length extending from the center of a coaxial cable connector penetrating the side of the can: (Figure below) Can-tenna illustrates stub coupling to waveguide. A few more tin-can antennae may be seen in the background, one of them a “Pringles” potato chip can. Although this can is of cardboard (paper) construction, its metallic inner lining provides the necessary conductivity to function as a waveguide. Some of the cans in the background still have their plastic lids in place. The plastic, being nonconductive, does not interfere with the RF signal, but functions as a physical barrier to prevent rain, snow, dust, and other physical contaminants from entering the waveguide. “Real” waveguide antennae use similar barriers to physically enclose the tube, yet allow electromagnetic energy to pass unimpeded. Review • Waveguides are metal tubes functioning as “conduits” for carrying electromagnetic waves. They are practical only for signals of extremely high frequency, where the signal wavelength approaches the cross-sectional dimensions of the waveguide. • Wave propagation through a waveguide may be classified into two broad categories: TE (Transverse Electric), or TM (Transverse Magnetic), depending on which field (electric or magnetic) is perpendicular (transverse) to the direction of wave travel.
• Wave travel along a standard, two-conductor transmission line is of the TEM (Transverse Electric and Magnetic) mode, where both fields are oriented perpendicular to the direction of travel. TEM mode is only possible with two conductors and cannot exist in a waveguide. • A dead-ended waveguide serving as a resonant element in a microwave circuit is called a cavity resonator. • A cavity resonator with an open end functions as a unidirectional antenna, sending or receiving RF energy to/from the direction of the open end.
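As a rough, hypothetical illustration of the review point that usable frequencies scale with cross-sectional size, the sketch below computes the standard TE11 cutoff frequency for a circular guide. The 10 cm can diameter is my own assumption (the text gives no dimensions); the cutoff formula itself is the standard result for circular waveguides:

import math

c = 2.998e8       # speed of light, m/s
diameter = 0.10   # assumed can diameter, meters (not from the text)
a = diameter / 2  # radius

# Dominant TE11 mode cutoff for a circular guide: fc = 1.8412 * c / (2 * pi * a)
fc = 1.8412 * c / (2 * math.pi * a)
print(fc / 1e9)   # ~1.76 GHz; a 2.4 GHz signal is above cutoff, so it propagates

A can of roughly this size passes 2.4 GHz but would block signals much below about 1.8 GHz, which is the quantitative sense in which a waveguide is "useless" below the frequency where the wavelength approaches its cross-section.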
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_II_-_Alternating_Current_(Kuphaldt)/14%3A_Transmission_Lines/14.08%3A_Waveguides.txt
First, we have to distinguish between numbers and the symbols we use to represent numbers. A number is a mathematical quantity, usually correlated in electronics to a physical quantity such as voltage, current, or resistance. There are many different types of numbers. Here are just a few types, for example: whole numbers, integers, irrational numbers, real numbers, and complex numbers. Different types of numbers find different application in the physical world. Whole numbers work well for counting discrete objects, such as the number of resistors in a circuit. Integers are needed when negative equivalents of whole numbers are required. Irrational numbers are numbers that cannot be exactly expressed as the ratio of two integers, and the ratio of a perfect circle’s circumference to its diameter (π) is a good physical example of this. The non-integer quantities of voltage, current, and resistance that we’re used to dealing with in DC circuits can be expressed as real numbers, in either fractional or decimal form. For AC circuit analysis, however, real numbers fail to capture the dual essence of magnitude and phase angle, and so we turn to the use of complex numbers in either rectangular or polar form. If we are to use numbers to understand processes in the physical world, make scientific predictions, or balance our checkbooks, we must have a way of symbolically denoting them. In other words, we may know how much money we have in our checking account, but to keep record of it we need to have some system worked out to symbolize that quantity on paper, or in some other kind of form for record-keeping and tracking. There are two basic ways we can do this: analog and digital. With analog representation, the quantity is symbolized in a way that is infinitely divisible. With digital representation, the quantity is symbolized in a way that is discretely packaged. You’re probably already familiar with an analog representation of money, and didn’t realize it for what it was. Have you ever seen a fund-raising poster made with a picture of a thermometer on it, where the height of the red column indicated the amount of money collected for the cause? The more money collected, the taller the column of red ink on the poster. This is an example of an analog representation of a number. There is no real limit to how finely divided the height of that column can be made to symbolize the amount of money in the account. Changing the height of that column is something that can be done without changing the essential nature of what it is. Length is a physical quantity that can be divided as small as you would like, with no practical limit. The slide rule is a mechanical device that uses the very same physical quantity—length—to represent numbers, and to help perform arithmetical operations with two or more numbers at a time. It, too, is an analog device. On the other hand, a digital representation of that same monetary figure, written with standard symbols (sometimes called ciphers), looks like this: \$35,955.38. Unlike the “thermometer” poster with its red column, those symbolic characters cannot be finely divided: that particular combination of ciphers stands for one quantity and one quantity only. If more money is added to the account (+ \$40.12), different symbols must be used to represent the new balance (\$35,995.50), or at least the same symbols arranged in different patterns. This is an example of digital representation.
The counterpart to the slide rule (analog) is also a digital device: the abacus, with beads that are moved back and forth on rods to symbolize numerical quantities: Let’s contrast these two methods of numerical representation: an analog representation is infinitely divisible and intuitive to interpret, but prone to errors of precision, while a digital representation is discrete and requires learned symbols to interpret, but can be read with absolute precision. Interpretation of numerical symbols is something we tend to take for granted, because it has been taught to us for many years. However, if you were to try to communicate a quantity of something to a person ignorant of decimal numerals, that person could still understand the simple thermometer chart! The infinitely divisible vs. discrete and precision comparisons are really flip-sides of the same coin. The fact that digital representation is composed of individual, discrete symbols (decimal digits and abacus beads) necessarily means that it will be able to symbolize quantities in precise steps. On the other hand, an analog representation (such as a slide rule’s length) is not composed of individual steps, but rather a continuous range of motion. The ability of a slide rule to characterize a numerical quantity to infinite resolution is traded off for imprecision. If a slide rule is bumped, an error will be introduced into the representation of the number that was “entered” into it. However, an abacus must be bumped much harder before its beads are completely dislodged from their places (sufficient to represent a different number). Please don’t misunderstand this difference in precision by thinking that digital representation is necessarily more accurate than analog. Just because a clock is digital doesn’t mean that it will always read time more accurately than an analog clock, it just means that the interpretation of its display is less ambiguous. Divisibility of analog versus digital representation can be further illuminated by talking about the representation of irrational numbers. Numbers such as π are called irrational, because they cannot be exactly expressed as the ratio of two integers, or whole numbers. Although you might have learned in the past that the fraction 22/7 can be used for π in calculations, this is just an approximation. The actual number “pi” cannot be exactly expressed by any finite, or limited, number of decimal places. The digits of π go on forever: 3.1415926535897932384626433832795028841971 . . . It is possible, at least theoretically, to set a slide rule (or even a thermometer column) so as to perfectly represent the number π, because analog symbols have no minimum limit to the degree that they can be increased or decreased. If my slide rule shows a figure of 3.141593 instead of 3.141592654, I can bump the slide just a bit more (or less) to get it closer yet. However, with digital representation, such as with an abacus, I would need additional rods (place holders, or digits) to represent π to further degrees of precision. An abacus with 10 rods simply cannot represent any more than 10 digits worth of the number π, no matter how I set the beads. To perfectly represent π, an abacus would have to have an infinite number of beads and rods! The tradeoff, of course, is the practical limitation to adjusting, and reading, analog symbols. Practically speaking, one cannot read a slide rule’s scale to the 10th digit of precision, because the marks on the scale are too coarse and human vision is too limited. An abacus, on the other hand, can be set and read with no interpretational errors at all. Furthermore, analog symbols require some kind of standard by which they can be compared for precise interpretation.
Slide rules have markings printed along the length of the slides to translate length into standard quantities. Even the thermometer chart has numerals written along its height to show how much money (in dollars) the red column represents for any given amount of height. Imagine if we all tried to communicate simple numbers to each other by spacing our hands apart varying distances. The number 1 might be signified by holding our hands 1 inch apart, the number 2 with 2 inches, and so on. If someone held their hands 17 inches apart to represent the number 17, would everyone around them be able to immediately and accurately interpret that distance as 17? Probably not. Some would guess short (15 or 16) and some would guess long (18 or 19). Of course, fishermen who brag about their catches don’t mind overestimations in quantity! Perhaps this is why people have generally settled upon digital symbols for representing numbers, especially whole numbers and integers, which find the most application in everyday life. Using the fingers on our hands, we have a ready means of symbolizing integers from 0 to 10. We can make hash marks on paper, wood, or stone to represent the same quantities quite easily. For large numbers, though, the “hash mark” numeration system is too inefficient.
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/01%3A_Numeration_Systems/1.01%3A_Numbers_and_Symbols.txt
The Romans devised a system that was a substantial improvement over hash marks, because it used a variety of symbols (or ciphers) to represent increasingly large quantities. The notation for 1 is the capital letter I. The notation for 5 is the capital letter V. Other ciphers possess increasing values: X = 10, L = 50, C = 100, D = 500, and M = 1000. If a cipher is accompanied by another cipher of equal or lesser value to the immediate right of it, with no ciphers greater than that other cipher to the right of that other cipher, that other cipher’s value is added to the total quantity. Thus, VIII symbolizes the number 8, and CLVII symbolizes the number 157. On the other hand, if a cipher is accompanied by another cipher of lesser value to the immediate left, that other cipher’s value is subtracted from the first. Therefore, IV symbolizes the number 4 (V minus I), and CM symbolizes the number 900 (M minus C). You might have noticed that ending credit sequences for most motion pictures contain a notice for the date of production, in Roman numerals. For the year 1987, it would read: MCMLXXXVII. Let’s break this numeral down into its constituent parts, from left to right: M = 1000, plus CM = 900, plus L = 50, plus XXX = 30, plus V = 5, plus II = 2, for a total of 1987. Aren’t you glad we don’t use this system of numeration? Large numbers are very difficult to denote this way, and the left vs. right / subtraction vs. addition of values can be very confusing, too. Another major problem with this system is that there is no provision for representing the number zero or negative numbers, both very important concepts in mathematics. Roman culture, however, was more pragmatic with respect to mathematics than most, choosing only to develop their numeration system as far as it was necessary for use in daily life. We owe one of the most important ideas in numeration to the ancient Babylonians, who were the first (as far as we know) to develop the concept of cipher position, or place value, in representing larger numbers. Instead of inventing new ciphers to represent larger numbers, as the Romans did, they re-used the same ciphers, placing them in different positions from right to left. Our own decimal numeration system uses this concept, with only ten ciphers (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) used in “weighted” positions to represent very large and very small numbers. Each cipher represents an integer quantity, and each place from right to left in the notation represents a multiplying constant, or weight, for each integer quantity. For example, if we see the decimal notation “1206”, we know that this may be broken down into its constituent weight-products as such: (1 x 1000) + (2 x 100) + (0 x 10) + (6 x 1) = 1206. Each cipher is called a digit in the decimal numeration system, and each weight, or place value, is ten times that of the one to the immediate right. So, we have a ones place, a tens place, a hundreds place, a thousands place, and so on, working from right to left. Right about now, you’re probably wondering why I’m laboring to describe the obvious. Who needs to be told how decimal numeration works, after you’ve studied math as advanced as algebra and trigonometry? The reason is to better understand other numeration systems, by first knowing the how’s and why’s of the one you’re already used to. The decimal numeration system uses ten ciphers, and place-weights that are multiples of ten. What if we made a numeration system with the same strategy of weighted places, except with fewer or more ciphers? The binary numeration system is such a system.
Instead of ten different cipher symbols, with each weight constant being ten times the one before it, we only have two cipher symbols, and each weight constant is twice as much as the one before it. The two allowable cipher symbols for the binary system of numeration are “1” and “0,” and these ciphers are arranged right-to-left in doubling values of weight. The rightmost place is the ones place, just as with decimal notation. Proceeding to the left, we have the twos place, the fours place, the eights place, the sixteens place, and so on. For example, the following binary number can be expressed, just like the decimal number 1206, as a sum of each cipher value times its respective weight constant: 11010 = (1 x 16) + (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1) = 26. This can get quite confusing, as I’ve written a number with binary numeration (11010), and then shown its place values and total in standard, decimal numeration form (16 + 8 + 2 = 26). In the above example, we’re mixing two different kinds of numerical notation. To avoid unnecessary confusion, we have to denote which form of numeration we’re using when we write (or type!). Typically, this is done in subscript form, with a “2” for binary and a “10” for decimal, so the binary number 11010₂ is equal to the decimal number 26₁₀. The subscripts are not mathematical operation symbols like superscripts (exponents) are. All they do is indicate what system of numeration we’re using when we write these symbols for other people to read. If you see “3₁₀”, all this means is the number three written using decimal numeration. However, if you see “3¹⁰”, this means something completely different: three to the tenth power (59,049). As usual, if no subscript is shown, the cipher(s) are assumed to be representing a decimal number. Commonly, the number of cipher types (and therefore, the place-value multiplier) used in a numeration system is called that system’s base. Binary is referred to as “base two” numeration, and decimal as “base ten.” Additionally, we refer to each cipher position in binary as a bit rather than the familiar word digit used in the decimal system. Now, why would anyone use binary numeration? The decimal system, with its ten ciphers, makes a lot of sense, being that we have ten fingers on which to count between our two hands. (It is interesting that some ancient central American cultures used numeration systems with a base of twenty. Presumably, they used both fingers and toes to count!!). But the primary reason that the binary numeration system is used in modern electronic computers is because of the ease of representing two cipher states (0 and 1) electronically. With relatively simple circuitry, we can perform mathematical operations on binary numbers by representing each bit of the numbers by a circuit which is either on (current) or off (no current). Just like the abacus with each rod representing another decimal digit, we simply add more circuits to give us more bits to symbolize larger numbers. Binary numeration also lends itself well to the storage and retrieval of numerical information: on magnetic tape (spots of iron oxide on the tape either being magnetized for a binary “1” or demagnetized for a binary “0”), optical disks (a laser-burned pit in the aluminum foil representing a binary “1” and an unburned spot representing a binary “0”), or a variety of other media types. Before we go on to learning exactly how all this is done in digital circuitry, we need to become more familiar with binary and other associated systems of numeration.
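As an aside (my own, not part of the original lesson), most programming languages encode the same base-distinguishing convention that the subscripts do. In Python, for instance, a "0b" prefix marks a binary literal, while exponentiation is written with the ** operator:

n = 0b11010             # the binary number 11010₂, written as a Python literal
print(n)                # 26  (11010₂ = 26₁₀)
print(int("11010", 2))  # 26, parsing the same digits explicitly as base two
print(3**10)            # 59049, the superscript (exponent) meaning of 3¹⁰

Note how the base marker travels with the digits, just as the subscript does on paper: without it, "11010" would be read as the decimal number eleven thousand and ten.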
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/01%3A_Numeration_Systems/1.02%3A_Systems_of_Numeration.txt
Let’s count from zero to twenty using four different kinds of numeration systems: hash marks, Roman numerals, decimal, and binary: Neither hash marks nor the Roman system is very practical for symbolizing large numbers. Obviously, place-weighted systems such as decimal and binary are more efficient for the task. Notice, though, how much shorter decimal notation is over binary notation, for the same number of quantities. What takes five bits in binary notation only takes two digits in decimal notation. This raises an interesting question regarding different numeration systems: how large of a number can be represented with a limited number of cipher positions, or places? With the crude hash-mark system, the number of places IS the largest number that can be represented, since one hash mark “place” is required for every integer step. For place-weighted systems of numeration, however, the answer is found by taking the base of the numeration system (10 for decimal, 2 for binary) and raising it to the power of the number of places. For example, 5 digits in a decimal numeration system can represent 100,000 different integer number values, from 0 to 99,999 (10 to the 5th power = 100,000). 8 bits in a binary numeration system can represent 256 different integer number values, from 0 to 11111111 (binary), or 0 to 255 (decimal), because 2 to the 8th power equals 256. With each additional place position to the number field, the capacity for representing numbers increases by a factor of the base (10 for decimal, 2 for binary). An interesting footnote to this topic concerns one of the first electronic digital computers, the ENIAC. The designers of the ENIAC chose to represent numbers in decimal form, digitally, using a series of circuits called “ring counters” instead of just going with the binary numeration system, in an effort to minimize the number of circuits required to represent and calculate very large numbers. This approach turned out to be counter-productive, and virtually all digital computers since then have been purely binary in design. To convert a number in binary numeration to its equivalent in decimal form, all you have to do is calculate the sum of all the products of bits with their respective place-weight constants. To illustrate: The bit on the far right side is called the Least Significant Bit (LSB), because it stands in the place of the lowest weight (the one’s place). The bit on the far left side is called the Most Significant Bit (MSB), because it stands in the place of the highest weight (the one hundred twenty-eight’s place). Remember, a bit value of “1” means that the respective place weight gets added to the total value, and a bit value of “0” means that the respective place weight does not get added to the total value. With the above example, we have: If we encounter a binary number with a dot (.), called a “binary point” instead of a decimal point, we follow the same procedure, realizing that each place weight to the right of the point is one-half the value of the one to the left of it (just as each place weight to the right of a decimal point is one-tenth the weight of the one to the left of it). For example, 101.011₂ = 4 + 1 + 0.25 + 0.125 = 5.375₁₀.
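The summing-of-weights procedure just described translates directly into code. Here is a minimal Python sketch (the function name binary_to_decimal is my own, not from the text) that handles a binary point as well:

def binary_to_decimal(s: str) -> float:
    """Sum each bit times its place weight, handling an optional binary point."""
    if "." in s:
        whole, frac = s.split(".")
    else:
        whole, frac = s, ""
    total = 0.0
    for i, bit in enumerate(reversed(whole)):   # weights 1, 2, 4, 8, ...
        total += int(bit) * 2**i
    for i, bit in enumerate(frac, start=1):     # weights 1/2, 1/4, 1/8, ...
        total += int(bit) * 2**-i
    return total

print(binary_to_decimal("10110"))    # 22.0
print(binary_to_decimal("101.011"))  # 5.375, matching the worked example above

The loop over the whole-number part proceeds from the LSB to the MSB, adding each place weight wherever a "1" appears, which is exactly the longhand procedure.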
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/01%3A_Numeration_Systems/1.03%3A_Decimal_versus_Binary_Numeration.txt
Because binary numeration requires so many bits to represent relatively small numbers compared to the economy of the decimal system, analyzing the numerical states inside of digital electronic circuitry can be a tedious task. Computer programmers who design sequences of number codes instructing a computer what to do would have a very difficult task if they were forced to work with nothing but long strings of 1’s and 0’s, the “native language” of any digital circuit. To make it easier for human engineers, technicians, and programmers to “speak” this language of the digital world, other systems of place-weighted numeration have been made which are very easy to convert to and from binary. One of those numeration systems is called octal, because it is a place-weighted system with a base of eight. Valid ciphers include the symbols 0, 1, 2, 3, 4, 5, 6, and 7. Each place weight differs from the one next to it by a factor of eight. Another system is called hexadecimal, because it is a place-weighted system with a base of sixteen. Valid ciphers include the normal decimal symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, plus six alphabetical characters A, B, C, D, E, and F, to make a total of sixteen. As you might have guessed already, each place weight differs from the one before it by a factor of sixteen. Let’s count again from zero to twenty using decimal, binary, octal, and hexadecimal to contrast these systems of numeration: Octal and hexadecimal numeration systems would be pointless if not for their ability to be easily converted to and from binary notation. Their primary purpose is to serve as a “shorthand” method of denoting a number represented electronically in binary form. Because the bases of octal (eight) and hexadecimal (sixteen) are even multiples of binary’s base (two), binary bits can be grouped together and directly converted to or from their respective octal or hexadecimal digits. With octal, the binary bits are grouped in three’s (because 2³ = 8), and with hexadecimal, the binary bits are grouped in four’s (because 2⁴ = 16): We had to group the bits in three’s, from the binary point left, and from the binary point right, adding (implied) zeros as necessary to make complete 3-bit groups. Each octal digit was translated from the 3-bit binary groups. Binary-to-hexadecimal conversion is much the same: Here we had to group the bits in four’s, from the binary point left, and from the binary point right, adding (implied) zeros as necessary to make complete 4-bit groups: Likewise, the conversion from either octal or hexadecimal to binary is done by taking each octal or hexadecimal digit and converting it to its equivalent binary (3 or 4 bit) group, then putting all the binary bit groups together. Incidentally, hexadecimal notation is more popular, because binary bit groupings in digital equipment are commonly multiples of eight (8, 16, 32, 64, and 128 bit), which are also multiples of 4. Octal, being based on binary bit groups of 3, doesn’t work out evenly with those common bit group sizings. 1.05: Octal and Hexadecimal to Decimal Conversion Although the prime intent of octal and hexadecimal numeration systems is for the “shorthand” representation of binary numbers in digital electronics, we sometimes have the need to convert from either of those systems to decimal form. Of course, we could simply convert the hexadecimal or octal format to binary, then convert from binary to decimal, since we already know how to do both, but we can also convert directly.
Because octal is a base-eight numeration system, each place-weight value differs from either adjacent place by a factor of eight. For example, the octal number 245.37₈ can be broken down into place values as such: The decimal value of each octal place-weight times its respective cipher multiplier can be determined as follows: The technique for converting hexadecimal notation to decimal is the same, except that each successive place-weight changes by a factor of sixteen. Simply denote each digit’s weight, multiply each hexadecimal digit value by its respective weight (in decimal form), then add up all the decimal values to get a total. For example, the hexadecimal number 30F.A9₁₆ can be converted like this: These basic techniques may be used to convert a numerical notation of any base into decimal form, if you know the value of that numeration system’s base.
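That any-base procedure can be written once and reused. Here is a short Python sketch (the helper name to_decimal is my own, not from the text), with the two example numbers from this section cross-checked in the comments; the decimal results are simply the computed sums of weights:

DIGITS = "0123456789ABCDEF"

def to_decimal(s: str, base: int) -> float:
    """Multiply each cipher by its place weight and sum (handles a radix point)."""
    whole, _, frac = s.upper().partition(".")
    total = 0.0
    for i, ch in enumerate(reversed(whole)):    # weights base^0, base^1, ...
        total += DIGITS.index(ch) * base**i
    for i, ch in enumerate(frac, start=1):      # weights base^-1, base^-2, ...
        total += DIGITS.index(ch) * base**-i
    return total

print(to_decimal("245.37", 8))    # 165.484375
print(to_decimal("30F.A9", 16))   # 783.66015625

The only base-specific knowledge in the routine is the multiplying factor between adjacent places, which is exactly the point the text makes.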
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/01%3A_Numeration_Systems/1.04%3A_Octal_and_Hexadecimal_Numeration.txt
Because octal and hexadecimal numeration systems have bases that are multiples of binary (base 2), conversion back and forth between either hexadecimal or octal and binary is very easy. Also, because we are so familiar with the decimal system, converting binary, octal, or hexadecimal to decimal form is relatively easy (simply add up the products of cipher values and place-weights). However, conversion from decimal to any of these “strange” numeration systems is a different matter. The method which will probably make the most sense is the “trial-and-fit” method, where you try to “fit” the binary, octal, or hexadecimal notation to the desired value as represented in decimal form. For example, let’s say that I wanted to represent the decimal value of 87 in binary form. Let’s start by drawing a binary number field, complete with place-weight values: Well, we know that we won’t have a “1” bit in the 128’s place, because that would immediately give us a value greater than 87. However, since the next weight to the right (64) is less than 87, we know that we must have a “1” there. If we were to make the next place to the right a “1” as well, our total value would be 64₁₀ + 32₁₀, or 96₁₀. This is greater than 87₁₀, so we know that this bit must be a “0”. If we make the next (16’s) place bit equal to “1,” this brings our total value to 64₁₀ + 16₁₀, or 80₁₀, which is closer to our desired value (87₁₀) without exceeding it: By continuing in this progression, setting each lesser-weight bit as we need to come up to our desired total value without exceeding it, we will eventually arrive at the correct figure: This trial-and-fit strategy will work with octal and hexadecimal conversions, too. Let’s take the same decimal figure, 87₁₀, and convert it to octal numeration: If we put a cipher of “1” in the 64’s place, we would have a total value of 64₁₀ (less than 87₁₀). If we put a cipher of “2” in the 64’s place, we would have a total value of 128₁₀ (greater than 87₁₀). This tells us that our octal numeration must start with a “1” in the 64’s place: Now, we need to experiment with cipher values in the 8’s place to try and get a total (decimal) value as close to 87 as possible without exceeding it. Trying the first few cipher options, we get: A cipher value of “3” in the 8’s place would put us over the desired total of 87₁₀, so “2” it is! Now, all we need to make a total of 87 is a cipher of “7” in the 1’s place: Of course, if you were paying attention during the last section on octal/binary conversions, you will realize that we can take the binary representation of (decimal) 87₁₀, which we previously determined to be 1010111₂, and easily convert from that to octal to check our work: Can we do decimal-to-hexadecimal conversion the same way? Sure, but who would want to? This method is simple to understand, but laborious to carry out. There is another way to do these conversions, which is essentially the same (mathematically), but easier to accomplish. This other method uses repeated cycles of division (using decimal notation) to break the decimal numeration down into multiples of binary, octal, or hexadecimal place-weight values. In the first cycle of division, we take the original decimal number and divide it by the base of the numeration system that we’re converting to (binary = 2, octal = 8, hex = 16). Then, we take the whole-number portion of the division result (quotient) and divide it by the base value again, and so on, until we end up with a quotient of less than 1.
The binary, octal, or hexadecimal digits are determined by the “remainders” left over by each division step. Let’s see how this works for binary, with the decimal example of 87₁₀: The binary bits are assembled from the remainders of the successive division steps, beginning with the LSB and proceeding to the MSB. In this case, we arrive at a binary notation of 1010111₂. When we divide by 2, we will always get a quotient ending with either “.0” or “.5”, i.e. a remainder of either 0 or 1. As was said before, this repeat-division technique for conversion will work for numeration systems other than binary. If we were to perform successive divisions using a different number, such as 8 for conversion to octal, we will necessarily get remainders between 0 and 7. Let’s try this with the same decimal number, 87₁₀: We can use a similar technique for converting numeration systems dealing with quantities less than 1, as well. For converting a decimal number less than 1 into binary, octal, or hexadecimal, we use repeated multiplication, taking the integer portion of the product in each step as the next digit of our converted number. Let’s use the decimal number 0.8125₁₀ as an example, converting to binary: As with the repeat-division process for integers, each step gives us the next digit (or bit) further away from the “point.” With the integer portion (repeated division), we worked from the LSB to the MSB (right to left), but with repeated multiplication, we work from left to right. To convert a decimal number greater than 1, with a < 1 component, we must use both techniques, one at a time. Take the decimal example of 54.40625₁₀, converting to binary: the integer portion 54₁₀ converts to 110110₂, the fractional portion 0.40625₁₀ converts to 0.01101₂, and together they give 110110.01101₂.
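Both halves of the method, repeated division for the integer part and repeated multiplication for the fractional part, translate directly to code. A minimal Python sketch (decimal_to_base is an illustrative name of my own, not from the text):

def decimal_to_base(value: float, base: int = 2, frac_digits: int = 8) -> str:
    whole = int(value)
    frac = value - whole
    # Repeated division for the integer part: remainders give digits, LSB first.
    digits = ""
    while whole > 0:
        whole, r = divmod(whole, base)
        digits = "0123456789ABCDEF"[r] + digits
    digits = digits or "0"
    # Repeated multiplication for the fractional part: the integer portion of
    # each product is the next digit, working left to right away from the point.
    if frac:
        digits += "."
        for _ in range(frac_digits):
            frac *= base
            d = int(frac)
            digits += "0123456789ABCDEF"[d]
            frac -= d
            if frac == 0:
                break
    return digits

print(decimal_to_base(87))         # 1010111
print(decimal_to_base(87, 8))      # 127
print(decimal_to_base(54.40625))   # 110110.01101

The three printed results reproduce the worked examples of this section: 87₁₀ = 1010111₂ = 127₈, and 54.40625₁₀ = 110110.01101₂.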
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/01%3A_Numeration_Systems/1.06%3A_Conversion_From_Decimal_Numeration.txt
It is imperative to understand that the type of numeration system used to represent numbers has no impact on the outcome of any arithmetical function (addition, subtraction, multiplication, division, roots, powers, or logarithms). A number is a number is a number; one plus one will always equal two (so long as we’re dealing with real numbers), no matter how you symbolize one, one, and two. A prime number in decimal form is still prime if it’s shown in binary form, or octal, or hexadecimal. π is still the ratio between the circumference and diameter of a circle, no matter what symbol(s) you use to denote its value. The essential functions and interrelations of mathematics are unaffected by the particular system of symbols we might choose to represent quantities. This distinction between numbers and systems of numeration is critical to understand. The essential distinction between the two is much like that between an object and the spoken word(s) we associate with it. A house is still a house regardless of whether we call it by its English name house or its Spanish name casa. The first is the actual thing, while the second is merely the symbol for the thing. That being said, performing a simple arithmetic operation such as addition (longhand) in binary form can be confusing to a person accustomed to working with decimal numeration only. In this lesson, we’ll explore the techniques used to perform simple arithmetic functions on binary numbers, since these techniques will be employed in the design of electronic circuits to do the same. You might take longhand addition and subtraction for granted, having used a calculator for so long, but deep inside that calculator’s circuitry, all those operations are performed “longhand,” using binary numeration. To understand how that’s accomplished, we need to review the basics of arithmetic. 2.02: Binary Addition The Rules of Binary Addition Adding binary numbers is a very simple task, and very similar to the longhand addition of decimal numbers. As with decimal numbers, you start by adding the bits (digits) one column, or place weight, at a time, from right to left. Unlike decimal addition, there is little to memorize in the way of rules for the addition of binary bits: 0 + 0 = 0; 1 + 0 = 1; 0 + 1 = 1; 1 + 1 = 10; and 1 + 1 + 1 = 11. Just as with decimal addition, when the sum in one column is a two-bit (two-digit) number, the least significant figure is written as part of the total sum and the most significant figure is “carried” to the next left column. Consider the following examples: The addition problem on the left did not require any bits to be carried since the sum of bits in each column was either 1 or 0, not 10 or 11. In the other two problems, there definitely were bits to be carried, but the process of addition is still quite simple. Binary Addition is the Foundation of Digital Computers As we’ll see later, there are ways that electronic circuits can be built to perform this very task of addition, by representing each bit of each binary number as a voltage signal (either “high,” for a 1; or “low” for a 0). This is the very foundation of all the arithmetic which modern digital computers perform.
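The column-by-column, carry-the-one procedure is simple enough to spell out in code. Here is a minimal Python sketch of longhand binary addition (add_binary is my own illustrative name, not from the text):

def add_binary(a: str, b: str) -> str:
    """Longhand binary addition, column by column from right to left."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal width with zeros
    carry, result = 0, ""
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry   # 0, 1, 2, or 3 per column
        result = str(total % 2) + result          # sum bit for this column
        carry = total // 2                        # carry into the next left column
    return ("1" + result) if carry else result

print(add_binary("1001", "0101"))   # 1110  (9 + 5 = 14)
print(add_binary("111", "11"))      # 1010  (7 + 3 = 10)

Each pass through the loop is one column of the longhand problem: the column total’s least significant figure becomes the sum bit, and its most significant figure becomes the carry, just as the rules above dictate.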
textbooks/workforce/Electronics_Technology/Book%3A_Electric_Circuits_IV_-_Digital_Circuitry_(Kuphaldt)/02%3A_Binary_Arithmetic/2.01%3A_Numbers_versus_Numeration.txt
With addition being easily accomplished, we can perform the operation of subtraction with the same technique simply by making one of the numbers negative. For example, the subtraction problem of 7 - 5 is essentially the same as the addition problem 7 + (-5). Since we already know how to represent positive numbers in binary, all we need to know now is how to represent their negative counterparts and we’ll be able to subtract. Usually we represent a negative decimal number by placing a minus sign directly to the left of the most significant digit, just as in the example above, with -5. However, the whole purpose of using binary notation is for constructing on/off circuits that can represent bit values in terms of voltage (2 alternative values: either “high” or “low”). In this context, we don’t have the luxury of a third symbol such as a “minus” sign, since these circuits can only be on or off (two possible states). One solution is to reserve a bit (circuit) that does nothing but represent the mathematical sign: positive five becomes 0101₂, while negative five becomes 1101₂, the leftmost bit serving as the sign. As you can see, we have to be careful when we start using bits for any purpose other than standard place-weighted values. Otherwise, 1101₂ could be misinterpreted as the number thirteen when in fact we mean to represent negative five. To keep things straight here, we must first decide how many bits are going to be needed to represent the largest numbers we’ll be dealing with, and then be sure not to exceed that bit field length in our arithmetic operations. For the above example, I’ve limited myself to the representation of numbers from negative seven (1111₂) to positive seven (0111₂), and no more, by making the fourth bit the “sign” bit. Only by first establishing these limits can I avoid confusion of a negative number with a larger, positive number. Representing negative five as 1101₂ is an example of the sign-magnitude system of negative binary numeration. By using the leftmost bit as a sign indicator and not a place-weighted value, I am sacrificing the “pure” form of binary notation for something that gives me a practical advantage: the representation of negative numbers. The leftmost bit is read as the sign, either positive or negative, and the remaining bits are interpreted according to the standard binary notation: left to right, place weights in multiples of two. As simple as the sign-magnitude approach is, it is not very practical for arithmetic purposes. For instance, how do I add a negative five (1101₂) to any other number, using the standard technique for binary addition? I’d have to invent a new way of doing addition in order for it to work, and if I do that, I might as well just do the job with longhand subtraction; there’s no arithmetical advantage to using negative numbers to perform subtraction through addition if we have to do it with sign-magnitude numeration, and that was our goal! There’s another method for representing negative numbers which works with our familiar technique of longhand addition, and also happens to make more sense from a place-weighted numeration point of view, called complementation. With this strategy, we assign the leftmost bit to serve a special purpose, just as we did with the sign-magnitude approach, defining our number limits just as before. However, this time, the leftmost bit is more than just a sign bit; rather, it possesses a negative place-weight value.
For example, a value of negative five would be represented as such: 1011₂ = -8₁₀ + 3₁₀ = -5₁₀. With the right three bits being able to represent a magnitude from zero through seven, and the leftmost bit representing either zero or negative eight, we can successfully represent any integer number from negative seven (1001₂ = -8₁₀ + 1₁₀ = -7₁₀) to positive seven (0111₂ = 0₁₀ + 7₁₀ = 7₁₀). Representing positive numbers in this scheme (with the fourth bit designated as the negative weight) is no different from that of ordinary binary notation. However, representing negative numbers is not quite as straightforward: Note that the negative binary numbers in the right column, being the sum of the right three bits’ total plus the negative eight of the leftmost bit, don’t “count” in the same progression as the positive binary numbers in the left column. Rather, the right three bits have to be set at the proper value to equal the desired (negative) total when summed with the negative eight place value of the leftmost bit. Those right three bits are referred to as the two’s complement of the corresponding positive number. Consider the following comparison: In this case, with the negative weight bit being the fourth bit (place value of negative eight), the two’s complement for any positive number will be whatever value is needed to add to negative eight to make that positive value’s negative equivalent. Thankfully, there’s an easy way to figure out the two’s complement for any binary number: simply invert all the bits of that number, changing all 1’s to 0’s and vice versa (to arrive at what is called the one’s complement) and then add one! For example, to obtain the two’s complement of five (101₂), we would first invert all the bits to obtain 010₂ (the “one’s complement”), then add one to obtain 011₂, or -5₁₀ in three-bit, two’s complement form. Interestingly enough, generating the two’s complement of a binary number works the same if you manipulate all the bits, including the leftmost (sign) bit at the same time as the magnitude bits. Let’s try this with the former example, converting a positive five to a negative five, but performing the complementation process on all four bits. We must be sure to include the 0 (positive) sign bit on the original number, five (0101₂). First, inverting all bits to obtain the one’s complement: 1010₂. Then, adding one, we obtain the final answer: 1011₂, or -5₁₀ expressed in four-bit, two’s complement form. It is critically important to remember that the place of the negative-weight bit must be already determined before any two’s complement conversions can be done. If our binary numeration field were such that the eighth bit was designated as the negative-weight bit (10000000₂), we’d have to determine the two’s complement based on all seven of the other bits. Here, the two’s complement of five (0000101₂) would be 1111011₂. A positive five in this system would be represented as 00000101₂, and a negative five as 11111011₂.
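The invert-and-add-one recipe is easy to automate once the width of the bit field is fixed, which is exactly the caveat the last paragraph raises. A short Python sketch (twos_complement is my own illustrative name):

def twos_complement(value: int, bits: int) -> str:
    """Two's complement representation of a (possibly negative) integer
    within a fixed bit field; for negatives, adding 2^bits is equivalent
    to inverting all bits of the magnitude and adding one."""
    if value < 0:
        value = (1 << bits) + value
    return format(value, f"0{bits}b")

print(twos_complement(5, 4))    # 0101
print(twos_complement(-5, 4))   # 1011
print(twos_complement(-5, 8))   # 11111011

Note how -5₁₀ comes out as 1011₂ in a four-bit field but 11111011₂ in an eight-bit field: the same number has different bit patterns depending on where the negative-weight bit sits, so the field width must be decided first.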
2.04: Binary Subtraction
We can subtract one binary number from another by using the standard techniques adapted for decimal numbers (subtraction of each bit pair, right to left, “borrowing” as needed from bits to the left). However, if we can leverage the already familiar (and easier) technique of binary addition to subtract, that would be better.

As we just learned, we can represent negative binary numbers by using the “two’s complement” method and a negative place-weight bit. Here, we’ll use those negative binary numbers to subtract through addition. Here’s a sample problem:

If all we need to do is represent seven and negative five in binary (two’s complemented) form, all we need is three bits plus the negative-weight bit:

Now, let’s add them together:

Since we’ve already defined our number bit field as three bits plus the negative-weight bit, the fifth bit in the answer (1) will be discarded to give us a result of 0010₂, or positive two, which is the correct answer.

Another way to understand why we discard that extra bit is to remember that the leftmost bit of the lower number possesses a negative weight, in this case equal to negative eight. When we add these two binary numbers together, what we’re actually doing with the MSBs is subtracting the lower number’s MSB from the upper number’s MSB. In subtraction, one never “carries” a digit or bit on to the next left place-weight.

Let’s try another example, this time with larger numbers. If we want to add -25₁₀ to 18₁₀, we must first decide how large our binary bit field must be. To represent the largest (absolute value) number in our problem, which is twenty-five, we need at least five bits, plus a sixth bit for the negative-weight bit. Let’s start by representing positive twenty-five, then finding the two’s complement and putting it all together into one numeration:

Essentially, we’re representing negative twenty-five by using the negative-weight (sixth) bit with a value of negative thirty-two, plus positive seven (binary 111₂). Now, let’s represent positive eighteen in binary form, showing all six bits:

Since there were no “extra” bits on the left, there are no bits to discard. The leftmost bit on the answer is a 1, which means that the answer is negative, in two’s complement form, as it should be. Converting the answer to decimal form by summing all the bits times their respective weight values, we get:

Indeed, -7₁₀ is the proper sum of -25₁₀ and 18₁₀.

2.05: Binary Overflow

One caveat with signed binary numbers is that of overflow, where the answer to an addition or subtraction problem exceeds the magnitude which can be represented with the allotted number of bits. Remember that the place of the sign bit is fixed from the beginning of the problem. With the last example problem, we used five binary bits to represent the magnitude of the number, and the leftmost (sixth) bit as the negative-weight, or sign, bit. With five bits to represent magnitude, we have a representation range of 2⁵, or thirty-two integer steps from 0 to maximum. This means that we can represent a number as high as +31₁₀ (011111₂), or as low as -32₁₀ (100000₂). If we set up an addition problem with two binary numbers, the sixth bit used for sign, and the result either exceeds +31₁₀ or is less than -32₁₀, our answer will be incorrect.
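Before working through the overflow examples, the subtract-through-addition technique of the previous section is easy to check by machine. The following Python sketch is my own illustration (the helper names are not from the text); it encodes signed values into a fixed bit field, adds the patterns, and discards any carry that spills past the field:

    def to_field(n, bits):
        """Encode a signed integer as a two's complement pattern of 'bits' bits."""
        return n & ((1 << bits) - 1)

    def from_field(pattern, bits):
        """Decode a pattern, treating the leftmost bit as a negative place weight."""
        return pattern - (1 << bits) if pattern & (1 << (bits - 1)) else pattern

    def add_in_field(a, b, bits):
        """Add two encoded values, discarding any carry beyond the bit field."""
        return (to_field(a, bits) + to_field(b, bits)) & ((1 << bits) - 1)

    # 7 + (-5) in a four-bit field: the carry out of the field is discarded
    result = add_in_field(7, -5, 4)
    print(format(result, "04b"), "=", from_field(result, 4))   # 0010 = 2

    # 18 + (-25) in a six-bit field
    result = add_in_field(18, -25, 6)
    print(format(result, "06b"), "=", from_field(result, 6))   # 111001 = -7

Both results match the longhand work above: 0010₂ (positive two) in the four-bit field, and 111001₂ (negative seven) in the six-bit field.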
Let’s try adding 17₁₀ and 19₁₀ to see how this overflow condition works for excessive positive numbers:

The answer (100100₂), interpreted with the sixth bit as the -32₁₀ place, is actually equal to -28₁₀, not +36₁₀ as we should get with +17₁₀ and +19₁₀ added together! Obviously, this is not correct. What went wrong? The answer lies in the restrictions of the six-bit number field within which we’re working. Since the magnitude of the true and proper sum (36₁₀) exceeds the allowable limit for our designated bit field, we have an overflow error. Simply put, six places doesn’t give enough bits to represent the correct sum, so whatever figure we obtain using the strategy of discarding the leftmost “carry” bit will be incorrect.

A similar error will occur if we add two negative numbers together to produce a sum that is too low for our six-bit binary field. Let’s try adding -17₁₀ and -19₁₀ together to see how this works (or doesn’t work, as the case may be!):

The (incorrect) answer is a positive twenty-eight. The fact that the real sum of negative seventeen and negative nineteen was too low to be properly represented with a five-bit magnitude field and a sixth sign bit is the root cause of this difficulty.

Let’s try these two problems again, except this time using the seventh bit for a sign bit, and allowing the use of six bits for representing the magnitude:

By using bit fields sufficiently large to handle the magnitude of the sums, we arrive at the correct answers.

In these sample problems we’ve been able to detect overflow errors by performing the addition problems in decimal form and comparing the results with the binary answers. For example, when adding +17₁₀ and +19₁₀ together, we knew that the answer was supposed to be +36₁₀, so when the binary sum checked out to be -28₁₀, we knew that something had to be wrong. Although this is a valid way of detecting overflow, it is not very efficient. After all, the whole idea of complementation is to be able to reliably add binary numbers together and not have to double-check the result by adding the same numbers together in decimal form! This is especially true for the purpose of building electronic circuits to add binary quantities together: the circuit has to be able to check itself for overflow without the supervision of a human being who already knows what the correct answer is.

What we need is a simple error-detection method that doesn’t require any additional arithmetic. Perhaps the most elegant solution is to check the sign of the sum and compare it against the signs of the numbers added. Obviously, two positive numbers added together should give a positive result, and two negative numbers added together should give a negative result. Notice that whenever we had a condition of overflow in the example problems, the sign of the sum was always opposite that of the two added numbers: +17₁₀ plus +19₁₀ giving -28₁₀, or -17₁₀ plus -19₁₀ giving +28₁₀. By checking the signs alone we are able to tell that something is wrong.

But what about cases where a positive number is added to a negative number? What sign should the sum be in order to be correct? Or, more precisely, what sign of sum would necessarily indicate an overflow error? The answer to this is equally elegant: there will never be an overflow error when two numbers of opposite signs are added together! The reason for this is apparent when the nature of overflow is considered. Overflow occurs when the magnitude of a number exceeds the range allowed by the size of the bit field.
The sum of two identically-signed numbers may very well exceed the range of the bit field of those two numbers, and so in this case overflow is a possibility. However, if a positive number is added to a negative number, the sum will always be closer to zero than either of the two added numbers: its magnitude must be less than the magnitude of either original number, and so overflow is impossible. Fortunately, this technique of overflow detection is easily implemented in electronic circuitry, and it is a standard feature in digital adder circuits: a subject for a later chapter.
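The sign-comparison rule translates directly into a check that needs no extra arithmetic beyond the addition itself. Here is a Python sketch of the idea (the function name and six-bit default are my own choices for illustration):

    def add_with_overflow_check(a, b, bits=6):
        """Add two signed integers in a two's complement field of 'bits' bits,
        flagging overflow when same-signed inputs yield an opposite-signed sum."""
        mask = (1 << bits) - 1
        sign = 1 << (bits - 1)
        raw = ((a & mask) + (b & mask)) & mask  # discard any carry out of the field
        total = raw - (1 << bits) if raw & sign else raw
        overflow = (a >= 0) == (b >= 0) and (a >= 0) != (total >= 0)
        return total, overflow

    print(add_with_overflow_check(17, 19))    # (-28, True): overflow
    print(add_with_overflow_check(-17, -19))  # (28, True): overflow again
    print(add_with_overflow_check(-25, 18))   # (-7, False): opposite signs

The first two calls reproduce the overflow cases worked above; the third shows that opposite-signed addends can never trip the check.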
2.06: Bit Grouping
The singular reason for learning and using the binary numeration system in electronics is to understand how to design, build, and troubleshoot circuits that represent and process numerical quantities in digital form. Since the bivalent (two-valued) system of binary bit numeration lends itself so easily to representation by “on” and “off” transistor states (saturation and cutoff, respectively), it makes sense to design and build circuits leveraging this principle to perform binary calculations.

If we were to build a circuit to represent a binary number, we would have to allocate enough transistor circuits to represent as many bits as we desire. In other words, in designing a digital circuit, we must first decide how many bits (maximum) we would like to be able to represent, since each bit requires one on/off circuit to represent it. This is analogous to designing an abacus to digitally represent decimal numbers: we must decide how many digits we wish to handle in this primitive “calculator” device, for each digit requires a separate rod with its own beads. A ten-rod abacus would be able to represent a ten-digit decimal number, or a maximum value of 9,999,999,999. If we wished to represent a larger number on this abacus, we would be unable to, unless additional rods could be added to it.

In digital, electronic computer design, it is common to design the system for a common “bit width”: a maximum number of bits allocated to represent numerical quantities. Early digital computers handled bits in groups of four or eight. More modern systems handle numbers in clusters of 32 bits or more. To more conveniently express the “bit width” of such clusters in a digital computer, specific labels were applied to the more common groupings. Eight bits, grouped together to form a single binary quantity, is known as a byte. Four bits, grouped together as one binary number, is known by the humorous title of nibble, often spelled as nybble.

A multitude of terms have followed byte and nibble for labeling specific groupings of binary bits. Most of the terms shown here are informal, and have not been made “authoritative” by any standards group or other sanctioning body. However, their inclusion in this chapter is warranted by their occasional appearance in technical literature, as well as the levity they add to an otherwise dry subject:

• Bit: A single, bivalent unit of binary notation. Equivalent to a decimal “digit.”
• Crumb, Tydbit, or Tayste: Two bits.
• Nibble, or Nybble: Four bits.
• Nickle: Five bits.
• Byte: Eight bits.
• Deckle: Ten bits.
• Playte: Sixteen bits.
• Dynner: Thirty-two bits.
• Word: (system dependent).

The most ambiguous term by far is word, referring to the standard bit-grouping within a particular digital system. For a computer system using a 32 bit-wide “data path,” a “word” would mean 32 bits. If the system used 16 bits as the standard grouping for binary quantities, a “word” would mean 16 bits. The terms playte and dynner, by contrast, always refer to 16 and 32 bits, respectively, regardless of the system context in which they are used. Context dependence is likewise true for derivative terms of word, such as double word and longword (both meaning twice the standard bit-width), half-word (half the standard bit-width), and quad (meaning four times the standard bit-width). One humorous addition to this somewhat boring collection of word-derivatives is the term chawmp, which means the same as half-word.
For example, a chawmp would be 16 bits in the context of a 32-bit digital system, and 18 bits in the context of a 36-bit system. Also, the term gawble is sometimes synonymous with word. Definitions for bit grouping terms were taken from Eric S. Raymond’s “Jargon Lexicon,” an indexed collection of terms—both common and obscure—germane to the world of computer programming.
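To make the connection between bit width and representable range concrete, here is a small Python illustration (the selection of group names is my own, drawn from the informal terms above); each width of n bits can count from 0 through 2ⁿ - 1:

    # Unsigned range for a few of the bit groupings named above
    groupings = {"nibble": 4, "byte": 8, "playte": 16, "dynner": 32}

    for name, width in groupings.items():
        print("{:>6}: {:>2} bits, 0 through {:,}".format(name, width, 2 ** width - 1))

A nibble tops out at 15, a byte at 255, a playte at 65,535, and a dynner at 4,294,967,295: the same “how many rods on the abacus” decision, made in powers of two.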
3.01: Digital Signals and Gates
While the binary numeration system is an interesting mathematical abstraction, we haven’t yet seen its practical application to electronics. This chapter is devoted to just that: practically applying the concept of binary bits to circuits. What makes binary numeration so important to the application of digital electronics is the ease with which bits may be represented in physical terms. Because a binary bit can only have one of two different values, either 0 or 1, any physical medium capable of switching between two saturated states may be used to represent a bit. Consequently, any physical system capable of representing binary bits is able to represent numerical quantities and potentially has the ability to manipulate those numbers. This is the basic concept underlying digital computing.

Electronic circuits are physical systems that lend themselves well to the representation of binary numbers. Transistors, when operated at their bias limits, may be in one of two different states: either cut off (no controlled current) or saturation (maximum controlled current). If a transistor circuit is designed to maximize the probability of falling into either one of these states (and not operating in the linear, or active, mode), it can serve as a physical representation of a binary bit. A voltage signal measured at the output of such a circuit may also serve as a representation of a single bit, a low voltage representing a binary “0” and a (relatively) high voltage representing a binary “1.” Note the following transistor circuit:

In this circuit, the transistor is in a state of saturation by virtue of the applied input voltage (5 volts) through the two-position switch. Because it is saturated, the transistor drops very little voltage between collector and emitter, resulting in an output voltage of (practically) 0 volts. If we were using this circuit to represent binary bits, we would say that the input signal is a binary “1” and that the output signal is a binary “0.” Any voltage close to full supply voltage (measured in reference to ground, of course) is considered a “1” and a lack of voltage is considered a “0.” Alternative terms for these voltage levels are high (same as a binary “1”) and low (same as a binary “0”). A general term for the representation of a binary bit by a circuit voltage is logic level.

Moving the switch to the other position, we apply a binary “0” to the input and receive a binary “1” at the output:

What we’ve created here with a single transistor is a circuit generally known as a logic gate, or simply gate. A gate is a special type of amplifier circuit designed to accept and generate voltage signals corresponding to binary 1’s and 0’s. As such, gates are not intended to be used for amplifying analog signals (voltage signals between 0 and full voltage). Used together, multiple gates may be applied to the task of binary number storage (memory circuits) or manipulation (computing circuits), each gate’s output representing one bit of a multi-bit binary number. Just how this is done is a subject for a later chapter. Right now it is important to focus on the operation of individual gates.

The gate shown here with the single transistor is known as an inverter, or NOT gate, because it outputs the exact opposite digital signal as what is input. For convenience, gate circuits are generally represented by their own symbols rather than by their constituent transistors and resistors.
The following is the symbol for an inverter:

An alternative symbol for an inverter is shown here:

Notice the triangular shape of the gate symbol, much like that of an operational amplifier. As was stated before, gate circuits actually are amplifiers. The small circle or “bubble” shown on either the input or output terminal is standard for representing the inversion function. As you might suspect, if we were to remove the bubble from the gate symbol, leaving only a triangle, the resulting symbol would no longer indicate inversion, but merely direct amplification. Such a symbol and such a gate actually do exist, and it is called a buffer, the subject of the next section.

Like an operational amplifier symbol, input and output connections are shown as single wires, the implied reference point for each voltage signal being “ground.” In digital gate circuits, ground is almost always the negative connection of a single voltage source (power supply). Dual, or “split,” power supplies are seldom used in gate circuitry. Because gate circuits are amplifiers, they require a source of power to operate. Like operational amplifiers, the power supply connections for digital gates are often omitted from the symbol for simplicity’s sake. If we were to show all the necessary connections needed for operating this gate, the schematic would look something like this:

Power supply conductors are rarely shown in gate circuit schematics, even if the power supply connections at each gate are. Minimizing lines in our schematic, we get this:

“Vcc” stands for the constant voltage supplied to the collector of a bipolar junction transistor circuit, in reference to ground. Those points in a gate circuit marked by the label “Vcc” are all connected to the same point, and that point is the positive terminal of a DC voltage source, usually 5 volts. As we will see in other sections of this chapter, there are quite a few different types of logic gates, most of which have multiple input terminals for accepting more than one signal. The output of any gate is dependent on the state of its input(s) and its logical function.

Expressing Gate Circuit Functions with Truth Tables

One common way to express the particular function of a gate circuit is called a truth table. Truth tables show all combinations of input conditions in terms of logic level states (either “high” or “low,” “1” or “0,” for each input terminal of the gate), along with the corresponding output logic level, either “high” or “low.” For the inverter, or NOT, circuit just illustrated, the truth table is very simple indeed:

Truth tables for more complex gates are, of course, larger than the one shown for the NOT gate. A gate’s truth table must have as many rows as there are possibilities for unique input combinations. For a single-input gate like the NOT gate, there are only two possibilities, 0 and 1. For a two-input gate, there are four possibilities (00, 01, 10, and 11), and thus four rows to the corresponding truth table. For a three-input gate, there are eight possibilities (000, 001, 010, 011, 100, 101, 110, and 111), and thus a truth table with eight rows is needed. The mathematically inclined will realize that the number of truth table rows needed for a gate is equal to 2 raised to the power of the number of input terminals.

Review
• In digital circuits, binary bit values of 0 and 1 are represented by voltage signals measured in reference to a common circuit point called ground. An absence of voltage represents a binary “0” and the presence of full DC supply voltage represents a binary “1.”
• A logic gate, or simply gate, is a special form of amplifier circuit designed to input and output logic level voltages (voltages intended to represent binary bits). Gate circuits are most commonly represented in a schematic by their own unique symbols rather than by their constituent transistors and resistors.
• Just as with operational amplifiers, the power supply connections to gates are often omitted in schematic diagrams for the sake of simplicity.
• A truth table is a standard way of representing the input/output relationships of a gate circuit, listing all the possible input logic level combinations with their respective output logic levels.
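Since a truth table is nothing more than an exhaustive listing of input combinations, it is easy to generate one by machine. The following Python sketch is my own illustration (not part of the text); it enumerates all 2ⁿ input combinations for an n-input gate, using a NOT and a two-input AND as examples:

    from itertools import product

    def truth_table(gate, inputs):
        """Print every input combination (2**inputs rows) with the gate's output."""
        for row in product((0, 1), repeat=inputs):
            print(*row, "|", gate(*row))

    truth_table(lambda a: 1 - a, 1)       # NOT gate: two rows
    truth_table(lambda a, b: a & b, 2)    # two-input AND gate: four rows

Changing the second argument to 3 would print the eight rows a three-input gate requires, in keeping with the 2-to-the-power-of-inputs rule above.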
3.02: The NOT Gate
The single-transistor inverter circuit illustrated earlier is actually too crude to be of practical use as a gate. Real inverter circuits contain more than one transistor to maximize voltage gain (so as to ensure that the final output transistor is either in full cutoff or full saturation), and other components designed to reduce the chance of accidental damage. Shown here is a schematic diagram for a real inverter circuit, complete with all necessary components for efficient and reliable operation:

This circuit is composed exclusively of resistors, diodes and bipolar transistors. Bear in mind that other circuit designs are capable of performing the NOT gate function, including designs substituting field-effect transistors for bipolar (discussed later in this chapter).

Let’s analyze this circuit for the condition where the input is “high,” or in a binary “1” state. We can simulate this by showing the input terminal connected to Vcc through a switch:

In this case, diode D1 will be reverse-biased, and therefore not conduct any current. In fact, the only purpose for having D1 in the circuit is to prevent transistor damage in the case of a negative voltage being impressed on the input (a voltage that is negative, rather than positive, with respect to ground). With no voltage between the base and emitter of transistor Q1, we would expect no current through it, either. However, as strange as it may seem, transistor Q1 is not being used as is customary for a transistor. In reality, Q1 is being used in this circuit as nothing more than a back-to-back pair of diodes. The following schematic shows the real function of Q1:

The purpose of these diodes is to “steer” current to or away from the base of transistor Q2, depending on the logic level of the input. Exactly how these two diodes are able to “steer” current isn’t obvious at first inspection, so a short example may be necessary for understanding. Suppose we had the following diode/resistor circuit, representing the base-emitter junctions of transistors Q2 and Q4 as single diodes, stripping away all other portions of the circuit so that we can concentrate on the current “steered” through the two back-to-back diodes:

With the input switch in the “up” position (connected to Vcc), it should be obvious that there will be no current through the left steering diode of Q1, because there isn’t any voltage in the switch-diode-R1-switch loop to motivate electrons to flow. However, there will be current through the right steering diode of Q1, as well as through Q2’s base-emitter diode junction and Q4’s base-emitter diode junction:

This tells us that in the real gate circuit, transistors Q2 and Q4 will have base current, which will turn them on to conduct collector current. The total voltage dropped between the base of Q1 (the node joining the two back-to-back steering diodes) and ground will be about 2.1 volts, equal to the combined voltage drops of three PN junctions: the right steering diode, Q2’s base-emitter diode, and Q4’s base-emitter diode.

Now, let’s move the input switch to the “down” position and see what happens:

If we were to measure current in this circuit, we would find that all of the current goes through the left steering diode of Q1 and none of it through the right diode. Why is this? It still appears as though there is a complete path for current through Q4’s diode, Q2’s diode, the right diode of the pair, and R1, so why will there be no current through that path?
Remember that PN junction diodes are very nonlinear devices: they do not even begin to conduct current until the forward voltage applied across them reaches a certain minimum quantity, approximately 0.7 volts for silicon and 0.3 volts for germanium. And then when they begin to conduct current, they will not drop substantially more than 0.7 volts. When the switch in this circuit is in the “down” position, the left diode of the steering diode pair is fully conducting, and so it drops about 0.7 volts across it and no more.

Recall that with the switch in the “up” position (transistors Q2 and Q4 conducting), there were about 2.1 volts dropped between those same two points (Q1’s base and ground), which also happens to be the minimum voltage necessary to forward-bias three series-connected silicon PN junctions into a state of conduction. The 0.7 volts provided by the left diode’s forward voltage drop is simply insufficient to allow any electron flow through the series string of the right diode, Q2’s diode, and the R3//Q4 diode parallel subcircuit, and so no electrons flow through that path. With no current through the bases of either transistor Q2 or Q4, neither one will be able to conduct collector current: transistors Q2 and Q4 will both be in a state of cutoff.

Consequently, this circuit configuration allows 100 percent switching of Q2 base current (and therefore control over the rest of the gate circuit, including voltage at the output) by diversion of current through the left steering diode.

In the case of our example gate circuit, the input is held “high” by the switch (connected to Vcc), leaving the left steering diode nonconducting (zero voltage dropped across it). However, the right steering diode is conducting current through the base of Q2, through resistor R1:

With base current provided, transistor Q2 will be turned “on.” More specifically, it will be saturated by virtue of the more-than-adequate current allowed by R1 through the base. With Q2 saturated, resistor R3 will be dropping enough voltage to forward-bias the base-emitter junction of transistor Q4, thus saturating it as well:

With Q4 saturated, the output terminal will be almost directly shorted to ground, leaving the output terminal at a voltage (in reference to ground) of almost 0 volts, or a binary “0” (“low”) logic level. Due to the presence of diode D2, there will not be enough voltage between the base of Q3 and its emitter to turn it on, so it remains in cutoff.

Let’s see now what happens if we reverse the input’s logic level to a binary “0” by actuating the input switch:

Now there will be current through the left steering diode of Q1 and no current through the right steering diode. This eliminates current through the base of Q2, thus turning it off. With Q2 off, there is no longer a path for Q4 base current, so Q4 goes into cutoff as well. Q3, on the other hand, now has sufficient voltage dropped between its base and ground to forward-bias its base-emitter junction and saturate it, thus raising the output terminal voltage to a “high” state. In actuality, the output voltage will be somewhere around 4 volts depending on the degree of saturation and any load current, but still high enough to be considered a “high” (1) logic level. With this, our simulation of the inverter circuit is complete: a “1” in gives a “0” out, and vice versa.

The astute observer will note that this inverter circuit’s input will assume a “high” state if left floating (not connected to either Vcc or ground).
With the input terminal left unconnected, there will be no current through the left steering diode of Q1, leaving all of R1’s current to go through Q2’s base, thus saturating Q2 and driving the circuit output to a “low” state:

The tendency for such a circuit to assume a high input state if left floating is one shared by all gate circuits based on this type of design, known as Transistor-to-Transistor Logic, or TTL. This characteristic may be taken advantage of in simplifying the design of a gate’s output circuitry, knowing that the outputs of gates typically drive the inputs of other gates. If the input of a TTL gate circuit assumes a high state when floating, then the output of any gate driving a TTL input need only provide a path to ground for a low state and be floating for a high state. This concept may require further elaboration for full understanding, so I will explore it in detail here.

A gate circuit as we have just analyzed has the ability to handle output current in two directions: in and out. Technically, this is known as sourcing and sinking current, respectively. When the gate output is high, there is continuity from the output terminal to Vcc through the top output transistor (Q3), allowing electrons to flow from ground, through a load, into the gate’s output terminal, through the emitter of Q3, and eventually up to the Vcc power terminal (positive side of the DC power supply):

To simplify this concept, we may show the output of a gate circuit as being a double-throw switch, capable of connecting the output terminal either to Vcc or ground, depending on its state. For a gate outputting a “high” logic level, the combination of Q3 saturated and Q4 cutoff is analogous to a double-throw switch in the “Vcc” position, providing a path for current through a grounded load:

Please note that this two-position switch shown inside the gate symbol is representative of transistors Q3 and Q4 alternately connecting the output terminal to Vcc or ground, not of the switch previously shown sending an input signal to the gate!

Conversely, when a gate circuit is outputting a “low” logic level to a load, it is analogous to the double-throw switch being set in the “ground” position. Current will then be going the other way if the load resistance connects to Vcc: from ground, through the emitter of Q4, out the output terminal, through the load resistance, and back to Vcc. In this condition, the gate is said to be sinking current:

The combination of Q3 and Q4 working as a “push-pull” transistor pair (otherwise known as a totem pole output) has the ability to either source current (draw in current to Vcc) or sink current (output current from ground) to a load. However, a standard TTL gate input never needs current to be sourced, only sunk. That is, since a TTL gate input naturally assumes a high state if left floating, any gate output driving a TTL input need only sink current to provide a “0” or “low” input, and need not source current to provide a “1” or a “high” logic level at the input of the receiving gate:

This means we have the option of simplifying the output stage of a gate circuit so as to eliminate Q3 altogether. The result is known as an open-collector output:

To designate open-collector output circuitry within a standard gate symbol, a special marker is used.
Shown here is the symbol for an inverter gate with open-collector output:

Please keep in mind that the “high” default condition of a floating gate input is only true for TTL circuitry, and not necessarily for other types, especially for logic gates constructed of field-effect transistors.

Review
• An inverter, or NOT, gate is one that outputs the opposite state as what is input. That is, a “low” input (0) gives a “high” output (1), and vice versa.
• Gate circuits constructed of resistors, diodes and bipolar transistors as illustrated in this section are called TTL. TTL is an acronym standing for Transistor-to-Transistor Logic. There are other design methodologies used in gate circuits, some of which use field-effect transistors rather than bipolar transistors.
• A gate is said to be sourcing current when it provides a path for current between the output terminal and the positive side of the DC power supply (Vcc). In other words, it is connecting the output terminal to the power source (+V).
• A gate is said to be sinking current when it provides a path for current between the output terminal and ground. In other words, it is grounding (sinking) the output terminal.
• Gate circuits with totem pole output stages are able to both source and sink current. Gate circuits with open-collector output stages are only able to sink current, and not source current. Open-collector gates are practical when used to drive TTL gate inputs because TTL inputs don’t require current sourcing.
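The relationship between floating TTL inputs and sink-only outputs can be captured in a toy behavioral model. The following Python sketch is my own simplification (the FLOAT and GROUND names are illustrative, not standard terminology): a TTL input reads “high” unless something pulls it to ground, so an open-collector output, which can only sink or float, is sufficient to drive it:

    FLOAT, GROUND = "float", "ground"  # the two states an open-collector output can take

    def ttl_input(drive):
        """A TTL input reads high (1) when floating, low (0) when pulled to ground."""
        return 0 if drive == GROUND else 1

    def open_collector(bit):
        """An open-collector output sinks current for a 0 and floats for a 1."""
        return GROUND if bit == 0 else FLOAT

    for bit in (0, 1):
        print(bit, "->", open_collector(bit), "-> reads", ttl_input(open_collector(bit)))

Running the loop shows a 0 passed along as a solid ground connection and a 1 passed along as nothing at all, which the receiving TTL input interprets as “high.”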
3.03: The “Buffer” Gate
If we were to connect two inverter gates together so that the output of one fed into the input of another, the two inversion functions would “cancel” each other out so that there would be no inversion from input to final output:

While this may seem like a pointless thing to do, it does have practical application. Remember that gate circuits are signal amplifiers, regardless of what logic function they may perform. A weak signal source (one that is not capable of sourcing or sinking very much current to a load) may be boosted by means of two inverters like the pair shown in the previous illustration. The logic level is unchanged, but the full current-sourcing or -sinking capabilities of the final inverter are available to drive a load resistance if needed.

For this purpose, a special logic gate called a buffer is manufactured to perform the same function as two inverters. Its symbol is simply a triangle, with no inverting “bubble” on the output terminal:

The internal schematic diagram for a typical open-collector buffer is not much different from that of a simple inverter: only one more common-emitter transistor stage is added to re-invert the output signal. Let’s analyze this circuit for two conditions: an input logic level of “1” and an input logic level of “0.” First, a “high” (1) input:

As before with the inverter circuit, the “high” input causes no conduction through the left steering diode of Q1 (emitter-to-base PN junction). All of R1’s current goes through the base of transistor Q2, saturating it:

Having Q2 saturated causes Q3 to be saturated as well, resulting in very little voltage dropped between the base and emitter of the final output transistor Q4. Thus, Q4 will be in cutoff mode, conducting no current. The output terminal will be floating (neither connected to ground nor Vcc), and this will be equivalent to a “high” state on the input of the next TTL gate that this one feeds into. Thus, a “high” input gives a “high” output.

With a “low” input signal (input terminal grounded), the analysis looks something like this:

All of R1’s current is now diverted through the input switch, thus eliminating base current through Q2. This forces transistor Q2 into cutoff so that no base current goes through Q3 either. With Q3 cutoff as well, Q4 will be saturated by the current through resistor R4, thus connecting the output terminal to ground, making it a “low” logic level. Thus, a “low” input gives a “low” output.

The schematic diagram for a buffer circuit with totem pole output transistors is a bit more complex, but the basic principles, and certainly the truth table, are the same as for the open-collector circuit:

Review
• Two inverter, or NOT, gates connected in “series” so as to invert, then re-invert, a binary bit perform the function of a buffer. Buffer gates merely serve the purpose of signal amplification: taking a “weak” signal source that isn’t capable of sourcing or sinking much current, and boosting the current capacity of the signal so as to be able to drive a load.
• Buffer circuits are symbolized by a triangle symbol with no inverter “bubble.”
• Buffers, like inverters, may be made in open-collector output or totem pole output forms.
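In logic terms the buffer is trivial: it is two inversions in a row. A few lines of Python (my own illustration) make the point that the level is unchanged even though, in hardware, the signal has been re-amplified twice:

    def inverter(a):
        """NOT gate: 0 becomes 1, 1 becomes 0."""
        return 1 - a

    def buffer(a):
        """A buffer behaves like two cascaded inverters: the level is unchanged."""
        return inverter(inverter(a))

    for a in (0, 1):
        print(a, "->", buffer(a))  # 0 -> 0, 1 -> 1

The payoff is entirely electrical rather than logical: the output stage of the second inverter does the heavy lifting of sourcing or sinking load current.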
3.04: Multiple-input Gates
The Use of Logic Gates

Inverters and buffers exhaust the possibilities for single-input gate circuits. What more can be done with a single logic signal but to buffer it or invert it? To explore more logic gate possibilities, we must add more input terminals to the circuit(s).

Adding more input terminals to a logic gate increases the number of input state possibilities. With a single-input gate such as the inverter or buffer, there can only be two possible input states: either the input is “high” (1) or it is “low” (0). As was mentioned previously in this chapter, a two-input gate has four possibilities (00, 01, 10, and 11). A three-input gate has eight possibilities (000, 001, 010, 011, 100, 101, 110, and 111) for input states. The number of possible input states is equal to two to the power of the number of inputs:

This increase in the number of possible input states obviously allows for more complex gate behavior. Now, instead of merely inverting or amplifying (buffering) a single “high” or “low” logic level, the output of the gate will be determined by whatever combination of 1’s and 0’s is present at the input terminals.

Since so many combinations are possible with just a few input terminals, there are many different types of multiple-input gates, unlike single-input gates which can only be inverters or buffers. Each basic gate type will be presented in this section, showing its standard symbol, truth table, and practical operation. The actual TTL circuitry of these different gates will be explored in subsequent sections.

The AND Gate

One of the easiest multiple-input gates to understand is the AND gate, so-called because the output of this gate will be “high” (1) if and only if all inputs (first input and the second input and . . .) are “high” (1). If any input(s) is “low” (0), the output is guaranteed to be in a “low” state as well. In case you might have been wondering, AND gates are made with more than three inputs, but this is less common than the simple two-input variety. A two-input AND gate’s truth table looks like this:

What this truth table means in practical terms is shown in the following sequence of illustrations, with the 2-input AND gate subjected to all possibilities of input logic levels. An LED (Light-Emitting Diode) provides visual indication of the output logic level:

It is only with all inputs raised to “high” logic levels that the AND gate’s output goes “high,” thus energizing the LED for only one out of the four input combination states.

The NAND Gate

A variation on the idea of the AND gate is called the NAND gate. The word “NAND” is a verbal contraction of the words NOT and AND. Essentially, a NAND gate behaves the same as an AND gate with a NOT (inverter) gate connected to the output terminal. To symbolize this output signal inversion, the NAND gate symbol has a bubble on the output line. The truth table for a NAND gate is, as one might expect, exactly opposite that of an AND gate:

As with AND gates, NAND gates are made with more than two inputs. In such cases, the same general principle applies: the output will be “low” (0) if and only if all inputs are “high” (1). If any input is “low” (0), the output will go “high” (1).

The OR Gate

Our next gate to investigate is the OR gate, so-called because the output of this gate will be “high” (1) if any of the inputs (first input or the second input or . . .) are “high” (1). The output of an OR gate goes “low” (0) if and only if all inputs are “low” (0).
A two-input OR gate’s truth table looks like this:

The following sequence of illustrations demonstrates the OR gate’s function, with the 2 inputs experiencing all possible logic levels. An LED (Light-Emitting Diode) provides visual indication of the gate’s output logic level:

A condition of any input being raised to a “high” logic level makes the OR gate’s output go “high,” thus energizing the LED for three out of the four input combination states.

The NOR Gate

As you might have suspected, the NOR gate is an OR gate with its output inverted, just like a NAND gate is an AND gate with an inverted output. NOR gates, like all the other multiple-input gates seen thus far, can be manufactured with more than two inputs. Still, the same logical principle applies: the output goes “low” (0) if any of the inputs are made “high” (1). The output is “high” (1) only when all inputs are “low” (0).

The Negative-AND Gate

A Negative-AND gate functions the same as an AND gate with all its inputs inverted (connected through NOT gates). In keeping with standard gate symbol convention, these inverted inputs are signified by bubbles. Contrary to most people’s first instinct, the logical behavior of a Negative-AND gate is not the same as a NAND gate. Its truth table, actually, is identical to a NOR gate:

The Negative-OR Gate

Following the same pattern, a Negative-OR gate functions the same as an OR gate with all its inputs inverted. In keeping with standard gate symbol convention, these inverted inputs are signified by bubbles. The behavior and truth table of a Negative-OR gate is the same as for a NAND gate:

The Exclusive-OR Gate

The last six gate types are all fairly direct variations on three basic functions: AND, OR, and NOT. The Exclusive-OR gate, however, is something quite different. Exclusive-OR gates output a “high” (1) logic level if the inputs are at different logic levels, either 0 and 1 or 1 and 0. Conversely, they output a “low” (0) logic level if the inputs are at the same logic levels. The Exclusive-OR (sometimes called XOR) gate has both a symbol and a truth table pattern that is unique:

There are equivalent circuits for an Exclusive-OR gate made up of AND, OR, and NOT gates, just as there were for NAND, NOR, and the negative-input gates. A rather direct approach to simulating an Exclusive-OR gate is to start with a regular OR gate, then add additional gates to inhibit the output from going “high” (1) when both inputs are “high” (1):

In this circuit, the final AND gate acts as a buffer for the output of the OR gate whenever the NAND gate’s output is high, which it is for the first three input state combinations (00, 01, and 10). However, when both inputs are “high” (1), the NAND gate outputs a “low” (0) logic level, which forces the final AND gate to produce a “low” (0) output.

Another equivalent circuit for the Exclusive-OR gate uses a strategy of two AND gates with inverters, set up to generate “high” (1) outputs for input conditions 01 and 10. A final OR gate then allows either of the AND gates’ “high” outputs to create a final “high” output:

Exclusive-OR gates are very useful for circuits where two or more binary numbers are to be compared bit-for-bit, and also for error detection (parity check) and code conversion (binary to Gray and vice versa).

The Exclusive-NOR Gate

Finally, our last gate for analysis is the Exclusive-NOR gate, otherwise known as the XNOR gate. It is equivalent to an Exclusive-OR gate with an inverted output.
The truth table for this gate is exactly opposite that of the Exclusive-OR gate:

As indicated by the truth table, the purpose of an Exclusive-NOR gate is to output a “high” (1) logic level whenever both inputs are at the same logic levels (either 00 or 11).

Review
• Rule for an AND gate: output is “high” only if first input and second input are both “high.”
• Rule for an OR gate: output is “high” if input A or input B are “high.”
• Rule for a NAND gate: output is not “high” if both the first input and the second input are “high.”
• Rule for a NOR gate: output is not “high” if either the first input or the second input are “high.”
• A Negative-AND gate behaves like a NOR gate.
• A Negative-OR gate behaves like a NAND gate.
• Rule for an Exclusive-OR gate: output is “high” if the input logic levels are different.
• Rule for an Exclusive-NOR gate: output is “high” if the input logic levels are the same.
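All of the rules in this Review, as well as the two Exclusive-OR equivalent circuits described above, can be verified with a few lines of Python. The function names below are my own shorthand for the gates, not standard identifiers:

    def AND(a, b): return a & b
    def OR(a, b): return a | b
    def NOT(a): return 1 - a
    def NAND(a, b): return NOT(AND(a, b))
    def NOR(a, b): return NOT(OR(a, b))
    def XOR(a, b): return a ^ b

    # First equivalent circuit: OR gate buffered by AND, inhibited by NAND at 11
    def xor_from_or(a, b):
        return AND(OR(a, b), NAND(a, b))

    # Second equivalent circuit: two ANDs with inverters, summed by an OR
    def xor_from_ands(a, b):
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    for a in (0, 1):
        for b in (0, 1):
            assert xor_from_or(a, b) == xor_from_ands(a, b) == XOR(a, b)
            print(a, b, "| XOR:", XOR(a, b), " XNOR:", NOT(XOR(a, b)))

The assert confirms that both equivalent circuits agree with the direct XOR function on all four input rows.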
3.05: TTL NAND and AND Gates
Suppose we altered our basic open-collector inverter circuit, adding a second input terminal just like the first:

This schematic illustrates a real circuit, but it isn’t called a “two-input inverter.” Through analysis, we will discover what this circuit’s logic function is and correspondingly what it should be designated as.

Just as in the case of the inverter and buffer, the “steering” diode cluster marked “Q1” is actually formed like a transistor, even though it isn’t used in any amplifying capacity. Unfortunately, a simple NPN transistor structure is inadequate to simulate the three PN junctions necessary in this diode network, so a different transistor (and symbol) is needed. This transistor has one collector, one base, and two emitters, and in the circuit, it looks like this:

In the single-input (inverter) circuit, grounding the input resulted in an output that assumed the “high” (1) state. In the case of the open-collector output configuration, this “high” state was simply “floating.” Allowing the input to float (or be connected to Vcc) resulted in the output becoming grounded, which is the “low” or 0 state. Thus, a 1 in resulted in a 0 out, and vice versa.

Since this circuit bears so much resemblance to the simple inverter circuit, the only difference being a second input terminal connected in the same way to the base of transistor Q2, we can say that each of the inputs will have the same effect on the output. Namely, if either of the inputs is grounded, transistor Q2 will be forced into a condition of cutoff, thus turning Q3 off and floating the output (output goes “high”). The following series of illustrations shows this for three input states (00, 01, and 10):

In any case where there is a grounded (“low”) input, the output is guaranteed to be floating (“high”). Conversely, the only time the output will ever go “low” is if transistor Q3 turns on, which means transistor Q2 must be turned on (saturated), which means neither input can be diverting R1 current away from the base of Q2. The only condition that will satisfy this requirement is when both inputs are “high” (1):

NAND Gate

Collecting and tabulating these results into a truth table, we see that the pattern matches that of the NAND gate:

In the earlier section on NAND gates, this type of gate was created by taking an AND gate and increasing its complexity by adding an inverter (NOT gate) to the output. However, when we examine this circuit, we see that the NAND function is actually the simplest, most natural mode of operation for this TTL design. To create an AND function using TTL circuitry, we need to increase the complexity of this circuit by adding an inverter stage to the output, just like we had to add an additional transistor stage to the TTL inverter circuit to turn it into a buffer:

AND Gate

The truth table and equivalent gate circuit (an inverted-output NAND gate) are shown here:

Of course, both NAND and AND gate circuits may be designed with totem-pole output stages rather than open-collector. I am opting to show the open-collector versions for the sake of simplicity.

Review
• A TTL NAND gate can be made by taking a TTL inverter circuit and adding another input.
• An AND gate may be created by adding an inverter stage to the output of the NAND gate circuit.

3.06: TTL NOR and OR gates

Let’s examine the following TTL circuit and analyze its operation:

Transistors Q1 and Q2 are both arranged in the same manner that we’ve seen for transistor Q1 in all the other TTL circuits.
Rather than functioning as amplifiers, Q1 and Q2 are both being used as two-diode “steering” networks. We may replace Q1 and Q2 with diode sets to help illustrate:

If input A is left floating (or connected to Vcc), current will go through the base of transistor Q3, saturating it. If input A is grounded, that current is diverted away from Q3’s base through the left steering diode of “Q1,” thus forcing Q3 into cutoff. The same can be said for input B and transistor Q4: the logic level of input B determines Q4’s conduction: either saturated or cutoff.

Notice how transistors Q3 and Q4 are paralleled at their collector and emitter terminals. In essence, these two transistors are acting as paralleled switches, allowing current through resistors R3 and R4 according to the logic levels of inputs A and B. If any input is at a “high” (1) level, then at least one of the two transistors (Q3 and/or Q4) will be saturated, allowing current through resistors R3 and R4, and turning on the final output transistor Q5 for a “low” (0) logic level output. The only way the output of this circuit can ever assume a “high” (1) state is if both Q3 and Q4 are cut off, which means both inputs would have to be grounded, or “low” (0).

This circuit’s truth table, then, is equivalent to that of the NOR gate:

In order to turn this NOR gate circuit into an OR gate, we would have to invert the output logic level with another transistor stage, just like we did with the NAND-to-AND gate example:

The truth table and equivalent gate circuit (an inverted-output NOR gate) are shown here:

Of course, totem-pole output stages are also possible in both NOR and OR TTL logic circuits.

Review
• An OR gate may be created by adding an inverter stage to the output of the NOR gate circuit.
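Both TTL circuits just analyzed reduce to simple rules once the floating-input behavior is accounted for. The Python sketch below is my own behavioral summary (None stands in for a floating input), not a transistor-level simulation:

    def ttl_nand(a, b):
        """Two-input TTL NAND: output is low only when both inputs are high.
        A floating input (None) acts as a high, since nothing diverts base current."""
        a = 1 if a is None else a
        b = 1 if b is None else b
        return 0 if a == 1 and b == 1 else 1

    def ttl_nor(a, b):
        """Two-input TTL NOR: output is low when either input is high."""
        a = 1 if a is None else a
        b = 1 if b is None else b
        return 0 if a == 1 or b == 1 else 1

    for inputs in ((0, 0), (0, 1), (1, 0), (1, 1), (None, 0)):
        print(inputs, "NAND:", ttl_nand(*inputs), " NOR:", ttl_nor(*inputs))

The last row, with one input floating, shows both gates responding exactly as if that input were tied to Vcc.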
3.07: CMOS Gate Circuitry
Up until this point, our analysis of transistor logic circuits has been limited to the TTL design paradigm, whereby bipolar transistors are used, and the general strategy of floating inputs being equivalent to “high” (connected to Vcc) inputs—and correspondingly, the allowance of “open-collector” output stages—is maintained. This, however, is not the only way we can build logic gates.

Field-Effect Transistors

Field-effect transistors, particularly the insulated-gate variety, may be used in the design of gate circuits. Being voltage-controlled rather than current-controlled devices, IGFETs tend to allow very simple circuit designs. Take, for instance, the following inverter circuit built using P- and N-channel IGFETs:

Notice the “Vdd” label on the positive power supply terminal. This label follows the same convention as “Vcc” in TTL circuits: it stands for the constant voltage applied to the drain of a field effect transistor, in reference to ground.

Field Effect Transistors in Gate Circuits

Let’s connect this gate circuit to a power source and input switch, and examine its operation. Please note that these IGFET transistors are E-type (Enhancement-mode), and so are normally-off devices. It takes an applied voltage between gate and drain (actually, between gate and substrate) of the correct polarity to bias them on.

The upper transistor is a P-channel IGFET. When the channel (substrate) is made more positive than the gate (gate negative in reference to the substrate), the channel is enhanced and current is allowed between source and drain. So, in the above illustration, the top transistor is turned on.

The lower transistor, having zero voltage between gate and substrate (source), is in its normal mode: off. Thus, the action of these two transistors is such that the output terminal of the gate circuit has a solid connection to Vdd and a very high resistance connection to ground. This makes the output “high” (1) for the “low” (0) state of the input.

Next, we’ll move the input switch to its other position and see what happens:

Now the lower transistor (N-channel) is saturated because it has sufficient voltage of the correct polarity applied between gate and substrate (channel) to turn it on (positive on gate, negative on the channel). The upper transistor, having zero voltage applied between its gate and substrate, is in its normal mode: off. Thus, the output of this gate circuit is now “low” (0). Clearly, this circuit exhibits the behavior of an inverter, or NOT gate.

Complementary Metal Oxide Semiconductors (CMOS)

Using field-effect transistors instead of bipolar transistors has greatly simplified the design of the inverter gate. Note that the output of this gate never floats as is the case with the simplest TTL circuit: it has a natural “totem-pole” configuration, capable of both sourcing and sinking load current. Key to this gate circuit’s elegant design is the complementary use of both P- and N-channel IGFETs. Since IGFETs are more commonly known as MOSFETs (Metal-Oxide-Semiconductor Field Effect Transistor), and this circuit uses both P- and N-channel transistors together, the general classification given to gate circuits like this one is CMOS: Complementary Metal Oxide Semiconductor.

CMOS Gates: Challenges and Solutions

CMOS circuits aren’t plagued by the inherent nonlinearities of the field-effect transistors, because as digital circuits their transistors always operate in either the saturated or cutoff modes and never in the active mode.
Their inputs are, however, sensitive to high voltages generated by electrostatic (static electricity) sources, and may even be activated into “high” (1) or “low” (0) states by spurious voltage sources if left floating. For this reason, it is inadvisable to allow a CMOS logic gate input to float under any circumstances. Please note that this is very different from the behavior of a TTL gate where a floating input was safely interpreted as a “high” (1) logic level.

This may cause a problem if the input to a CMOS logic gate is driven by a single-throw switch, where one state has the input solidly connected to either Vdd or ground and the other state has the input floating (not connected to anything):

Also, this problem arises if a CMOS gate input is being driven by an open-collector TTL gate. Because such a TTL gate’s output floats when it goes “high” (1), the CMOS gate input will be left in an uncertain state:

Fortunately, there is an easy solution to this dilemma, one that is used frequently in CMOS logic circuitry. Whenever a single-throw switch (or any other sort of gate output incapable of both sourcing and sinking current) is being used to drive a CMOS input, a resistor connected to either Vdd or ground may be used to provide a stable logic level for the state in which the driving device’s output is floating. This resistor’s value is not critical: 10 kΩ is usually sufficient. When used to provide a “high” (1) logic level in the event of a floating signal source, this resistor is known as a pullup resistor:

When such a resistor is used to provide a “low” (0) logic level in the event of a floating signal source, it is known as a pulldown resistor. Again, the value for a pulldown resistor is not critical:

Because open-collector TTL outputs always sink, never source, current, pullup resistors are necessary when interfacing such an output to a CMOS gate input:

Although the CMOS gates used in the preceding examples were all inverters (single-input), the same principle of pullup and pulldown resistors applies to multiple-input CMOS gates. Of course, a separate pullup or pulldown resistor will be required for each gate input:

This brings us to the next question: how do we design multiple-input CMOS gates such as AND, NAND, OR, and NOR? Not surprisingly, the answer(s) to this question reveal a simplicity of design much like that of the CMOS inverter over its TTL equivalent.

CMOS NAND Gates

For example, here is the schematic diagram for a CMOS NAND gate:

Notice how transistors Q1 and Q3 resemble the series-connected complementary pair from the inverter circuit. Both are controlled by the same input signal (input A), the upper transistor turning off and the lower transistor turning on when the input is “high” (1), and vice versa. Notice also how transistors Q2 and Q4 are similarly controlled by the same input signal (input B), and how they will also exhibit the same on/off behavior for the same input logic levels. The upper transistors of both pairs (Q1 and Q2) have their source and drain terminals paralleled, while the lower transistors (Q3 and Q4) are series-connected. What this means is that the output will go “high” (1) if either top transistor saturates, and will go “low” (0) only if both lower transistors saturate. The following sequence of illustrations shows the behavior of this NAND gate for all four possibilities of input logic levels (00, 01, 10, and 11):

As with the TTL NAND gate, the CMOS NAND gate circuit may be used as the starting point for the creation of an AND gate.
All that needs to be added is another stage of transistors to invert the output signal:

CMOS NOR Gates

A CMOS NOR gate circuit uses four MOSFETs just like the NAND gate, except that its transistors are differently arranged. Instead of two paralleled sourcing (upper) transistors connected to Vdd and two series-connected sinking (lower) transistors connected to ground, the NOR gate uses two series-connected sourcing transistors and two parallel-connected sinking transistors like this:

As with the NAND gate, transistors Q1 and Q3 work as a complementary pair, as do transistors Q2 and Q4. Each pair is controlled by a single input signal. If either input A or input B are “high” (1), at least one of the lower transistors (Q3 or Q4) will be saturated, thus making the output “low” (0). Only in the event of both inputs being “low” (0) will both lower transistors be in cutoff mode and both upper transistors be saturated, the conditions necessary for the output to go “high” (1). This behavior, of course, defines the NOR logic function.

CMOS OR Gates

The OR function may be built up from the basic NOR gate with the addition of an inverter stage on the output:

TTL vs. CMOS: Advantages and Disadvantages

Since it appears that any gate possible to construct using TTL technology can be duplicated in CMOS, why do these two “families” of logic design still coexist? The answer is that both TTL and CMOS have their own unique advantages.

First and foremost on the list of comparisons between TTL and CMOS is the issue of power consumption. In this measure of performance, CMOS is the unchallenged victor. Because the complementary P- and N-channel MOSFET pairs of a CMOS gate circuit are (ideally) never conducting at the same time, there is little or no current drawn by the circuit from the Vdd power supply except for what is necessary to source current to a load. TTL, on the other hand, cannot function without some current drawn at all times, due to the biasing requirements of the bipolar transistors from which it is made.

There is a caveat to this advantage, though. While the power dissipation of a TTL gate remains rather constant regardless of its operating state(s), a CMOS gate dissipates more power as the frequency of its input signal(s) rises. If a CMOS gate is operated in a static (unchanging) condition, it dissipates zero power (ideally). However, CMOS gate circuits draw transient current during every output state switch from “low” to “high” and vice versa. So, the more often a CMOS gate switches modes, the more often it will draw current from the Vdd supply, hence greater power dissipation at greater frequencies.

A CMOS gate also draws much less current from a driving gate output than a TTL gate because MOSFETs are voltage-controlled, not current-controlled, devices. This means that one gate can drive many more CMOS inputs than TTL inputs. The measure of how many gate inputs a single gate output can drive is called fanout.

Another advantage that CMOS gate designs enjoy over TTL is a much wider allowable range of power supply voltages. Whereas TTL gates are restricted to power supply (Vcc) voltages between 4.75 and 5.25 volts, CMOS gates are typically able to operate on any voltage between 3 and 15 volts! The reason behind this disparity in power supply voltages is the respective bias requirements of MOSFET versus bipolar junction transistors. MOSFETs are controlled exclusively by gate voltage (with respect to substrate), whereas BJTs are current-controlled devices.
TTL gate circuit resistances are precisely calculated for proper bias currents assuming a 5 volt regulated power supply. Any significant variations in that power supply voltage will result in the transistor bias currents being incorrect, which then results in unreliable (unpredictable) operation. The only effect that variations in power supply voltage have on a CMOS gate is the voltage definition of a "high" (1) state. For a CMOS gate operating at 15 volts of power supply voltage (Vdd), an input signal must be close to 15 volts in order to be considered "high" (1). The voltage threshold for a "low" (0) signal remains the same: near 0 volts. One decided disadvantage of CMOS is slow speed, as compared to TTL. The input capacitances of a CMOS gate are much, much greater than those of a comparable TTL gate (owing to the use of MOSFETs rather than BJTs), and so a CMOS gate will be slower to respond to a signal transition (low-to-high or vice versa) than a TTL gate, all other factors being equal. The RC time constant formed by circuit resistances and the input capacitance of the gate tends to impede the fast rise- and fall-times of a digital logic level, thereby degrading high-frequency performance. A strategy for minimizing this inherent disadvantage of CMOS gate circuitry is to "buffer" the output signal with additional transistor stages, to increase the overall voltage gain of the device. This provides a faster-transitioning output voltage (high-to-low or low-to-high) for an input voltage slowly changing from one logic state to another. Consider this example of an "unbuffered" NOR gate versus a "buffered," or B-series, NOR gate: In essence, the B-series design enhancement adds two inverters to the output of a simple NOR circuit. This serves no purpose as far as digital logic is concerned, since two cascaded inverters simply cancel: However, adding these inverter stages to the circuit does serve the purpose of increasing overall voltage gain, making the output more sensitive to changes in input state, working to overcome the inherent slowness caused by CMOS gate input capacitance. Review • CMOS logic gates are made of IGFET (MOSFET) transistors rather than bipolar junction transistors. • CMOS gate inputs are sensitive to static electricity. They may be damaged by high voltages, and they may assume any logic level if left floating. • Pullup and pulldown resistors are used to prevent a CMOS gate input from floating if being driven by a signal source capable only of sourcing or sinking current. • CMOS gates dissipate far less power than equivalent TTL gates, but their power dissipation increases with signal frequency, whereas the power dissipation of a TTL gate is approximately constant over a wide range of operating conditions. • CMOS gate inputs draw far less current than TTL inputs, because MOSFETs are voltage-controlled, not current-controlled, devices. • CMOS gates are able to operate on a much wider range of power supply voltages than TTL: typically 3 to 15 volts versus 4.75 to 5.25 volts for TTL. • CMOS gates tend to have a much lower maximum operating frequency than TTL gates due to input capacitances caused by the MOSFET gates. • B-series CMOS gates have "buffered" outputs to increase voltage gain from input to output, resulting in faster output response to input signal changes. This helps overcome the inherent slowness of CMOS gates due to MOSFET input capacitance and the RC time constant thereby engendered.
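To attach a rough number to the RC speed limitation discussed in this section, consider the following minimal sketch. The source resistance and input capacitance are assumed values chosen purely for illustration:

import math

# RC rise of a gate input: v(t) = V * (1 - e^(-t/RC))
R = 1e3     # driving source resistance, ohms (assumed)
C = 5e-12   # CMOS gate input capacitance, farads (assumed)
V = 5.0     # logic "high" voltage
tau = R * C
print(f"time constant = {tau * 1e9:.1f} ns")

# Time for the input to rise from 0 V to a 3.5 V "high" threshold:
t = -tau * math.log(1 - 3.5 / V)
print(f"time to reach 3.5 V threshold = {t * 1e9:.2f} ns")

Doubling either the resistance or the capacitance doubles this delay, which is why a heavily loaded or weakly driven CMOS input responds sluggishly.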
It is sometimes desirable to have a logic gate that provides both inverted and non-inverted outputs. For example, a single-input gate that is both a buffer and an inverter, with a separate output terminal for each function. Or, a two-input gate that provides both the AND and the NAND functions in a single circuit. Such gates do exist, and they are referred to as complementary output gates. The general symbology for such a gate is the basic gate figure with a bar and two output lines protruding from it. An array of complementary gate symbols is shown in the following illustration: Complementary gates are especially useful in "crowded" circuits where there may not be enough physical room to mount the additional integrated circuit chips necessary to provide both inverted and noninverted outputs using standard gates and additional inverters. They are also useful in applications where a complementary output is necessary from a gate, but the addition of an inverter would introduce an unwanted time lag in the inverted output relative to the noninverted output. The internal circuitry of complementary gates is such that both inverted and noninverted outputs change state at almost exactly the same time: Another type of special gate output is called tristate, because it has the ability to provide three different output modes: current sinking ("low" logic level), current sourcing ("high"), and floating ("high-Z," or high-impedance). Tristate outputs are usually found as an optional feature on buffer gates. Such gates require an extra input terminal to control the "high-Z" mode, and this input is usually called the enable. With the enable input held "high" (1), the buffer acts like an ordinary buffer with a totem pole output stage: it is capable of both sourcing and sinking current. However, the output terminal floats (goes into "high-Z" mode) if ever the enable input is grounded ("low"), regardless of the data signal's logic level. In other words, making the enable input terminal "low" (0) effectively disconnects the gate from whatever its output is wired to so that it can no longer have any effect. Tristate buffers are marked in schematic diagrams by a triangle character within the gate symbol like this: Tristate buffers are also made with inverted enable inputs. Such a gate acts normally when the enable input is "low" (0) and goes into high-Z output mode when the enable input is "high" (1): One special type of gate known as the bilateral switch uses gate-controlled MOSFET transistors acting as on/off switches to switch electrical signals, analog or digital. The "on" resistance of such a switch is in the range of several hundred ohms, the "off" resistance being in the range of several hundred megaohms. Bilateral switches appear in schematics as SPST (Single-Pole, Single-Throw) switches inside of rectangular boxes, with a control terminal on one of the box's long sides: A bilateral switch might be best envisioned as a solid-state (semiconductor) version of an electromechanical relay: a signal-actuated switch contact that may be used to conduct virtually any type of electric signal. Of course, being solid-state, the bilateral switch has none of the undesirable characteristics of electromechanical relays, such as contact "bouncing," arcing, slow speed, or susceptibility to mechanical vibration. Conversely, though, they are rather limited in their current-carrying ability.
Additionally, the signal conducted by the “contact” must not exceed the power supply “rail” voltages powering the bilateral switch circuit. Four bilateral switches are packaged inside the popular model “4066” integrated circuit: Review • Complementary gates provide both inverted and noninverted output signals, in such a way that neither one is delayed with respect to the other. • Tristate gates provide three different output states: high, low, and floating (High-Z). Such gates are commanded into their high-impedance output modes by a separate input terminal called the enable. • Bilateral switches are MOSFET circuits providing on/off switching for a variety of electrical signal types (analog and digital), controlled by logic level voltage signals. In essence, they are solid-state relays with very low current-handling ability.
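The tristate behavior summarized above is easy to model in software. In this minimal Python sketch (a hypothetical illustration, not any manufacturer's specification), "Z" stands for the high-impedance state:

# Model of a tristate buffer with an active-high enable input.
def tristate_buffer(data, enable):
    # With enable high, behave as an ordinary buffer;
    # with enable low, float the output (high-impedance, "Z").
    return data if enable == 1 else "Z"

# Inverted-enable version: passes data only when enable is low.
def tristate_buffer_inv_enable(data, enable):
    return data if enable == 0 else "Z"

for data in (0, 1):
    for enable in (0, 1):
        print(f"data={data} enable={enable} -> {tristate_buffer(data, enable)}")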
NAND and NOR gates possess a special property: they are universal. That is, given enough gates, either type of gate is able to mimic the operation of any other gate type. For example, it is possible to build a circuit exhibiting the OR function using three interconnected NAND gates. The ability of a single gate type to mimic any other gate type is enjoyed only by the NAND and the NOR. In fact, digital control systems have been designed around nothing but either NAND or NOR gates, all the necessary logic functions being derived from collections of interconnected NANDs or NORs. As proof of this property, this section will be divided into subsections showing how all the basic gate types may be formed using only NANDs or only NORs. Constructing the NOT function As you can see, there are two ways to use a NAND gate as an inverter, and two ways to use a NOR gate as an inverter. Either method works, although connecting TTL inputs together increases the amount of current loading to the driving gate. For CMOS gates, common input terminals decrease the switching speed of the gate due to increased input capacitance. Inverters are the fundamental tool for transforming one type of logic function into another, and so there will be many inverters shown in the illustrations to follow. In those diagrams, I will only show one method of inversion, and that will be where the unused NAND gate input is connected to +V (either Vcc or Vdd, depending on whether the circuit is TTL or CMOS) and where the unused input for the NOR gate is connected to ground. Bear in mind that the other inversion method (connecting both NAND or NOR inputs together) works just as well from a logical (1's and 0's) point of view, but is undesirable from the practical perspectives of increased current loading for TTL and increased input capacitance for CMOS. Constructing the "buffer" function Being that it is quite easy to employ NAND and NOR gates to perform the inverter (NOT) function, it stands to reason that two such stages of gates will result in a buffer function, where the output is the same logical state as the input. Constructing the AND function To make the AND function from NAND gates, all that is needed is an inverter (NOT) stage on the output of a NAND gate. This extra inversion "cancels out" the first N in NAND, leaving the AND function. It takes a little more work to wrestle the same functionality out of NOR gates, but it can be done by inverting ("NOT") all of the inputs to a NOR gate. Constructing the NAND function It would be pointless to show you how to "construct" the NAND function using a NAND gate, since there is nothing to do. To make a NOR gate perform the NAND function, we must invert all inputs to the NOR gate as well as the NOR gate's output. For a two-input gate, this requires three more NOR gates connected as inverters. Constructing the OR function Inverting the output of a NOR gate (with another NOR gate connected as an inverter) results in the OR function. The NAND gate, on the other hand, requires inversion of all inputs to mimic the OR function, just as we needed to invert all inputs of a NOR gate to obtain the AND function. Remember that inversion of all inputs to a gate results in changing that gate's essential function from AND to OR (or vice versa), plus an inverted output. Thus, with all inputs inverted, a NAND behaves as an OR, a NOR behaves as an AND, an AND behaves as a NOR, and an OR behaves as a NAND.
In Boolean algebra, this transformation is referred to as DeMorgan’s Theorem, covered in more detail in a later chapter of this book. Constructing the NOR function Much the same as the procedure for making a NOR gate behave as a NAND, we must invert all inputs and the output to make a NAND gate function as a NOR. Review • NAND and NOR gates are universal: that is, they have the ability to mimic any type of gate, if interconnected in sufficient numbers.
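The universality claim reviewed above can be verified by brute force. The following Python sketch builds NOT, AND, and OR from a single NAND function, mirroring the gate constructions in this section, and checks every input combination:

# Demonstrating NAND universality: NOT, AND, and OR from NAND alone.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):          # one NAND with its inputs tied together
    return nand(a, a)

def and_(a, b):       # NAND followed by an inverter (two NANDs total)
    return not_(nand(a, b))

def or_(a, b):        # invert both inputs, then NAND (three NANDs total)
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
    assert not_(a) == (1 - a)
print("NOT, AND, and OR all verified from NAND alone")

Note that the OR construction uses exactly three NAND gates, just as stated at the beginning of this section.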
Logic gate circuits are designed to input and output only two types of signals: “high” (1) and “low” (0), as represented by a variable voltage: full power supply voltage for a “high” state and zero voltage for a “low” state. In a perfect world, all logic circuit signals would exist at these extreme voltage limits, and never deviate from them (i.e., less than full voltage for a “high,” or more than zero voltage for a “low”). However, in reality, logic signal voltage levels rarely attain these perfect limits due to stray voltage drops in the transistor circuitry, and so we must understand the signal level limitations of gate circuits as they try to interpret signal voltages lying somewhere between full supply voltage and zero. TTL gates operate on a nominal power supply voltage of 5 volts, +/- 0.25 volts. Ideally, a TTL “high” signal would be 5.00 volts exactly, and a TTL “low” signal 0.00 volts exactly. However, real TTL gate circuits cannot output such perfect voltage levels, and are designed to accept “high” and “low” signals deviating substantially from these ideal values. “Acceptable” input signal voltages range from 0 volts to 0.8 volts for a “low” logic state, and 2 volts to 5 volts for a “high” logic state. “Acceptable” output signal voltages (voltage levels guaranteed by the gate manufacturer over a specified range of load conditions) range from 0 volts to 0.5 volts for a “low” logic state, and 2.7 volts to 5 volts for a “high” logic state: If a voltage signal ranging between 0.8 volts and 2 volts were to be sent into the input of a TTL gate, there would be no certain response from the gate. Such a signal would be considered uncertain, and no logic gate manufacturer would guarantee how their gate circuit would interpret such a signal. As you can see, the tolerable ranges for output signal levels are narrower than for input signal levels, to ensure that any TTL gate outputting a digital signal into the input of another TTL gate will transmit voltages acceptable to the receiving gate. The difference between the tolerable output and input ranges is called the noise margin of the gate. For TTL gates, the low-level noise margin is the difference between 0.8 volts and 0.5 volts (0.3 volts), while the high-level noise margin is the difference between 2.7 volts and 2 volts (0.7 volts). Simply put, the noise margin is the peak amount of spurious or “noise” voltage that may be superimposed on a weak gate output voltage signal before the receiving gate might interpret it wrongly: CMOS gate circuits have input and output signal specifications that are quite different from TTL. For a CMOS gate operating at a power supply voltage of 5 volts, the acceptable input signal voltages range from 0 volts to 1.5 volts for a “low” logic state, and 3.5 volts to 5 volts for a “high” logic state. “Acceptable” output signal voltages (voltage levels guaranteed by the gate manufacturer over a specified range of load conditions) range from 0 volts to 0.05 volts for a “low” logic state, and 4.95 volts to 5 volts for a “high” logic state: It should be obvious from these figures that CMOS gate circuits have far greater noise margins than TTL: 1.45 volts for CMOS low-level and high-level margins, versus a maximum of 0.7 volts for TTL. In other words, CMOS circuits can tolerate over twice the amount of superimposed “noise” voltage on their input lines before signal interpretation errors will result. CMOS noise margins widen even further with higher operating voltages. 
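These margin figures follow directly from the definitions: the low-level noise margin is the highest acceptable "low" input (VIL) minus the highest guaranteed "low" output (VOL), and the high-level margin is the lowest guaranteed "high" output (VOH) minus the lowest acceptable "high" input (VIH). A quick computational check of the numbers quoted in this section:

# Noise margins: how much noise a "weak" output can tolerate
# before the receiving input misreads it.
families = {
    # (VOL_max, VIL_max, VIH_min, VOH_min) in volts
    "TTL":        (0.5, 0.8, 2.0, 2.7),
    "CMOS @ 5 V": (0.05, 1.5, 3.5, 4.95),
}

for name, (vol, vil, vih, voh) in families.items():
    low_margin = vil - vol
    high_margin = voh - vih
    print(f"{name}: low margin = {low_margin:.2f} V, "
          f"high margin = {high_margin:.2f} V")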
Unlike TTL, which is restricted to a power supply voltage of 5 volts, CMOS may be powered by voltages as high as 15 volts (some CMOS circuits as high as 18 volts). Shown here are the acceptable "high" and "low" states, for both input and output, of CMOS integrated circuits operating at 10 volts and 15 volts, respectively: The margins for acceptable "high" and "low" signals may be greater than what is shown in the previous illustrations. What is shown represents "worst-case" input signal performance, based on manufacturer's specifications. In practice, it may be found that a gate circuit will tolerate "high" signals of considerably less voltage and "low" signals of considerably greater voltage than those specified here. Conversely, the extremely small output margins shown (guaranteeing output states for "high" and "low" signals to within 0.05 volts of the power supply "rails") are optimistic. Such "solid" output voltage levels will be true only for conditions of minimum loading. If the gate is sourcing or sinking substantial current to a load, the output voltage will not be able to maintain these optimum levels, due to internal channel resistance of the gate's final output MOSFETs. Within the "uncertain" range for any gate input, there will be some point of demarcation dividing the gate's actual "low" input signal range from its actual "high" input signal range. That is, somewhere between the lowest "high" signal voltage level and the highest "low" signal voltage level guaranteed by the gate manufacturer, there is a threshold voltage at which the gate will actually switch its interpretation of a signal from "low" to "high" or vice versa. For most gate circuits, this unspecified voltage is a single point: In the presence of AC "noise" voltage superimposed on the DC input signal, a single threshold point at which the gate alters its interpretation of logic level will result in an erratic output: If this scenario looks familiar to you, it's because you remember a similar problem with (analog) voltage comparator op-amp circuits. With a single threshold point at which an input causes the output to switch between "high" and "low" states, the presence of significant noise will cause erratic changes in the output: The solution to this problem is a bit of positive feedback introduced into the amplifier circuit. With an op-amp, this is done by connecting the output back around to the noninverting (+) input through a resistor. In a gate circuit, this entails redesigning the internal gate circuitry, establishing the feedback inside the gate package rather than through external connections. A gate so designed is called a Schmitt trigger. Schmitt triggers interpret varying input voltages according to two threshold voltages: a positive-going threshold (VT+), and a negative-going threshold (VT-): Schmitt trigger gates are distinguished in schematic diagrams by the small "hysteresis" symbol drawn within them, reminiscent of the B-H curve for a ferromagnetic material. Hysteresis engendered by positive feedback within the gate circuitry adds an additional level of noise immunity to the gate's performance. Schmitt trigger gates are frequently used in applications where noise is expected on the input signal line(s), and/or where an erratic output would be very detrimental to system performance. The differing voltage level requirements of TTL and CMOS technology present problems when the two types of gates are used in the same system.
Although operating CMOS gates on the same 5.00 volt power supply voltage required by the TTL gates is no problem, TTL output voltage levels will not be compatible with CMOS input voltage requirements. Take, for instance, a TTL NAND gate outputting a signal into the input of a CMOS inverter gate. Both gates are powered by the same 5.00 volt supply (Vcc). If the TTL gate outputs a "low" signal (guaranteed to be between 0 volts and 0.5 volts), it will be properly interpreted by the CMOS gate's input as a "low" (expecting a voltage between 0 volts and 1.5 volts): However, if the TTL gate outputs a "high" signal (guaranteed to be between 2.7 volts and 5 volts), it might not be properly interpreted by the CMOS gate's input as a "high" (expecting a voltage between 3.5 volts and 5 volts): Given this mismatch, it is entirely possible for the TTL gate to output a valid "high" signal (valid, that is, according to the standards for TTL) that lies within the "uncertain" range for the CMOS input, and may be (falsely) interpreted as a "low" by the receiving gate. An easy "fix" for this problem is to augment the TTL gate's "high" signal voltage level by means of a pullup resistor: Something more than this, though, is required to interface a TTL output with a CMOS input, if the receiving CMOS gate is powered by a greater power supply voltage: There will be no problem with the CMOS gate interpreting the TTL gate's "low" output, of course, but a "high" signal from the TTL gate is another matter entirely. The guaranteed output voltage range of 2.7 volts to 5 volts from the TTL gate output is nowhere near the CMOS gate's acceptable range of 7 volts to 10 volts for a "high" signal. If we use an open-collector TTL gate instead of a totem-pole output gate, though, a pullup resistor to the 10 volt Vdd supply rail will raise the TTL gate's "high" output voltage to the full power supply voltage supplying the CMOS gate. Since an open-collector gate can only sink current, not source current, the "high" state voltage level is entirely determined by the power supply to which the pullup resistor is attached, thus neatly solving the mismatch problem: Due to the excellent output voltage characteristics of CMOS gates, there is typically no problem connecting a CMOS output to a TTL input. The only significant issue is the current loading presented by the TTL inputs, since the CMOS output must sink current for each of the TTL inputs while in the "low" state. When the CMOS gate in question is powered by a voltage source in excess of 5 volts (Vcc), though, a problem will result. The "high" output state of the CMOS gate, being greater than 5 volts, will exceed the TTL gate's acceptable input limits for a "high" signal. A solution to this problem is to create an "open-collector" inverter circuit using a discrete NPN transistor, and use it to interface the two gates together: The "Rpullup" resistor is optional, since TTL inputs automatically assume a "high" state when left floating, which is what will happen when the CMOS gate output is "low" and the transistor cuts off. Of course, one very important consequence of implementing this solution is the logical inversion created by the transistor: when the CMOS gate outputs a "low" signal, the TTL gate sees a "high" input; and when the CMOS gate outputs a "high" signal, the transistor saturates and the TTL gate sees a "low" input. So long as this inversion is accounted for in the logical scheme of the system, all will be well.
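Boiled down, every interfacing decision in this section is a comparison of guaranteed output levels against required input levels. The following sketch encodes the worst-case 5-volt figures quoted earlier and flags the one incompatibility:

# Worst-case level compatibility check: TTL output driving a CMOS
# input, both on a 5 V supply (figures as quoted in this section).
ttl_voh_min = 2.7   # lowest guaranteed TTL "high" output, volts
ttl_vol_max = 0.5   # highest guaranteed TTL "low" output, volts
cmos_vih_min = 3.5  # lowest acceptable CMOS "high" input, volts
cmos_vil_max = 1.5  # highest acceptable CMOS "low" input, volts

print("low-state compatible: ", ttl_vol_max <= cmos_vil_max)   # True
print("high-state compatible:", ttl_voh_min >= cmos_vih_min)   # False!

The failing high-state check is exactly the mismatch the pullup resistor fixes: with the resistor pulling the TTL output toward Vdd in the "high" state, the voltage seen by the CMOS input rises well above its 3.5-volt threshold.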
3.11: DIP Gate Packaging
Digital logic gate circuits are manufactured as integrated circuits: all the constituent transistors and resistors built on a single piece of semiconductor material. The engineer, technician, or hobbyist using small numbers of gates will likely find what he or she needs enclosed in a DIP (Dual Inline Package) housing. DIP-enclosed integrated circuits are available with even numbers of pins, located at 0.100 inch intervals from each other for standard circuit board layout compatibility. Pin counts of 8, 14, 16, 18, and 24 are common for DIP "chips." Part numbers given to these DIP packages specify what type of gates are enclosed, and how many. These part numbers are industry standards, meaning that a "74LS02" manufactured by Motorola will be identical in function to a "74LS02" manufactured by Fairchild or by any other manufacturer. Letter codes prepended to the part number are unique to the manufacturer, and are not industry-standard codes. For instance, an SN74LS02 is a quad 2-input TTL NOR gate manufactured by Texas Instruments, while a DM74LS02 is the exact same circuit manufactured by National Semiconductor. Logic circuit part numbers beginning with "74" are commercial-grade TTL. If the part number begins with the number "54", the chip is a military-grade unit: having a greater operating temperature range, and typically more robust in regard to allowable power supply and signal voltage levels. The letters "LS" immediately following the 74/54 prefix indicate "Low-power Schottky" circuitry, using Schottky-barrier diodes and transistors throughout, to decrease power dissipation. Standard (non-Schottky) TTL circuits consume considerably more power, yet offer no speed advantage over their LS equivalents: the Schottky diodes prevent the transistors from saturating deeply, allowing fast switching at reduced power. A few of the more common TTL "DIP" circuit packages are shown here for reference:
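The numbering conventions above lend themselves to a toy decoder. This sketch handles only the 74/54 grade prefix and the "LS" family code described here; real part numbers carry many more codes, and the function below is hypothetical, purely for illustration:

# Toy decoder for 74xx/54xx TTL part numbers (illustrative only).
def describe(part):
    grade = {"74": "commercial-grade", "54": "military-grade"}
    family = "Low-power Schottky" if "LS" in part else "standard TTL"
    prefix = part[:2]
    return f"{part}: {grade.get(prefix, 'unknown grade')}, {family}"

print(describe("74LS02"))   # commercial-grade, Low-power Schottky
print(describe("5402"))     # military-grade, standard TTL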
Though it may seem strange to cover the elementary topic of electrical switches at such a late stage in this book series, I do so because the chapters that follow explore an older realm of digital technology based on mechanical switch contacts rather than solid-state gate circuits, and a thorough understanding of switch types is necessary for the undertaking. Learning the function of switch-based circuits at the same time that you learn about solid-state logic gates makes both topics easier to grasp, and sets the stage for an enhanced learning experience in Boolean algebra, the mathematics behind digital logic circuits. What is an Electrical Switch? An electrical switch is any device used to interrupt the flow of electrons in a circuit. Switches are essentially binary devices: they are either completely on ("closed") or completely off ("open"). There are many different types of switches, and we will explore some of these types in this chapter. Learn the Different Types of Switches The simplest type of switch is one where two electrical conductors are brought in contact with each other by the motion of an actuating mechanism. Other switches are more complex, containing electronic circuits able to turn on or off depending on some physical stimulus (such as light or magnetic field) sensed. In any case, the final output of any switch will be (at least) a pair of wire-connection terminals that will either be connected together by the switch's internal contact mechanism ("closed"), or not connected together ("open"). Any switch designed to be operated by a person is generally called a hand switch, and they are manufactured in several varieties: Toggle Switches Toggle switches are actuated by a lever angled in one of two or more positions. The common light switch used in household wiring is an example of a toggle switch. Most toggle switches will come to rest in any of their lever positions, while others have an internal spring mechanism returning the lever to a certain normal position, allowing for what is called "momentary" operation. Pushbutton Switches Pushbutton switches are two-position devices actuated with a button that is pressed and released. Most pushbutton switches have an internal spring mechanism returning the button to its "out," or "unpressed," position, for momentary operation. Some pushbutton switches will latch alternately on or off with every push of the button. Other pushbutton switches will stay in their "in," or "pressed," position until the button is pulled back out. This last type of pushbutton switch usually has a mushroom-shaped button for easy push-pull action. Selector Switches Selector switches are actuated with a rotary knob or lever of some sort to select one of two or more positions. Like the toggle switch, selector switches can either rest in any of their positions or contain spring-return mechanisms for momentary operation. Joystick Switches A joystick switch is actuated by a lever free to move in more than one axis of motion. One or more of several switch contact mechanisms are actuated depending on which way the lever is pushed, and sometimes by how far it is pushed. The circle-and-dot notation on the switch symbol represents the direction of joystick lever motion required to actuate the contact. Joystick hand switches are commonly used for crane and robot control. Some switches are specifically designed to be operated by the motion of a machine rather than by the hand of a human operator.
These motion-operated switches are commonly called limit switches, because they are often used to limit the motion of a machine by turning off the actuating power to a component if it moves too far. As with hand switches, limit switches come in several varieties: Limit Switches These limit switches closely resemble rugged toggle or selector hand switches fitted with a lever pushed by the machine part. Often, the levers are tipped with a small roller bearing, preventing the lever from being worn off by repeated contact with the machine part. Proximity Switches Proximity switches sense the approach of a metallic machine part either by a magnetic or high-frequency electromagnetic field. Simple proximity switches use a permanent magnet to actuate a sealed switch mechanism whenever the machine part gets close (typically 1 inch or less). More complex proximity switches work like a metal detector, energizing a coil of wire with a high-frequency current, and electronically monitoring the magnitude of that current. If a metallic part (not necessarily magnetic) gets close enough to the coil, the current will increase, and trip the monitoring circuit. The symbol shown here for the proximity switch is of the electronic variety, as indicated by the diamond-shaped box surrounding the switch. A non-electronic proximity switch would use the same symbol as the lever-actuated limit switch. Another form of proximity switch is the optical switch, composed of a light source and a photocell. Machine position is detected by either the interruption or reflection of a light beam. Optical switches are also useful in safety applications, where beams of light can be used to detect personnel entry into a dangerous area. The Different Types of Process Switches In many industrial processes, it is necessary to monitor various physical quantities with switches. Such switches can be used to sound alarms, indicating that a process variable has exceeded normal parameters, or they can be used to shut down processes or equipment if those variables have reached dangerous or destructive levels. There are many different types of process switches. Speed Switches These switches sense the rotary speed of a shaft either by a centrifugal weight mechanism mounted on the shaft, or by some kind of non-contact detection of shaft motion such as optical or magnetic. Pressure Switches Gas or liquid pressure can be used to actuate a switch mechanism if that pressure is applied to a piston, diaphragm, or bellows, which converts pressure to mechanical force. Temperature Switches An inexpensive temperature-sensing mechanism is the "bimetallic strip:" a thin strip of two metals, joined back-to-back, each metal having a different rate of thermal expansion. When the strip heats or cools, the differing rates of thermal expansion between the two metals cause it to bend. The bending of the strip can then be used to actuate a switch contact mechanism. Other temperature switches use a brass bulb filled with either a liquid or gas, with a tiny tube connecting the bulb to a pressure-sensing switch. As the bulb is heated, the gas or liquid expands, generating a pressure increase which then actuates the switch mechanism. Liquid Level Switch A floating object can be used to actuate a switch mechanism when the liquid level in a tank rises past a certain point. If the liquid is electrically conductive, the liquid itself can be used as a conductor to bridge between two metal probes inserted into the tank at the required depth.
The conductivity technique is usually implemented with a special design of relay triggered by a small amount of current through the conductive liquid. In most cases it is impractical and dangerous to switch the full load current of the circuit through a liquid. Level switches can also be designed to detect the level of solid materials such as wood chips, grain, coal, or animal feed in a storage silo, bin, or hopper. A common design for this application is a small paddle wheel, inserted into the bin at the desired height, which is slowly turned by a small electric motor. When the solid material fills the bin to that height, the material prevents the paddle wheel from turning. The torque response of the small motor then trips the switch mechanism. Another design uses a "tuning fork" shaped metal prong, inserted into the bin from the outside at the desired height. The fork is vibrated at its resonant frequency by an electronic circuit and magnet/electromagnet coil assembly. When the bin fills to that height, the solid material dampens the vibration of the fork, the change in vibration amplitude and/or frequency being detected by the electronic circuit. Liquid Flow Switch Inserted into a pipe, a flow switch will detect any gas or liquid flow rate in excess of a certain threshold, usually with a small paddle or vane which is pushed by the flow. Other flow switches are constructed as differential pressure switches, measuring the pressure drop across a restriction built into the pipe. Nuclear Level Switch Another type of level switch, suitable for liquid or solid material detection, is the nuclear switch. Composed of a radioactive source material and a radiation detector, the two are mounted across the diameter of a storage vessel for either solid or liquid material. Any height of material beyond the level of the source/detector arrangement will attenuate the strength of radiation reaching the detector. This decrease in radiation at the detector can be used to trigger a relay mechanism to provide a switch contact for measurement, alarm point, or even control of the vessel level. Both source and detector are outside of the vessel, with no intrusion at all except the radiation flux itself. The radioactive sources used are fairly weak and pose no immediate health threat to operations or maintenance personnel. All Switches Have Multiple Applications As usual, there is more than one way to implement a switch to monitor a physical process or serve as an operator control. There is usually no single "perfect" switch for any application, although some obviously exhibit certain advantages over others. Switches must be intelligently matched to the task for efficient and reliable operation. Review • A switch is an electrical device, usually electromechanical, used to control continuity between two points. • Hand switches are actuated by human touch. • Limit switches are actuated by machine motion. • Process switches are actuated by changes in some physical process (temperature, level, flow, etc.).
A switch can be constructed with any mechanism bringing two conductors into contact with each other in a controlled manner. This can be as simple as allowing two copper wires to touch each other by the motion of a lever, or by directly pushing two metal strips into contact. However, a good switch design must be rugged and reliable, and avoid presenting the operator with the possibility of electric shock. Therefore, industrial switch designs are rarely this crude. The conductive parts in a switch used to make and break the electrical connection are called contacts. Contacts are typically made of silver or silver-cadmium alloy, whose conductive properties are not significantly compromised by surface corrosion or oxidation. Gold contacts exhibit the best corrosion resistance, but are limited in current-carrying capacity and may “cold weld” if brought together with high mechanical force. Whatever the choice of metal, the switch contacts are guided by a mechanism ensuring square and even contact, for maximum reliability and minimum resistance. Contacts such as these can be constructed to handle extremely large amounts of electric current, up to thousands of amps in some cases. The limiting factors for switch contact ampacity are as follows: • Heat generated by current through metal contacts (while closed). • Sparking caused when contacts are opened or closed. • The voltage across open switch contacts (potential of current jumping across the gap). One major disadvantage of standard switch contacts is the exposure of the contacts to the surrounding atmosphere. In a nice, clean, control-room environment, this is generally not a problem. However, most industrial environments are not this benign. The presence of corrosive chemicals in the air can cause contacts to deteriorate and fail prematurely. Even more troublesome is the possibility of regular contact sparking causing flammable or explosive chemicals to ignite. When such environmental concerns exist, other types of contacts can be considered for small switches. These other types of contacts are sealed from contact with the outside air, and therefore do not suffer the same exposure problems that standard contacts do. A common type of sealed-contact switch is the mercury switch. Mercury is a metallic element, liquid at room temperature. Being a metal, it possesses excellent conductive properties. Being a liquid, it can be brought into contact with metal probes (to close a circuit) inside of a sealed chamber simply by tilting the chamber so that the probes are on the bottom. Many industrial switches use small glass tubes containing mercury which are tilted one way to close the contact, and tilted another way to open. Aside from the problems of tube breakage and spilling mercury (which is a toxic material), and susceptibility to vibration, these devices are an excellent alternative to open-air switch contacts wherever environmental exposure problems are a concern. Here, a mercury switch (often called a tilt switch) is shown in the open position, where the mercury is out of contact with the two metal contacts at the other end of the glass bulb: Here, the same switch is shown in the closed position. Gravity now holds the liquid mercury in contact with the two metal contacts, providing electrical continuity from one to the other: Mercury switch contacts are impractical to build in large sizes, and so you will typically find such contacts rated at no more than a few amps, and no more than 120 volts. 
There are exceptions, of course, but these are common limits. Another sealed-contact type of switch is the magnetic reed switch. Like the mercury switch, a reed switch's contacts are located inside a sealed tube. Unlike the mercury switch which uses liquid metal as the contact medium, the reed switch is simply a pair of very thin, magnetic, metal strips (hence the name "reed") which are brought into contact with each other by applying a strong magnetic field outside the sealed tube. The source of the magnetic field in this type of switch is usually a permanent magnet, moved closer to or further away from the tube by the actuating mechanism. Due to the small size of the reeds, this type of contact is typically rated at lower currents and voltages than the average mercury switch. However, reed switches typically handle vibration better than mercury contacts, because there is no liquid inside the tube to splash around. It is common to find general-purpose switch contact voltage and current ratings to be greater on any given switch or relay if the electric power being switched is AC instead of DC. The reason for this is the self-extinguishing tendency of an alternating-current arc across an air gap. Because 60 Hz power line current actually stops and reverses direction 120 times per second, there are many opportunities for the ionized air of an arc to lose enough temperature to stop conducting current, to the point where the arc will not re-start on the next voltage peak. DC, on the other hand, is a continuous, uninterrupted flow of electrons which tends to maintain an arc across an air gap much better. Therefore, switch contacts of any kind incur more wear when switching a given value of direct current than for the same value of alternating current. The problem of switching DC is exacerbated when the load has a significant amount of inductance, as there will be very high voltages generated across the switch's contacts when the circuit is opened (the inductor doing its best to maintain circuit current at the same magnitude as when the switch was closed). With both AC and DC, contact arcing can be minimized with the addition of a "snubber" circuit (a capacitor and resistor wired in series) in parallel with the contact, like this: A sudden rise in voltage across the switch contact caused by the contact opening will be tempered by the capacitor's charging action (the capacitor opposing the increase in voltage by drawing current). The resistor limits the amount of current that the capacitor will discharge through the contact when it closes again. If the resistor were not there, the capacitor might actually make the arcing during contact closure worse than the arcing during contact opening without a capacitor! While this addition to the circuit helps mitigate contact arcing, it is not without disadvantage: a prime consideration is the possibility of a failed (shorted) capacitor/resistor combination providing a path for electrons to flow through the circuit at all times, even when the contact is open and current is not desired. The risk of this failure and the severity of the resulting consequences must be weighed against the increased contact wear (and inevitable contact failure) without the snubber circuit. The use of snubbers in DC switch circuits is nothing new: automobile manufacturers have been doing this for years on engine ignition systems, minimizing the arcing across the switch contact "points" in the distributor with a small capacitor called a condenser.
As any mechanic can tell you, the service life of the distributor's "points" is directly related to how well the condenser is functioning. With all this discussion concerning the reduction of switch contact arcing, one might be led to think that less current is always better for a mechanical switch. This, however, is not necessarily so. It has been found that a small amount of periodic arcing can actually be good for the switch contacts, because it keeps the contact faces free from small amounts of dirt and corrosion. If a mechanical switch contact is operated with too little current, the contacts will tend to accumulate excessive resistance and may fail prematurely! This minimum amount of electric current necessary to keep a mechanical switch contact in good health is called the wetting current. Normally, a switch's wetting current rating is far below its maximum current rating, and well below its normal operating current load in a properly designed system. However, there are applications where a mechanical switch contact may be required to routinely handle currents below normal wetting current limits (for instance, if a mechanical selector switch needs to open or close a digital logic or analog electronic circuit where the current value is extremely small). In these applications, it is highly recommended that gold-plated switch contacts be specified. Gold is a "noble" metal and does not corrode as other metals will. Such contacts have extremely low wetting current requirements as a result. Normal silver or copper alloy contacts will not provide reliable operation if used in such low-current service! Review • The parts of a switch responsible for making and breaking electrical continuity are called the "contacts." Usually made of corrosion-resistant metal alloy, contacts are made to touch each other by a mechanism which helps maintain proper alignment and spacing. • Mercury switches use a slug of liquid mercury metal as a moving contact. Sealed in a glass tube, the mercury contact's spark is sealed from the outside environment, making this type of switch ideally suited for atmospheres potentially harboring explosive vapors. • Reed switches are another type of sealed-contact device, contact being made by two thin metal "reeds" inside a glass tube, brought together by the influence of an external magnetic field. • Switch contacts suffer greater duress switching DC than AC. This is primarily due to the self-extinguishing nature of an AC arc. • A resistor-capacitor network called a "snubber" can be connected in parallel with a switch contact to reduce contact arcing. • Wetting current is the minimum amount of electric current necessary for a switch contact to carry in order for it to be self-cleaning. Normally this value is far below the switch's maximum current rating.
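To see why an opening contact in a DC inductive circuit arcs so fiercely, apply the inductor relationship v = L(di/dt) to some assumed values. Everything below is illustrative, not a design calculation:

# Why opening an inductive DC circuit arcs: v = L * di/dt.
# All values below are illustrative assumptions.
L = 0.5       # load inductance, henrys (e.g., a large solenoid coil)
I = 2.0       # steady-state current before the contact opens, amps
dt = 1e-3     # time over which the contact interrupts the current, s

v_peak = L * I / dt   # approximate induced voltage across the gap
print(f"approximate induced voltage: {v_peak:.0f} V")  # 1000 V

A thousand volts across a freshly opened air gap of a fraction of a millimeter will arc readily. The snubber capacitor works by slowing the voltage rise, giving the gap time to widen before the voltage climbs that high.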
4.03: Contact "Normal" State and Make/Break
Any kind of switch contact can be designed so that the contacts "close" (establish continuity) when actuated, or "open" (interrupt continuity) when actuated. For switches that have a spring-return mechanism in them, the position to which the spring returns the mechanism with no applied force is called the normal position. Therefore, contacts that are open in this position are called normally open and contacts that are closed in this position are called normally closed. For process switches, the normal position, or state, is that which the switch is in when there is no process influence on it. An easy way to figure out the normal condition of a process switch is to consider the state of the switch as it sits on a storage shelf, uninstalled. Here are some examples of "normal" process switch conditions: • Speed switch: Shaft not turning • Pressure switch: Zero applied pressure • Temperature switch: Ambient (room) temperature • Level switch: Empty tank or bin • Flow switch: Zero liquid flow It is important to differentiate between a switch's "normal" condition and its "normal" use in an operating process. Consider the example of a liquid flow switch that serves as a low-flow alarm in a cooling water system. The normal, or properly-operating, condition of the cooling water system is to have fairly constant coolant flow going through this pipe. If we want the flow switch's contact to close in the event of a loss of coolant flow (to complete an electric circuit which activates an alarm siren, for example), we would want to use a flow switch with normally-closed rather than normally-open contacts. When there's adequate flow through the pipe, the switch's contacts are forced open; when the flow rate drops to an abnormally low level, the contacts return to their normal (closed) state. This is confusing if you think of "normal" as being the regular state of the process, so be sure to always think of a switch's "normal" state as that which it's in as it sits on a shelf. The schematic symbology for switches vary according to the switch's purpose and actuation. A normally-open switch contact is drawn in such a way as to signify an open connection, ready to close when actuated. Conversely, a normally-closed switch is drawn as a closed connection which will be opened when actuated. Note the following symbols: There is also a generic symbology for any switch contact, using a pair of vertical lines to represent the contact points in a switch. Normally-open contacts are designated by the lines not touching, while normally-closed contacts are designated with a diagonal line bridging between the two lines. Compare the two: The switch on the left will close when actuated, and will be open while in the "normal" (unactuated) position. The switch on the right will open when actuated, and is closed in the "normal" (unactuated) position. If switches are designated with these generic symbols, the type of switch usually will be noted in text immediately beside the symbol. Please note that the symbol on the left is not to be confused with that of a capacitor. If a capacitor needs to be represented in a control logic schematic, it will be shown like this: In standard electronic symbology, the figure shown above is reserved for polarity-sensitive capacitors.
In control logic symbology, this capacitor symbol is used for any type of capacitor, even when the capacitor is not polarity sensitive, so as to clearly distinguish it from a normally-open switch contact. With multiple-position selector switches, another design factor must be considered: that is, the sequence of breaking old connections and making new connections as the switch is moved from position to position, the moving contact touching several stationary contacts in sequence. The selector switch shown above switches a common contact lever to one of five different positions, to contact wires numbered 1 through 5. The most common configuration of a multi-position switch like this is one where the contact with one position is broken before the contact with the next position is made. This configuration is called break-before-make. To give an example, if the switch were set at position number 3 and slowly turned clockwise, the contact lever would move off of the number 3 position, opening that circuit, move to a position between number 3 and number 4 (both circuit paths open), and then touch position number 4, closing that circuit. There are applications where it is unacceptable to completely open the circuit attached to the "common" wire at any point in time. For such an application, a make-before-break switch design can be built, in which the movable contact lever actually bridges between two positions of contact (between number 3 and number 4, in the above scenario) as it travels between positions. The compromise here is that the circuit must be able to tolerate switch closures between adjacent position contacts (1 and 2, 2 and 3, 3 and 4, 4 and 5) as the selector knob is turned from position to position. Such a switch is shown here: When movable contact(s) can be brought into one of several positions with stationary contacts, those positions are sometimes called throws. The number of movable contacts is sometimes called poles. Both selector switches shown above with one moving contact and five stationary contacts would be designated as "single-pole, five-throw" switches. If two identical single-pole, five-throw switches were mechanically ganged together so that they were actuated by the same mechanism, the whole assembly would be called a "double-pole, five-throw" switch: Here are a few common switch configurations and their abbreviated designations: Review • The normal state of a switch is that where it is unactuated. For process switches, this is the condition it's in when sitting on a shelf, uninstalled. • A switch that is open when unactuated is called normally-open. A switch that is closed when unactuated is called normally-closed. Sometimes the terms "normally-open" and "normally-closed" are abbreviated N.O. and N.C., respectively. • The generic symbology for N.O. and N.C. switch contacts is as follows: • Multiposition switches can be either break-before-make (most common) or make-before-break. • The "poles" of a switch refers to the number of moving contacts, while the "throws" of a switch refers to the number of stationary contacts per moving contact.
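The break-before-make and make-before-break sequences described above can be listed explicitly. This Python sketch steps a single moving contact across five stationary positions and prints which positions are connected at each instant of travel; it is a conceptual illustration only:

# Contact sequence for a five-position selector switch, showing
# which stationary contacts are connected at each step of travel.
def sequence(make_before_break):
    steps = []
    for pos in range(1, 5):               # travel from position 1 to 5
        steps.append({pos})               # resting on one contact
        if make_before_break:
            steps.append({pos, pos + 1})  # bridging two adjacent contacts
        else:
            steps.append(set())           # fully open between contacts
    steps.append({5})
    return steps

print("break-before-make:", sequence(False))
print("make-before-break:", sequence(True))

The break-before-make listing shows the momentary all-open condition between positions, while the make-before-break listing shows the momentary bridging of adjacent contacts that the driven circuit must tolerate.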
When a switch is actuated and contacts touch one another under the force of actuation, they are supposed to establish continuity in a single, crisp moment. Unfortunately, though, switches do not exactly achieve this goal. Due to the mass of the moving contact and any elasticity inherent in the mechanism and/or contact materials, contacts will “bounce” upon closure for a period of milliseconds before coming to a full rest and providing unbroken contact. In many applications, switch bounce is of no consequence: it matters little if a switch controlling an incandescent lamp “bounces” for a few cycles every time it is actuated. Since the lamp’s warm-up time greatly exceeds the bounce period, no irregularity in lamp operation will result. However, if the switch is used to send a signal to an electronic amplifier or some other circuit with a fast response time, contact bounce may produce very noticeable and undesired effects: A closer look at the oscilloscope display reveals a rather ugly set of makes and breaks when the switch is actuated a single time: If, for example, this switch is used to provide a “clock” signal to a digital counter circuit, so that each actuation of the pushbutton switch is supposed to increment the counter by a value of 1, what will happen instead is the counter will increment by several counts each time the switch is actuated. Since mechanical switches often interface with digital electronic circuits in modern systems, switch contact bounce is a frequent design consideration. Somehow, the “chattering” produced by bouncing contacts must be eliminated so that the receiving circuit sees a clean, crisp off/on transition: Switch contacts may be debounced several different ways. The most direct means is to address the problem at its source: the switch itself. Here are some suggestions for designing switch mechanisms for minimum bounce: • Reduce the kinetic energy of the moving contact. This will reduce the force of impact as it comes to rest on the stationary contact, thus minimizing bounce. • Use “buffer springs” on the stationary contact(s) so that they are free to recoil and gently absorb the force of impact from the moving contact. • Design the switch for “wiping” or “sliding” contact rather than direct impact. “Knife” switch designs use sliding contacts. • Dampen the switch mechanism’s movement using an air or oil “shock absorber” mechanism. • Use sets of contacts in parallel with each other, each slightly different in mass or contact gap, so that when one is rebounding off the stationary contact, at least one of the others will still be in firm contact. • “Wet” the contacts with liquid mercury in a sealed environment. After initial contact is made, the surface tension of the mercury will maintain circuit continuity even though the moving contact may bounce off the stationary contact several times. Each one of these suggestions sacrifices some aspect of switch performance for limited bounce, and so it is impractical to design all switches with limited contact bounce in mind. Alterations made to reduce the kinetic energy of the contact may result in a small open-contact gap or a slow-moving contact, which limits the amount of voltage the switch may handle and the amount of current it may interrupt. Sliding contacts, while non-bouncing, still produce “noise” (irregular current caused by irregular contact resistance when moving), and suffer from more mechanical wear than normal contacts. 
Multiple, parallel contacts give less bounce, but only at greater switch complexity and cost. Using mercury to "wet" the contacts is a very effective means of bounce mitigation, but it is unfortunately limited to switch contacts of low ampacity. Also, mercury-wetted contacts are usually limited in mounting position, as gravity may cause the contacts to "bridge" accidentally if oriented the wrong way. If re-designing the switch mechanism is not an option, mechanical switch contacts may be debounced externally, using other circuit components to condition the signal. A low-pass filter circuit attached to the output of the switch, for example, will reduce the voltage/current fluctuations generated by contact bounce: Switch contacts may be debounced electronically, using hysteretic transistor circuits (circuits that "latch" in either a high or a low state) with built-in time delays (called "one-shot" circuits), or two inputs controlled by a double-throw switch. These hysteretic circuits, called multivibrators, are discussed in detail in a later chapter.
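For the low-pass filter approach mentioned above, the filter's time constant must be long compared to the bounce period yet short enough not to slow legitimate switching unacceptably. Here is a rough sizing sketch, with an assumed 10-millisecond worst-case bounce; all component values are illustrative:

import math

# Sizing an RC debounce filter (illustrative values only).
bounce = 10e-3          # assumed worst-case bounce duration, seconds
R = 10e3                # filter resistance, ohms
C = 4.7e-6              # filter capacitance, farads
tau = R * C             # 47 ms time constant

print(f"tau = {tau * 1e3:.0f} ms (vs. {bounce * 1e3:.0f} ms of bounce)")

# Time for the filtered signal to cross a mid-supply (50%) threshold:
t_cross = tau * math.log(2)
print(f"clean transition delay ~= {t_cross * 1e3:.0f} ms")

The price of this approach is the added transition delay of roughly 30 milliseconds, which is one reason a hysteretic (Schmitt-trigger) input often follows such a filter to restore a crisp edge.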
An electric current through a conductor will produce a magnetic field at right angles to the direction of electron flow. If that conductor is wrapped into a coil shape, the magnetic field produced will be oriented along the length of the coil. The greater the current, the greater the strength of the magnetic field, all other factors being equal: Inductors react against changes in current because of the energy stored in this magnetic field. When we construct a transformer from two inductor coils around a common iron core, we use this field to transfer energy from one coil to the other. However, there are simpler and more direct uses for electromagnetic fields than the applications we’ve seen with inductors and transformers. The magnetic field produced by a coil of current-carrying wire can be used to exert a mechanical force on any magnetic object, just as we can use a permanent magnet to attract magnetic objects, except that this magnet (formed by the coil) can be turned on or off by switching the current on or off through the coil. If we place a magnetic object near such a coil for the purpose of making that object move when we energize the coil with electric current, we have what is called a solenoid. The movable magnetic object is called an armature, and most armatures can be moved with either direct current (DC) or alternating current (AC) energizing the coil. The polarity of the magnetic field is irrelevant for the purpose of attracting an iron armature. Solenoids can be used to electrically open door latches, open or shut valves, move robotic limbs, and even actuate electric switch mechanisms. However, if a solenoid is used to actuate a set of switch contacts, we have a device so useful it deserves its own name: the relay. Relays are extremely useful when we have a need to control a large amount of current and/or voltage with a small electrical signal. The relay coil which produces the magnetic field may only consume fractions of a watt of power, while the contacts closed or opened by that magnetic field may be able to conduct hundreds of times that amount of power to a load. In effect, a relay acts as a binary (on or off) amplifier. Just as with transistors, the relay’s ability to control one electrical signal with another finds application in the construction of logic functions. This topic will be covered in greater detail in another lesson. For now, the relay’s “amplifying” ability will be explored. In the above schematic, the relay’s coil is energized by the low-voltage (12 VDC) source, while the single-pole, single-throw (SPST) contact interrupts the high-voltage (480 VAC) circuit. It is quite likely that the current required to energize the relay coil will be hundreds of times less than the current rating of the contact. Typical relay coil currents are well below 1 amp, while typical contact ratings for industrial relays are at least 10 amps. One relay coil/armature assembly may be used to actuate more than one set of contacts. Those contacts may be normally-open, normally-closed, or any combination of the two. As with switches, the “normal” state of a relay’s contacts is that state when the coil is de-energized, just as you would find the relay sitting on a shelf, not connected to any circuit. Relay contacts may be open-air pads of metal alloy, mercury tubes, or even magnetic reeds, just as with other types of switches. The choice of contacts in a relay depends on the same factors which dictate contact choice in other types of switches. 
Open-air contacts are the best for high-current applications, but their tendency to corrode and spark may cause problems in some industrial environments. Mercury and reed contacts are sparkless and won’t corrode, but they tend to be limited in current-carrying capacity. Shown here are three small relays (about two inches in height, each), installed on a panel as part of an electrical control system at a municipal water treatment plant: The relay units shown here are called “octal-base,” because they plug into matching sockets, the electrical connections secured via eight metal pins on the relay bottom. The screw terminal connections you see in the photograph where wires connect to the relays are actually part of the socket assembly, into which each relay is plugged. This type of construction facilitates easy removal and replacement of the relay(s) in the event of failure. Aside from the ability to allow a relatively small electrical signal to switch a relatively large electrical signal, relays also offer electrical isolation between coil and contact circuits. This means that the coil circuit and contact circuit(s) are electrically insulated from one another. One circuit may be DC and the other AC (such as in the example circuit shown earlier), and/or they may be at completely different voltage levels, across the connections or from connections to ground. While relays are essentially binary devices, either being completely on or completely off, there are operating conditions where their state may be indeterminate, just as with semiconductor logic gates. In order for a relay to positively “pull in” the armature to actuate the contact(s), there must be a certain minimum amount of current through the coil. This minimum amount is called the pull-in current, and it is analogous to the minimum input voltage that a logic gate requires to guarantee a “high” state (typically 2 Volts for TTL, 3.5 Volts for CMOS). Once the armature is pulled closer to the coil’s center, however, it takes less magnetic field flux (less coil current) to hold it there. Therefore, the coil current must drop below a value significantly lower than the pull-in current before the armature “drops out” to its spring-loaded position and the contacts resume their normal state. This current level is called the drop-out current, and it is analogous to the maximum input voltage that a logic gate input will allow while still guaranteeing a “low” state (typically 0.8 Volts for TTL, 1.5 Volts for CMOS). The hysteresis, or difference between pull-in and drop-out currents, results in operation that is similar to a Schmitt trigger logic gate. Pull-in and drop-out currents (and voltages) vary widely from relay to relay, and are specified by the manufacturer. Review • A solenoid is a device that produces mechanical motion from the energization of an electromagnet coil. The movable portion of a solenoid is called an armature. • A relay is a solenoid set up to actuate switch contacts when its coil is energized. • Pull-in current is the minimum amount of coil current needed to actuate a solenoid or relay from its “normal” (de-energized) position. • Drop-out current is the maximum coil current below which an energized relay will return to its “normal” state.
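If we wanted to model this pull-in/drop-out hysteresis in software, a few lines of Python suffice. The threshold values below are invented for illustration; real figures come from the relay’s datasheet:

```python
class Relay:
    """Toy model of relay armature hysteresis (pull-in vs. drop-out).
    Threshold values are illustrative only; real relays specify them
    on the datasheet, with drop-out significantly below pull-in."""
    def __init__(self, pull_in=0.050, drop_out=0.015):
        self.pull_in = pull_in     # amps of coil current needed to actuate
        self.drop_out = drop_out   # amps below which the armature releases
        self.energized = False     # armature at spring-loaded "normal" position

    def update(self, coil_current):
        if not self.energized and coil_current >= self.pull_in:
            self.energized = True    # armature pulls in
        elif self.energized and coil_current < self.drop_out:
            self.energized = False   # armature drops out
        return self.energized

relay = Relay()
for i_coil in [0.00, 0.03, 0.06, 0.03, 0.02, 0.01]:
    state = "PULLED IN" if relay.update(i_coil) else "dropped out"
    print(f"coil current = {i_coil*1000:4.0f} mA -> {state}")
```

Notice that a 30 mA coil current yields a different result depending on the relay’s prior state, which is exactly the Schmitt-trigger-like behavior described above.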
All About Contactors When a relay is used to switch a large amount of electrical power through its contacts, it is designated by a special name: contactor. Contactors typically have multiple contacts, and those contacts are usually (but not always) normally-open, so that power to the load is shut off when the coil is de-energized. Perhaps the most common industrial use for contactors is the control of electric motors. The top three contacts switch the respective phases of the incoming 3-phase AC power, typically at least 480 Volts for motors 1 horsepower or greater. The lowest contact is an “auxiliary” contact which has a current rating much lower than that of the large motor power contacts, but is actuated by the same armature as the power contacts. The auxiliary contact is often used in a relay logic circuit, or for some other part of the motor control scheme, typically switching 120 Volt AC power instead of the motor voltage. One contactor may have several auxiliary contacts, either normally-open or normally-closed if required. The three “opposed-question-mark” shaped devices in series with each phase going to the motor are called overload heaters. Each “heater” element is a low-resistance strip of metal intended to heat up as the motor draws current. If the temperature of any of these heater elements reaches a critical point (equivalent to a moderate overloading of the motor), a normally-closed switch contact (not shown in the diagram) will spring open. This normally-closed contact is usually connected in series with the relay coil, so that when it opens the relay will automatically de-energize, thereby shutting off power to the motor. We will see more of this overload protection wiring in the next chapter. Overload heaters are intended to provide overcurrent protection for large electric motors, unlike circuit breakers and fuses which serve the primary purpose of providing overcurrent protection for power conductors. Overload heater function is often misunderstood. They are not fuses; that is, it is not their function to burn open and directly break the circuit as a fuse is designed to do. Rather, overload heaters are designed to thermally mimic the heating characteristic of the particular electric motor to be protected. All motors have thermal characteristics, including the amount of heat energy generated by resistive dissipation (I2R), the thermal transfer characteristics of heat “conducted” to the cooling medium through the metal frame of the motor, the physical mass and specific heat of the materials constituting the motor, etc. These characteristics are mimicked by the overload heater on a miniature scale: when the motor heats up toward its critical temperature, so will the heater toward its critical temperature, ideally at the same rate and approach curve. Thus, the overload contact, in sensing heater temperature with a thermomechanical mechanism, will sense an analog of the real motor. If the overload contact trips due to excessive heater temperature, it will be an indication that the real motor has reached its critical temperature (or, would have done so in a short while). After tripping, the heaters are supposed to cool down at the same rate and approach curve as the real motor, so that they indicate an accurate proportion of the motor’s thermal condition, and will not allow power to be re-applied until the motor is truly ready for start-up again. 
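The “thermal mimic” concept lends itself to a simple first-order model. The following sketch is illustrative only: the time constant, steady-state temperature rise, and trip threshold are invented numbers, not data for any real heater or motor:

```python
# A minimal sketch of the "thermal mimic" idea behind overload heaters:
# first-order heating where I^2*R input competes with cooling to the
# surroundings. All constants (100 C rise at rated current, 600 s time
# constant, 125 C trip threshold) are invented for illustration; real
# heaters are selected from manufacturer tables to match a specific motor.
import math

def heater_temperature_rise(current, rated_current, time_s, tau_s=600.0):
    """Equilibrium rise scales with (I/Irated)^2; approach is exponential."""
    steady_rise = 100.0 * (current / rated_current) ** 2   # deg C at equilibrium
    return steady_rise * (1.0 - math.exp(-time_s / tau_s))

TRIP_RISE = 125.0  # deg C rise at which the NC overload contact springs open

for minutes in (5, 15, 30, 60):
    rise = heater_temperature_rise(current=12.0, rated_current=10.0,
                                   time_s=minutes * 60)
    tripped = "TRIP" if rise >= TRIP_RISE else "ok"
    print(f"{minutes:3d} min at 120% load: rise = {rise:5.1f} C  [{tripped}]")
```

Note how the moderate (120%) overload is tolerated for many minutes before tripping, mimicking the slow heating of a real motor rather than the fast action of a fuse.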
Three-Phase Electric Motor Contactor Shown here is a contactor for a three-phase electric motor, installed on a panel as part of an electrical control system at a municipal water treatment plant: Three-phase, 480 volt AC power comes into the three normally-open contacts at the top of the contactor via screw terminals labeled “L1,” “L2,” and “L3” (The “L2” terminal is hidden behind a square-shaped “snubber” circuit connected across the contactor’s coil terminals). Power to the motor exits the overload heater assembly at the bottom of this device via screw terminals labeled “T1,” “T2,” and “T3.” The overload heater units themselves are black, square-shaped blocks with the label “W34,” indicating a particular thermal response for a certain horsepower and temperature rating of the electric motor. If an electric motor of differing power and/or temperature ratings were to be substituted for the one presently in service, the overload heater units would have to be replaced with units having a thermal response suitable for the new motor. The motor manufacturer can provide information on the appropriate heater units to use. A white push button located between the “T1” and “T2” line heaters serves as a way to manually reset the normally-closed switch contact back to its normal state after having been tripped by excessive heater temperature. Wire connections to the “overload” switch contact may be seen at the lower-right of the photograph, near a label reading “NC” (normally-closed). On this particular overload unit, a small “window” with the label “Tripped” indicates a tripped condition by means of a colored flag. In this photograph, there is no “tripped” condition, and the indicator appears clear. As a footnote, heater elements may be used as a crude current shunt resistor for determining whether or not a motor is drawing current when the contactor is closed. There may be times when you’re working on a motor control circuit, where the contactor is located far away from the motor itself. How do you know if the motor is consuming power when the contactor coil is energized and the armature has been pulled in? If the motor’s windings are burnt open, you could be sending voltage to the motor through the contactor contacts, but still have zero current, and thus no motion from the motor shaft. If a clamp-on ammeter isn’t available to measure line current, you can take your multimeter and measure millivoltage across each heater element: if the current is zero, the voltage across the heater will be zero (unless the heater element itself is open, in which case the voltage across it will be large); if there is current going to the motor through that phase of the contactor, you will read a definite millivoltage across that heater: This is an especially useful trick to use for troubleshooting 3-phase AC motors, to see if one phase winding is burnt open or disconnected, which will result in a rapidly destructive condition known as “single-phasing.” If one of the lines carrying power to the motor is open, it will not have any current through it (as indicated by a 0.00 mV reading across its heater), although the other two lines will (as indicated by small amounts of voltage dropped across the respective heaters). Review • A contactor is a large relay, usually used to switch current to an electric motor or another high-power load. • Large electric motors can be protected from overcurrent damage through the use of overload heaters and overload contacts.
If the series-connected heaters get too hot from excessive current, the normally-closed overload contact will open, de-energizing the contactor sending power to the motor.
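The millivolt-drop test described in this section boils down to a simple decision rule. The readings and the threshold for a “definite millivoltage” in this sketch are hypothetical:

```python
# Interpreting the millivolt-drop test: one reading per overload heater.
# The readings and 1 mV threshold are hypothetical; the idea is simply
# that ~0 mV across one heater while the other two show a drop suggests
# an open (single-phased) line.
def check_phases(mv_readings, threshold_mv=1.0):
    open_lines = [phase for phase, mv in mv_readings.items()
                  if abs(mv) < threshold_mv]
    if len(open_lines) == 0:
        return "All three lines carrying current."
    if len(open_lines) == len(mv_readings):
        return "No current on any line (contactor open or motor off?)."
    return f"Possible single-phasing: no current on {', '.join(open_lines)}"

readings = {"T1": 24.3, "T2": 0.0, "T3": 25.1}   # hypothetical mV drops
print(check_phases(readings))
```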
What are Time-Delay Relays? Some relays are constructed with a kind of “shock absorber” mechanism attached to the armature which prevents immediate, full motion when the coil is either energized or de-energized. This addition gives the relay the property of time-delay actuation. Time-delay relays can be constructed to delay armature motion on coil energization, de-energization, or both. Time-delay relay contacts must be specified not only as either normally-open or normally-closed, but also whether the delay operates in the direction of closing or in the direction of opening. The following is a description of the four basic types of time-delay relay contacts. Normally-Open, Timed-Closed Contact First, we have the normally-open, timed-closed (NOTC) contact. This type of contact is normally open when the coil is unpowered (de-energized). The contact is closed by the application of power to the relay coil, but only after the coil has been continuously powered for the specified amount of time. In other words, the direction of the contact’s motion (either to close or to open) is identical to a regular NO contact, but there is a delay in the closing direction. Because the delay occurs in the direction of coil energization, this type of contact is alternatively known as a normally-open, on-delay: The following is a timing diagram of this relay contact’s operation: Normally-Open, Timed-Open Contact Next, we have the normally-open, timed-open (NOTO) contact. Like the NOTC contact, this type of contact is normally open when the coil is unpowered (de-energized), and closed by the application of power to the relay coil. However, unlike the NOTC contact, the timing action occurs upon de-energization of the coil rather than upon energization. Because the delay occurs in the direction of coil de-energization, this type of contact is alternatively known as a normally-open, off-delay: The following is a timing diagram of this relay contact’s operation: Normally-Closed, Timed-Open Contact Next, we have the normally-closed, timed-open (NCTO) contact. This type of contact is normally closed when the coil is unpowered (de-energized). The contact is opened with the application of power to the relay coil, but only after the coil has been continuously powered for the specified amount of time. In other words, the direction of the contact’s motion (either to close or to open) is identical to a regular NC contact, but there is a delay in the opening direction. Because the delay occurs in the direction of coil energization, this type of contact is alternatively known as a normally-closed, on-delay: The following is a timing diagram of this relay contact’s operation: Normally-Closed, Timed-Closed Contact Finally, we have the normally-closed, timed-closed (NCTC) contact. Like the NCTO contact, this type of contact is normally closed when the coil is unpowered (de-energized), and opened by the application of power to the relay coil. However, unlike the NCTO contact, the timing action occurs upon de-energization of the coil rather than upon energization. Because the delay occurs in the direction of coil de-energization, this type of contact is alternatively known as a normally-closed, off-delay: The following is a timing diagram of this relay contact’s operation: Time-Delay Relay Uses in Industrial Control Logic Circuits Time-delay relays are very important for use in industrial control logic circuits.
Some examples of their use include: • Flashing light control (time on, time off): two time-delay relays are used in conjunction with one another to provide a constant-frequency on/off pulsing of contacts for sending intermittent power to a lamp. • Engine auto start control: Engines that are used to power emergency generators are often equipped with “autostart” controls that allow for automatic startup if the main electric power fails. To properly start a large engine, certain auxiliary devices must be started first and allowed some brief time to stabilize (fuel pumps, pre-lubrication oil pumps) before the engine’s starter motor is energized. Time-delay relays help sequence these events for proper start-up of the engine. • Furnace safety purge control: Before a combustion-type furnace can be safely lit, the air fan must be run for a specified amount of time to “purge” the furnace chamber of any potentially flammable or explosive vapors. A time-delay relay provides the furnace control logic with this necessary time element. • Motor soft-start delay control: Instead of starting large electric motors by switching full power from a dead stop condition, reduced voltage can be switched for a “softer” start and less inrush current. After a prescribed time delay (provided by a time-delay relay), full power is applied. • Conveyor belt sequence delay: when multiple conveyor belts are arranged to transport material, the conveyor belts must be started in reverse sequence (the last one first and the first one last) so that material doesn’t get piled on to a stopped or slow-moving conveyor. In order to get large belts up to full speed, some time may be needed (especially if soft-start motor controls are used). For this reason, there is usually a time-delay circuit arranged on each conveyor to give it adequate time to attain full belt speed before the next conveyor belt feeding it is started. Advanced Timer Features The older, mechanical time-delay relays used pneumatic dashpots or fluid-filled piston/cylinder arrangements to provide the “shock absorbing” needed to delay the motion of the armature. Newer designs of time-delay relays use electronic circuits with resistor-capacitor (RC) networks to generate a time delay, then energize a normal (instantaneous) electromechanical relay coil with the electronic circuit’s output. The electronic-timer relays are more versatile than the older, mechanical models, and less prone to failure. Many models provide advanced timer features such as “one-shot” (one measured output pulse for every transition of the input from de-energized to energized), “recycle” (repeated on/off output cycles for as long as the input connection is energized) and “watchdog” (changes state if the input signal does not repeatedly cycle on and off). “Watchdog” Timer Relays The “watchdog” timer is especially useful for monitoring of computer systems. If a computer is being used to control a critical process, it is usually recommended to have an automatic alarm to detect computer “lockup” (an abnormal halting of program execution due to any number of causes). An easy way to set up such a monitoring system is to have the computer regularly energize and de-energize the coil of a watchdog timer relay (similar to the output of the “recycle” timer). If the computer execution halts for any reason, the signal it outputs to the watchdog relay coil will stop cycling and freeze in one or the other state. A short time thereafter, the watchdog relay will “time out” and signal a problem. 
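To make the watchdog concept concrete, here is a minimal software analog of a watchdog timer relay. The two-second timeout is arbitrary, chosen only for this demonstration:

```python
import time

class WatchdogTimer:
    """Toy software analog of a 'watchdog' timer relay: it trips if the
    monitored signal stops cycling within the timeout period."""
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self):
        """Called by the healthy process each time it toggles its output."""
        self.last_kick = time.monotonic()

    def timed_out(self):
        return (time.monotonic() - self.last_kick) > self.timeout_s

wd = WatchdogTimer(timeout_s=2.0)
wd.kick()                       # computer toggles its output -> timer reset
time.sleep(2.5)                 # simulate a program "lockup": no more kicks
print("ALARM" if wd.timed_out() else "ok")   # -> ALARM
```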
Review • Time-delay relays are built in these four basic modes of contact operation: • 1: Normally-open, timed-closed. Abbreviated “NOTC”, these relays open immediately upon coil de-energization and close only if the coil is continuously energized for the time duration period. Also called normally-open, on-delay relays. • 2: Normally-open, timed-open. Abbreviated “NOTO”, these relays close immediately upon coil energization and open after the coil has been de-energized for the time duration period. Also called normally-open, off-delay relays. • 3: Normally-closed, timed-open. Abbreviated “NCTO”, these relays close immediately upon coil de-energization and open only if the coil is continuously energized for the time duration period. Also called normally-closed, on-delay relays. • 4: Normally-closed, timed-closed. Abbreviated “NCTC”, these relays open immediately upon coil energization and close after the coil has been de-energized for the time duration period. Also called normally-closed, off-delay relays. • One-shot timers provide a single contact pulse of specified duration for each coil energization (transition from coil off to coil on). • Recycle timers provide a repeating sequence of on-off contact pulses as long as the coil is maintained in an energized state. • Watchdog timers actuate their contacts only if the coil fails to be continuously sequenced on and off (energized and de-energized) at a minimum frequency.
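The four timed-contact behaviors summarized above can be captured in one small model, evaluated once per simulation time step. This is an illustrative sketch, not a simulation of any particular relay:

```python
class TimedContact:
    """Illustrative model of a time-delay contact. 'normally_closed'
    sets the de-energized state; 'on_delay' selects delay upon coil
    energization (NOTC/NCTO) versus upon de-energization (NOTO/NCTC)."""
    def __init__(self, normally_closed, on_delay, delay_steps):
        self.nc = normally_closed
        self.on_delay = on_delay
        self.delay_steps = delay_steps
        self.counter = 0
        self.actuated = False    # armature in the non-normal position?

    def step(self, coil_energized):
        # The timed direction accumulates a count; the other direction
        # responds immediately, as in the four cases described above.
        target = coil_energized
        if target != self.actuated:
            if self.on_delay == target:      # delay applies in this direction
                self.counter += 1
                if self.counter >= self.delay_steps:
                    self.actuated = target
                    self.counter = 0
            else:                            # immediate direction
                self.actuated = target
                self.counter = 0
        else:
            self.counter = 0
        # Contact is closed when NC and unactuated, or NO and actuated.
        return self.nc != self.actuated

# NOTC (normally-open, timed-closed): closes 3 steps after energization,
# opens immediately on de-energization.
notc = TimedContact(normally_closed=False, on_delay=True, delay_steps=3)
for t, coil in enumerate([1, 1, 1, 1, 0, 1]):
    print(t, "closed" if notc.step(bool(coil)) else "open")
```

Setting the `normally_closed` and `on_delay` flags appropriately reproduces NOTC, NOTO, NCTO, and NCTC behavior with the same code.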
A special type of relay is one which monitors the current, voltage, frequency, or any other type of electric power measurement either from a generating source or to a load for the purpose of triggering a circuit breaker to open in the event of an abnormal condition. These relays are referred to in the electrical power industry as protective relays. The circuit breakers which are used to switch large quantities of electric power on and off are actually electromechanical relays, themselves. Unlike the circuit breakers found in residential and commercial use which determine when to trip (open) by means of a bimetallic strip inside that bends when it gets too hot from overcurrent, large industrial circuit breakers must be “told” by an external device when to open. Such breakers have two electromagnetic coils inside: one to close the breaker contacts and one to open them. The “trip” coil can be energized by one or more protective relays, as well as by hand switches, connected to switch 125 Volt DC power. DC power is used because it allows for a battery bank to supply close/trip power to the breaker control circuits in the event of a complete (AC) power failure. Protective relays can monitor large AC currents by means of current transformers (CT’s), which encircle the current-carrying conductors exiting a large circuit breaker, transformer, generator, or other devices. Current transformers step down the monitored current to a secondary (output) range of 0 to 5 amps AC to power the protective relay. The current relay uses this 0-5 amp signal to power its internal mechanism, closing a contact to switch 125 Volt DC power to the breaker’s trip coil if the monitored current becomes excessive. Likewise, (protective) voltage relays can monitor high AC voltages by means of voltage, or potential, transformers (PT’s) which step down the monitored voltage to a secondary range of 0 to 120 Volts AC, typically. Like (protective) current relays, this voltage signal powers the internal mechanism of the relay, closing a contact to switch 125 Volt DC power to the breaker’s trip coil if the monitored voltage becomes excessive. There are many types of protective relays, some with highly specialized functions. Not all monitor voltage or current, either. They all, however, share the common feature of outputting a contact closure signal which can be used to switch power to a breaker trip coil, close coil, or operator alarm panel. Most protective relay functions have been categorized into an ANSI standard number code. Here are a few examples from that code list: ANSI protective relay designation numbers Review • Large electric circuit breakers do not contain within themselves the necessary mechanisms to automatically trip (open) in the event of overcurrent conditions. They must be “told” to trip by external devices. • Protective relays are devices built to automatically trigger the actuation coils of large electric circuit breakers under certain conditions.
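The current-transformer arithmetic behind an overcurrent relay is straightforward to illustrate. In this sketch, the 600:5 CT ratio and 400 amp pickup setting are assumed example values, not settings for any real installation:

```python
# Sketch of the CT arithmetic behind an overcurrent protective relay.
# The 600:5 CT ratio and 400 A pickup setting are assumed for example.
CT_PRIMARY, CT_SECONDARY = 600.0, 5.0    # CT steps 600 A down to 5 A

def relay_sees(line_current_amps):
    """Secondary current delivered to the relay's operating mechanism."""
    return line_current_amps * (CT_SECONDARY / CT_PRIMARY)

pickup_secondary = relay_sees(400.0)     # 400 A pickup -> 3.33 A at the relay

for line_amps in (250.0, 450.0):
    sec = relay_sees(line_amps)
    action = ("close contact -> 125 VDC to breaker trip coil"
              if sec >= pickup_secondary else "no action")
    print(f"line current {line_amps:5.0f} A -> relay sees {sec:4.2f} A: {action}")
```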
5.05: Solid-state Relays As versatile as electromechanical relays can be, they do suffer many limitations. They can be expensive to build, have a limited contact cycle life, take up a lot of room, and switch slowly, compared to modern semiconductor devices. These limitations are especially true for large power contactor relays. To address these limitations, many relay manufacturers offer “solid-state” relays, which use an SCR, TRIAC, or transistor output instead of mechanical contacts to switch the controlled power. The output device (SCR, TRIAC, or transistor) is optically-coupled to an LED light source inside the relay. The relay is turned on by energizing this LED, usually with low-voltage DC power. This optical isolation between input and output rivals the best that electromechanical relays can offer. Being solid-state devices, there are no moving parts to wear out, and they are able to switch on and off much faster than any mechanical relay armature can move. There is no sparking between contacts and no problems with contact corrosion. However, solid-state relays are still too expensive to build in very high current ratings, and so electromechanical contactors continue to dominate that application in the industry today. One significant advantage of a solid-state SCR or TRIAC relay over an electromechanical device is its natural tendency to open the AC circuit only at a point of zero load current. Because SCR’s and TRIAC’s are thyristors, their inherent hysteresis maintains circuit continuity after the LED is de-energized until the AC current falls below a threshold value (the holding current). In practical terms, what this means is that the circuit will never be interrupted in the middle of a sine wave peak. Such untimely interruptions in a circuit containing substantial inductance would normally produce large voltage spikes due to the sudden magnetic field collapse around the inductance. This will not happen in a circuit broken by an SCR or TRIAC. This feature is called zero-crossover switching. One disadvantage of solid-state relays is their tendency to fail “shorted” on their outputs, while electromechanical relay contacts tend to fail “open.” In either case, it is possible for a relay to fail in the other mode, but these are the most common failures. Because a “fail-open” state is generally considered safer than a “fail-closed” state, electromechanical relays are still favored over their solid-state counterparts in many applications.
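Zero-crossover switching can be illustrated numerically: once the LED is de-energized, conduction continues until the load current next falls below the holding current. The 60 Hz frequency, 10 amp peak, and 50 mA holding current below are arbitrary example values:

```python
import math

# Illustrative model of zero-crossover switching: a thyristor-output SSR
# keeps conducting after its control LED turns off, until the sinusoidal
# load current next falls below the holding current (near a zero crossing).
# 60 Hz, 10 A peak, and 50 mA holding current are arbitrary example values.
def ssr_turnoff_time(led_off_time, peak_amps=10.0, freq_hz=60.0,
                     holding_amps=0.05, dt=1e-6):
    t = led_off_time
    while abs(peak_amps * math.sin(2 * math.pi * freq_hz * t)) > holding_amps:
        t += dt
    return t

t_off = ssr_turnoff_time(led_off_time=0.002)  # LED dropped near a current peak
print(f"LED off at 2.000 ms; load current ceases at {t_off*1e3:.3f} ms")
# Conduction continues until the sine wave's next zero crossing (~8.33 ms),
# so the inductive load is never interrupted at a current peak.
```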
Ladder diagrams are specialized schematics commonly used to document industrial control logic systems. They are called “ladder” diagrams because they resemble a ladder, with two vertical rails (supply power) and as many “rungs” (horizontal lines) as there are control circuits to represent. If we wanted to draw a simple ladder diagram showing a lamp that is controlled by a hand switch, it would look like this: The “L1” and “L2” designations refer to the two poles of a 120 VAC supply unless otherwise noted. L1 is the “hot” conductor, and L2 is the grounded (“neutral”) conductor. These designations have nothing to do with inductors, just to make things confusing. The actual transformer or generator supplying power to this circuit is omitted for simplicity. In reality, the circuit looks something like this: Typically in industrial relay logic circuits, but not always, the operating voltage for the switch contacts and relay coils will be 120 volts AC. Lower voltage AC and even DC systems are sometimes built and documented according to “ladder” diagrams: So long as the switch contacts and relay coils are all adequately rated, it really doesn’t matter what level of voltage is chosen for the system to operate with. Note the number “1” on the wire between the switch and the lamp. In the real world, that wire would be labeled with that number, using heat-shrink or adhesive tags, wherever it was convenient to identify. Wires leading to the switch would be labeled “L1” and “1,” respectively. Wires leading to the lamp would be labeled “1” and “L2,” respectively. These wire numbers make assembly and maintenance very easy. Each conductor has its own unique wire number for the control system that it’s used in. Wire numbers do not change at any junction or node, even if wire size, color, or length changes going into or out of a connection point. Of course, it is preferable to maintain consistent wire colors, but this is not always practical. What matters is that any one, electrically continuous point in a control circuit possesses the same wire number. Take this circuit section, for example, with wire #25 as a single, electrically continuous point threading to many different devices: In ladder diagrams, the load device (lamp, relay coil, solenoid coil, etc.) is almost always drawn at the right-hand side of the rung. While it doesn’t matter electrically where the relay coil is located within the rung, it does matter which end of the ladder’s power supply is grounded, for reliable operation. Take for instance this circuit: Here, the lamp (load) is located on the right-hand side of the rung, and so is the ground connection for the power source. This is no accident or coincidence; rather, it is a purposeful element of good design practice. Suppose that wire #1 were to accidentally come in contact with ground, the insulation of that wire having been rubbed off so that the bare conductor came in contact with grounded, metal conduit. Our circuit would now function like this: With both sides of the lamp connected to ground, the lamp will be “shorted out” and unable to receive power to light up. If the switch were to close, there would be a short-circuit, immediately blowing the fuse. However, consider what would happen to the circuit with the same fault (wire #1 coming in contact with ground), except this time we’ll swap the positions of switch and lamp (L2 is still grounded): This time the accidental grounding of wire #1 will force power to the lamp while the switch will have no effect.
It is much safer to have a system that blows a fuse in the event of a ground fault than to have a system that uncontrollably energizes lamps, relays, or solenoids in the event of the same fault. For this reason, the load(s) must always be located nearest the grounded power conductor in the ladder diagram. Review • Ladder diagrams (sometimes called “ladder logic”) are a type of electrical notation and symbology frequently used to illustrate how electromechanical switches and relays are interconnected. • The two vertical lines are called “rails” and attach to opposite poles of a power supply, usually 120 volts AC. L1 designates the “hot” AC wire and L2 the “neutral” (grounded) conductor. • Horizontal lines in a ladder diagram are called “rungs,” each one representing a unique parallel circuit branch between the poles of the power supply. • Typically, wires in control systems are marked with numbers and/or letters for identification. The rule is, all permanently connected (electrically common) points must bear the same label.
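The wire-numbering rule in the review above (every electrically common point bears the same number) can even be checked mechanically. This sketch uses a made-up connection list and a simple union-find grouping; it is only an illustration of the rule, not a real design tool:

```python
# Group terminals into electrically continuous nodes, then verify that
# each node carries exactly one wire number. The connection list and
# labels are hypothetical.
from collections import defaultdict

connections = [                       # point-to-point wires
    ("switch.2", "lamp.1"),
    ("lamp.1", "relay_coil.1"),       # junction: three devices, one node
]
labels = {"switch.2": "1", "lamp.1": "1", "relay_coil.1": "1"}

parent = {}
def find(p):
    parent.setdefault(p, p)
    while parent[p] != p:             # walk up to the node's representative
        p = parent[p]
    return p

for a, b in connections:              # union all directly wired points
    parent[find(a)] = find(b)

nodes = defaultdict(set)
for point, label in labels.items():
    nodes[find(point)].add(label)

for node, labs in nodes.items():
    status = "ok" if len(labs) == 1 else f"CONFLICT: {sorted(labs)}"
    print(f"node containing {node}: wire numbers {sorted(labs)} -> {status}")
```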
We can construct simple logic functions for our hypothetical lamp circuit, using multiple contacts, and document these circuits quite easily and understandably with additional rungs to our original “ladder.” If we use standard binary notation for the status of the switches and lamp (0 for unactuated or de-energized; 1 for actuated or energized), a truth table can be made to show how the logic works: Now, the lamp will come on if either contact A or contact B is actuated, because all it takes for the lamp to be energized is to have at least one path for current from wire L1 to wire 1. What we have is a simple OR logic function, implemented with nothing more than contacts and a lamp. We can mimic the AND logic function by wiring the two contacts in series instead of parallel: Now, the lamp energizes only if contact A and contact B are simultaneously actuated. A path exists for current from wire L1 to the lamp (wire 2) if and only if both switch contacts are closed. The logical inversion, or NOT, function can be performed on a contact input simply by using a normally-closed contact instead of a normally-open contact: Now, the lamp energizes if the contact is not actuated, and de-energizes when the contact is actuated. If we take our OR function and invert each “input” through the use of normally-closed contacts, we will end up with a NAND function. In a special branch of mathematics known as Boolean algebra, this effect of gate function identity changing with the inversion of input signals is described by DeMorgan’s Theorem, a subject to be explored in more detail in a later chapter. The lamp will be energized if either contact is unactuated. It will go out only if both contacts are actuated simultaneously. Likewise, if we take our AND function and invert each “input” through the use of normally-closed contacts, we will end up with a NOR function: A pattern quickly reveals itself when ladder circuits are compared with their logic gate counterparts: • Parallel contacts are equivalent to an OR gate. • Series contacts are equivalent to an AND gate. • Normally-closed contacts are equivalent to a NOT gate (inverter). We can build combinational logic functions by grouping contacts in series-parallel arrangements, as well. In the following example, we have an Exclusive-OR function built from a combination of AND, OR, and inverter (NOT) gates: The top rung (NC contact A in series with NO contact B) is the equivalent of the top NOT/AND gate combination. The bottom rung (NO contact A in series with NC contact B) is the equivalent of the bottom NOT/AND gate combination. The parallel connection between the two rungs at wire number 2 forms the equivalent of the OR gate, in allowing either rung 1 or rung 2 to energize the lamp. To make the Exclusive-OR function, we had to use two contacts per input: one for direct input and the other for “inverted” input. The two “A” contacts are physically actuated by the same mechanism, as are the two “B” contacts. The common association between contacts is denoted by the label of the contact. There is no limit to how many contacts per switch can be represented in a ladder diagram, as each new contact on any switch or relay (either normally-open or normally-closed) used in the diagram is simply marked with the same label. Sometimes, multiple contacts on a single switch (or relay) are designated by compound labels, such as “A-1” and “A-2” instead of two “A” labels.
This may be especially useful if you want to specifically designate which set of contacts on each switch or relay is being used for which part of a circuit. For simplicity’s sake, I’ll refrain from such elaborate labeling in this lesson. If you see a common label for multiple contacts, you know those contacts are all actuated by the same mechanism. If we wish to invert the output of any switch-generated logic function, we must use a relay with a normally-closed contact. For instance, if we want to energize a load based on the inverse, or NOT, of a normally-open contact, we could do this: We will call the relay, “control relay 1,” or CR1. When the coil of CR1 (symbolized with the pair of parentheses on the first rung) is energized, the contact on the second rung opens, thus de-energizing the lamp. From switch A to the coil of CR1, the logic function is noninverted. The normally-closed contact actuated by relay coil CR1 provides a logical inverter function to drive the lamp opposite that of the switch’s actuation status. Applying this inversion strategy to one of our inverted-input functions created earlier, such as the OR-to-NAND, we can invert the output with a relay to create a noninverted function: From the switches to the coil of CR1, the logical function is that of a NAND gate. CR1‘s normally-closed contact provides one final inversion to turn the NAND function into an AND function. Review • Parallel contacts are logically equivalent to an OR gate. • Series contacts are logically equivalent to an AND gate. • Normally closed (N.C.) contacts are logically equivalent to a NOT gate. • A relay must be used to invert the output of a logic gate function, while simple normally-closed switch contacts are sufficient to represent inverted gate inputs.
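The contact-to-gate correspondences in the review translate directly into Boolean expressions. Here each function returns the lamp status for given switch states (True meaning actuated), mirroring the rung arrangements described above:

```python
# Series contacts = AND, parallel contacts = OR, a normally-closed
# contact = NOT. A and B are switch states; each function is the lamp.
def lamp_or(A, B):   return A or B                      # two parallel NO contacts
def lamp_and(A, B):  return A and B                     # two series NO contacts
def lamp_nand(A, B): return (not A) or (not B)          # parallel NC contacts
def lamp_nor(A, B):  return (not A) and (not B)         # series NC contacts
def lamp_xor(A, B):  return ((not A) and B) or (A and (not B))  # the two-rung circuit

print("A B | OR AND NAND NOR XOR")
for A in (False, True):
    for B in (False, True):
        row = [lamp_or(A, B), lamp_and(A, B), lamp_nand(A, B),
               lamp_nor(A, B), lamp_xor(A, B)]
        print(int(A), int(B), "|", *[int(v) for v in row])
```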
A practical application of switch and relay logic is in control systems where several process conditions have to be met before a piece of equipment is allowed to start. A good example of this is burner control for large combustion furnaces. In order for the burners in a large furnace to be started safely, the control system requests “permission” from several process switches, including high and low fuel pressure, air fan flow check, exhaust stack damper position, access door position, etc. Each process condition is called a permissive, and each permissive switch contact is wired in series, so that if any one of them detects an unsafe condition, the circuit will be opened: If all permissive conditions are met, CR1 will energize and the green lamp will be lit. In real life, more than just a green lamp would be energized: usually, a control relay or fuel valve solenoid would be placed in that rung of the circuit to be energized when all the permissive contacts were “good:” that is, all closed. If any one of the permissive conditions is not met, the series string of switch contacts will be broken, CR1 will de-energize, and the red lamp will light. Note that the high fuel pressure contact is normally-closed. This is because we want the switch contact to open if the fuel pressure gets too high. Since the “normal” condition of any pressure switch is when zero (low) pressure is being applied to it, and we want this switch to open with excessive (high) pressure, we must choose a switch that is closed in its normal state. Another practical application of relay logic is in control systems where we want to ensure two incompatible events cannot occur at the same time. An example of this is in reversible motor control, where two motor contactors are wired to switch polarity (or phase sequence) to an electric motor, and we don’t want the forward and reverse contactors energized simultaneously: When contactor M1 is energized, the 3 phases (A, B, and C) are connected directly to terminals 1, 2, and 3 of the motor, respectively. However, when contactor M2 is energized, phases A and B are reversed, A going to motor terminal 2 and B going to motor terminal 1. This reversal of phase wires results in the motor spinning in the opposite direction. Let’s examine the control circuit for these two contactors: Take note of the normally-closed “OL” contact, which is the thermal overload contact activated by the “heater” elements wired in series with each phase of the AC motor. If the heaters get too hot, the contact will change from its normal (closed) state to being open, which will prevent either contactor from energizing. This control system will work fine, so long as no one pushes both buttons at the same time. If someone were to do that, phases A and B would be short-circuited together by virtue of the fact that contactor M1 sends phases A and B straight to the motor and contactor M2 reverses them; phase A would be shorted to phase B and vice versa. Obviously, this is a bad control system design! To prevent this occurrence from happening, we can design the circuit so that the energization of one contactor prevents the energization of the other. This is called interlocking, and it is accomplished through the use of auxiliary contacts on each contactor, as such: Now, when M1 is energized, the normally-closed auxiliary contact on the second rung will be open, thus preventing M2 from being energized, even if the “Reverse” pushbutton is actuated. Likewise, M1’s energization is prevented when M2 is energized.
Note, as well, how additional wire numbers (4 and 5) were added to reflect the wiring changes. It should be noted that this is not the only way to interlock contactors to prevent a short-circuit condition. Some contactors come equipped with the option of a mechanical interlock: a lever joining the armatures of two contactors together so that they are physically prevented from simultaneous closure. For additional safety, electrical interlocks may still be used, and due to the simplicity of the circuit there is no good reason not to employ them in addition to mechanical interlocks. Review • Switch contacts installed in a rung of ladder logic designed to interrupt a circuit if certain physical conditions are not met are called permissive contacts, because the system requires permission from these inputs to activate. • Switch contacts designed to prevent a control system from taking two incompatible actions at once (such as powering an electric motor forward and backward simultaneously) are called interlocks.
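Both ideas in this section reduce to simple Boolean conditions when evaluated one “scan” at a time. The permissive names follow the furnace example; evaluating M1’s rung before M2’s approximates the electrical race between the two contactors:

```python
# Permissive and interlock logic as plain Boolean evaluation.
def burner_permitted(low_fuel_ok, high_fuel_ok, air_flow_ok,
                     damper_ok, door_closed):
    # Permissive contacts are wired in series: every one must be "good"
    # for CR1 to energize (green lamp); any failure lights the red lamp.
    return all([low_fuel_ok, high_fuel_ok, air_flow_ok,
                damper_ok, door_closed])

def motor_contactors(fwd_button, rev_button, m1_was_on, m2_was_on, ol_ok):
    # Each coil circuit includes the NC overload contact and the other
    # contactor's NC auxiliary (interlock) contact.
    m1 = fwd_button and ol_ok and not m2_was_on
    m2 = rev_button and ol_ok and not m1
    return m1, m2

print(burner_permitted(True, True, True, True, False))    # open door -> False
print(motor_contactors(True, True, False, False, True))   # both buttons pressed
# -> (True, False): M1 pulls in first and its interlock locks out M2.
```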
The interlock contacts installed in the previous section’s motor control circuit work fine, but the motor will run only as long as each push button switch is held down. If we wanted to keep the motor running even after the operator takes his or her hand off the control switch(es), we could change the circuit in a couple of different ways: we could replace the push button switches with toggle switches, or we could add some more relay logic to “latch” the control circuit with a single, momentary actuation of either switch. Let’s see how the second approach is implemented since it is commonly used in industry: When the “Forward” pushbutton is actuated, M1 will energize, closing the normally-open auxiliary contact in parallel with that switch. When the pushbutton is released, the closed M1 auxiliary contact will maintain current to the coil of M1, thus latching the “Forward” circuit in the “on” state. The same sort of thing will happen when the “Reverse” pushbutton is pressed. These parallel auxiliary contacts are sometimes referred to as seal-in contacts, the word “seal” meaning essentially the same thing as the word latch. However, this creates a new problem: how to stop the motor! As the circuit exists right now, the motor will run either forward or backward once the corresponding pushbutton switch is pressed and will continue to run as long as there is power. To stop either circuit (forward or backward), we require some means for the operator to interrupt power to the motor contactors. We’ll call this new switch, Stop: Now, if either forward or reverse circuits are latched, they may be “unlatched” by momentarily pressing the “Stop” pushbutton, which will open either forward or reverse circuit, de-energizing the energized contactor, and returning the seal-in contact to its normal (open) state. The “Stop” switch, having normally-closed contacts, will conduct power to either forward or reverse circuits when released. So far, so good. Let’s consider another practical aspect of our motor control scheme before we quit adding to it. If our hypothetical motor turned a mechanical load with a lot of momentum, such as a large air fan, the motor might continue to coast for a substantial amount of time after the stop button had been pressed. This could be problematic if an operator were to try to reverse the motor direction without waiting for the fan to stop turning. If the fan was still coasting forward and the “Reverse” pushbutton was pressed, the motor would struggle to overcome that inertia of the large fan as it tried to begin turning in reverse, drawing excessive current and potentially reducing the life of the motor, drive mechanisms, and fan. What we might like to have is some kind of a time-delay function in this motor control system to prevent such a premature startup from happening. Let’s begin by adding a couple of time-delay relay coils, one in parallel with each motor contactor coil. If we use contacts that delay returning to their normal state, these relays will provide us a “memory” of which direction the motor was last powered to turn. What we want each time-delay contact to do is to open the starting-switch leg of the opposite rotation circuit for several seconds, while the fan coasts to a halt. If the motor has been running in the forward direction, both M1 and TD1 will have been energized. This being the case, the normally-closed, timed-closed contact of TD1 between wires 8 and 5 will have immediately opened the moment TD1 was energized. 
When the stop button is pressed, contact TD1 waits for the specified amount of time before returning to its normally-closed state, thus holding the reverse pushbutton circuit open for the duration so M2 can’t be energized. When TD1 times out, the contact will close and the circuit will allow M2 to be energized if the reverse pushbutton is pressed. In like manner, TD2 will prevent the “Forward” pushbutton from energizing M1 until the prescribed time delay after M2 (and TD2) have been de-energized. The careful observer will notice that the time-interlocking functions of TD1 and TD2 render the M1 and M2 interlocking contacts redundant. We can get rid of auxiliary contacts M1 and M2 for interlocks and just use TD1 and TD2’s contacts, since they immediately open when their respective relay coils are energized, thus “locking out” one contactor if the other is energized. Each time-delay relay will serve a dual purpose: preventing the other contactor from energizing while the motor is running and preventing the same contactor from energizing until a prescribed time after motor shutdown. The resulting circuit has the advantage of being simpler than the previous example: Review • Motor contactor (or “starter”) coils are typically designated by the letter “M” in ladder logic diagrams. • Continuous motor operation with a momentary “start” switch is possible if a normally-open “seal-in” contact from the contactor is connected in parallel with the start switch so that once the contactor is energized it maintains power to itself and keeps itself “latched” on. • Time-delay relays are commonly used in large motor control circuits to prevent the motor from being started (or reversed) until a certain amount of time has elapsed from an event.
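The anti-plugging behavior described above is easy to verify with a scan-by-scan sketch. The 3-scan delay stands in for the “several seconds” of fan coast-down, and contactor dynamics are ignored:

```python
# After running forward, TD1's normally-closed, timed-closed contact
# holds the reverse circuit open until the off-delay expires. The
# 3-scan delay is an arbitrary stand-in for the coast-down time.
DELAY_SCANS = 3

def reverse_rung(reverse_button, scans_since_stop):
    # TD1 contact: held open while M1 runs and for DELAY_SCANS after
    # shutdown, then recloses to permit M2.
    td1_contact_closed = scans_since_stop >= DELAY_SCANS
    return reverse_button and td1_contact_closed

for scans_since_stop in range(5):      # operator holds "Reverse" down
    m2 = reverse_rung(True, scans_since_stop)
    print(f"{scans_since_stop} scans after stop: M2 "
          f"{'energized' if m2 else 'locked out'}")
```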
Logic circuits, whether comprised of electromechanical relays or solid-state gates, can be built in many different ways to perform the same functions. There is usually no one “correct” way to design a complex logic circuit, but there are usually ways that are better than others. In control systems, safety is (or at least should be) an important design priority. If there are multiple ways in which a digital control circuit can be designed to perform a task, and one of those ways happens to hold certain advantages in safety over the others, then that design is the better one to choose. Let’s take a look at a simple system and consider how it might be implemented in relay logic. Suppose that a large laboratory or industrial building is to be equipped with a fire alarm system, activated by any one of several latching switches installed throughout the facility. The system should work so that the alarm siren will energize if any one of the switches is actuated. At first glance, it seems as though the relay logic should be incredibly simple: just use normally-open switch contacts and connect them all in parallel with each other: Essentially, this is the OR logic function implemented with four switch inputs. We could expand this circuit to include any number of switch inputs, each new switch being added to the parallel network, but I’ll limit it to four in this example to keep things simple. At any rate, it is an elementary system and there seems to be little possibility of trouble. Except in the event of a wiring failure, that is. The nature of electric circuits is such that “open” failures (open switch contacts, broken wire connections, open relay coils, blown fuses, etc.) are statistically more likely to occur than any other type of failure. With that in mind, it makes sense to engineer a circuit to be as tolerant as possible to such a failure. Let’s suppose that a wire connection for Switch #2 were to fail open: If this failure were to occur, the result would be that Switch #2 would no longer energize the siren if actuated. This, obviously, is not good in a fire alarm system. Unless the system were regularly tested (a good idea anyway), no one would know there was a problem until someone tried to use that switch in an emergency. What if the system were re-engineered so as to sound the alarm in the event of an open failure? That way, a failure in the wiring would result in a false alarm, a scenario much more preferable than that of having a switch silently fail and not function when needed. In order to achieve this design goal, we would have to re-wire the switches so that an open contact sounded the alarm, rather than a closed contact. That being the case, the switches will have to be normally-closed and in series with each other, powering a relay coil which then activates a normally-closed contact for the siren: When all switches are unactuated (the regular operating state of this system), relay CR1 will be energized, thus keeping contact CR1 open, preventing the siren from being powered. However, if any of the switches are actuated, relay CR1 will de-energize, closing contact CR1 and sounding the alarm. Also, if there is a break in the wiring anywhere in the top rung of the circuit, the alarm will sound. When it is discovered that the alarm is false, the workers in the facility will know that something failed in the alarm system and that it needs to be repaired. 
Granted, the circuit is more complex than it was before the addition of the control relay, and the system could still fail in the “silent” mode with a broken connection in the bottom rung, but it’s still a safer design than the original circuit, and thus preferable from the standpoint of safety. This design of circuit is referred to as fail-safe, due to its intended design to default to the safest mode in the event of a common failure such as a broken connection in the switch wiring. Fail-safe design always starts with an assumption as to the most likely kind of wiring or component failure and then tries to configure things so that such a failure will cause the circuit to act in the safest way, the “safest way” being determined by the physical characteristics of the process. Take for example an electrically-actuated (solenoid) valve for turning on cooling water to a machine. Energizing the solenoid coil will move an armature which then either opens or closes the valve mechanism, depending on what kind of valve we specify. A spring will return the valve to its “normal” position when the solenoid is de-energized. We already know that an open failure in the wiring or solenoid coil is more likely than a short or any other type of failure, so we should design this system to be in its safest mode with the solenoid de-energized. If it’s cooling water we’re controlling with this valve, chances are it is safer to have the cooling water turn on in the event of a failure than to shut off, the consequences of a machine running without coolant usually being severe. This means we should specify a valve that turns on (opens up) when de-energized and turns off (closes down) when energized. This may seem “backwards” to have the valve set up this way, but it will make for a safer system in the end. One interesting application of fail-safe design is in the power generation and distribution industry, where large circuit breakers need to be opened and closed by electrical control signals from protective relays. If a 50/51 relay (instantaneous and time overcurrent) is going to command a circuit breaker to trip (open) in the event of excessive current, should we design it so that the relay closes a switch contact to send a “trip” signal to the breaker, or opens a switch contact to interrupt a regularly “on” signal to initiate a breaker trip? We know that an open connection will be the most likely to occur, but what is the safest state of the system: breaker open or breaker closed? At first, it would seem that it would be safer to have a large circuit breaker trip (open up and shut off power) in the event of an open fault in the protective relay control circuit, just like we had the fire alarm system default to an alarm state with any switch or wiring failure. However, things are not so simple in the world of high power. To have a large circuit breaker indiscriminately trip open is no small matter, especially when customers are depending on the continued supply of electric power to supply hospitals, telecommunications systems, water treatment systems, and other important infrastructures. For this reason, power system engineers have generally agreed to design protective relay circuits to output a closed contact signal (power applied) to open large circuit breakers, meaning that any open failure in the control wiring will go unnoticed, simply leaving the breaker in the status quo position. Is this an ideal situation? Of course not. 
If a protective relay detects an overcurrent condition while the control wiring is failed open, it will not be able to trip open the circuit breaker. Like the first fire alarm system design, the “silent” failure will be evident only when the system is needed. However, to engineer the control circuitry the other way—so that any open failure would immediately shut the circuit breaker off, potentially blacking out large portions of the power grid—really isn’t a better alternative. An entire book could be written on the principles and practices of good fail-safe system design. At least here, you know a couple of the fundamentals: that wiring tends to fail open more often than shorted, and that an electrical control system’s (open) failure mode should be such that it indicates and/or actuates the real-life process in the safest alternative mode. These fundamental principles extend to non-electrical systems as well: identify the most common mode of failure, then engineer the system so that the probable failure mode places the system in the safest condition. Review • The goal of fail-safe design is to make a control system as tolerant as possible to likely wiring or component failures. • The most common type of wiring and component failure is an “open” circuit, or broken connection. Therefore, a fail-safe system should be designed to default to its safest mode of operation in the case of an open circuit.
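The two fire-alarm designs from this section can be compared directly under an “open” fault. Modeling a broken wire as an open contact shows why the series, normally-closed version fails toward a (safe) false alarm:

```python
# True = switch actuated; a broken wire is modeled as an open contact.
def siren_parallel_no(switches, broken):
    # Original design: NO contacts in parallel. A broken wire silently
    # disables its switch.
    return any(s and not b for s, b in zip(switches, broken))

def siren_series_nc(switches, broken):
    # Fail-safe design: NC contacts in series energize CR1; any actuated
    # switch OR any break de-energizes CR1, whose NC contact sounds the siren.
    cr1 = all((not s) and (not b) for s, b in zip(switches, broken))
    return not cr1

switches = [False, False, False, False]   # no one has pulled an alarm
broken   = [False, True,  False, False]   # switch #2's wiring failed open
print("parallel-NO siren:", siren_parallel_no(switches, broken))  # False: silent failure
print("series-NC siren:  ", siren_series_nc(switches, broken))    # True: false alarm
```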
Before the advent of solid-state logic circuits, logical control systems were designed and built exclusively around electromechanical relays. Relays are far from obsolete in modern design, but have been replaced in many of their former roles as logic-level control devices, relegated most often to those applications demanding high current and/or high voltage switching. Systems and processes requiring “on/off” control abound in modern commerce and industry, but such control systems are rarely built from either electromechanical relays or discrete logic gates. Instead, digital computers fill the need, which may be programmed to do a variety of logical functions. The History of Programmable Logic Controllers In the late 1960’s an American company named Bedford Associates released a computing device they called the MODICON. As an acronym, it meant Modular Digital Controller, and later became the name of a company division devoted to the design, manufacture, and sale of these special-purpose control computers. Other engineering firms developed their own versions of this device, and it eventually came to be known in non-proprietary terms as a PLC, or Programmable Logic Controller. The purpose of a PLC was to directly replace electromechanical relays as logic elements, substituting instead a solid-state digital computer with a stored program, able to emulate the interconnection of many relays to perform certain logical tasks. Ladder Logic and Programming PLCs A PLC has many “input” terminals, through which it interprets “high” and “low” logical states from sensors and switches. It also has many output terminals, through which it outputs “high” and “low” signals to power lights, solenoids, contactors, small motors, and other devices lending themselves to on/off control. In an effort to make PLCs easy to program, their programming language was designed to resemble ladder logic diagrams. Thus, an industrial electrician or electrical engineer accustomed to reading ladder logic schematics would feel comfortable programming a PLC to perform the same control functions. PLCs are industrial computers, and as such their input and output signals are typically 120 volts AC, just like the electromechanical control relays they were designed to replace. Although some PLCs have the ability to input and output low-level DC voltage signals of the magnitude used in logic gate circuits, this is the exception and not the rule. Signal connection and programming standards vary somewhat between different models of PLC, but they are similar enough to allow a “generic” introduction to PLC programming here. The following illustration shows a simple PLC, as it might appear from a front view. Two screw terminals provide connection to 120 volts AC for powering the PLC’s internal circuitry, labeled L1 and L2. Six screw terminals on the left-hand side provide connection to input devices, each terminal representing a different input “channel” with its own “X” label. The lower-left screw terminal is a “Common” connection, which is generally connected to L2 (neutral) of the 120 VAC power source. Inside the PLC housing, connected between each input terminal and the Common terminal, is an opto-isolator device (Light-Emitting Diode) that provides an electrically isolated “high” logic signal to the computer’s circuitry (a photo-transistor interprets the LED’s light) when there is 120 VAC power applied between the respective input terminal and the Common terminal. 
An indicating LED on the front panel of the PLC gives visual indication of an “energized” input: Output signals are generated by the PLC’s computer circuitry activating a switching device (transistor, TRIAC, or even an electromechanical relay), connecting the “Source” terminal to any of the “Y-” labeled output terminals. The “Source” terminal, correspondingly, is usually connected to the L1 side of the 120 VAC power source. As with each input, an indicating LED on the front panel of the PLC gives visual indication of an “energized” output: In this way, the PLC is able to interface with real-world devices such as switches and solenoids. The actual logic of the control system is established inside the PLC by means of a computer program. This program dictates which output gets energized under which input conditions. Although the program itself appears to be a ladder logic diagram, with switch and relay symbols, there are no actual switch contacts or relay coils operating inside the PLC to create the logical relationships between input and output. These are imaginary contacts and coils, if you will. The program is entered and viewed via a personal computer connected to the PLC’s programming port. Consider the following circuit and PLC program: When the pushbutton switch is unactuated (unpressed), no power is sent to the X1 input of the PLC. Following the program, which shows a normally-open X1 contact in series with a Y1 coil, no “power” will be sent to the Y1 coil. Thus, the PLC’s Y1 output remains de-energized, and the indicator lamp connected to it remains dark. If the pushbutton switch is pressed, however, power will be sent to the PLC’s X1 input. Any and all X1 contacts appearing in the program will assume the actuated (non-normal) state, as though they were relay contacts actuated by the energizing of a relay coil named “X1”. In this case, energizing the X1 input will cause the normally-open X1 contact to “close,” sending “power” to the Y1 coil. When the Y1 coil of the program “energizes,” the real Y1 output will become energized, lighting up the lamp connected to it: It must be understood that the X1 contact, Y1 coil, connecting wires, and “power” appearing in the personal computer’s display are all virtual. They do not exist as real electrical components. They exist as commands in a computer program—a piece of software only—that just happens to resemble a real relay schematic diagram. Equally important to understand is that the personal computer used to display and edit the PLC’s program is not necessary for the PLC’s continued operation. Once a program has been loaded to the PLC from the personal computer, the personal computer may be unplugged from the PLC, and the PLC will continue to follow the programmed commands. I include the personal computer display in these illustrations for your sake only, to aid in understanding the relationship between real-life conditions (switch closure and lamp status) and the program’s status (“power” through virtual contacts and virtual coils).

Control System Behavior

The true power and versatility of a PLC is revealed when we want to alter the behavior of a control system. Since the PLC is a programmable device, we can alter its behavior by changing the commands we give it, without having to reconfigure the electrical components connected to it. For example, suppose we wanted to make this switch-and-lamp circuit function in an inverted fashion: push the button to make the lamp turn off, and release it to make it turn on.
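If it helps to see this scan-and-solve behavior in ordinary code, here is a minimal Python sketch of the one-rung program just described. It is purely illustrative; the function name and simulated button states are my own inventions, not any vendor’s PLC programming language:

```python
# A minimal, hypothetical model of the one-rung PLC program above.
# Real PLC firmware is vendor-specific; this only illustrates the idea of a
# scan cycle: read inputs, solve the ladder logic, write outputs, repeat.

def solve_logic(x1):
    """One rung: normally-open X1 contact in series with coil Y1."""
    return x1            # Y1 energizes exactly when input X1 is energized

for pressed in (False, True, False):      # simulated pushbutton states, one per scan
    y1 = solve_logic(pressed)
    print(f"button pressed: {pressed}  ->  lamp {'on' if y1 else 'off'}")

# The inverted behavior described next is a one-line program change:
#     return not x1    # normally-closed X1 contact
```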
The “hardware” solution would require that a normally-closed pushbutton switch be substituted for the normally-open switch currently in place. The “software” solution is much easier: just alter the program so that contact X1 is normally-closed rather than normally-open. In the following illustration, we have the altered system shown in the state where the pushbutton is unactuated (not being pressed): In this next illustration, the switch is shown actuated (pressed): One of the advantages of implementing logical control in software rather than in hardware is that input signals can be re-used as many times in the program as is necessary. For example, take the following circuit and program, designed to energize the lamp if at least two of the three pushbutton switches are simultaneously actuated: To build an equivalent circuit using electromechanical relays, three relays with two normally-open contacts each would have to be used, to provide two contacts per input switch. Using a PLC, however, we can program as many contacts as we wish for each “X” input without adding additional hardware, since each input and each output is nothing more than a single bit in the PLC’s digital memory (either 0 or 1), and can be recalled as many times as necessary. Furthermore, since each output in the PLC is nothing more than a bit in its memory as well, we can assign contacts in a PLC program “actuated” by an output (Y) status. Take for instance this next system, a motor start-stop control circuit: The pushbutton switch connected to input X1 serves as the “Start” switch, while the switch connected to input X2 serves as the “Stop.” Another contact in the program, named Y1, uses the output coil status as a seal-in contact, directly, so that the motor contactor will continue to be energized after the “Start” pushbutton switch is released. You can see the normally-closed contact X2 appear in a colored block, showing that it is in a closed (“electrically conducting”) state. If we were to press the “Start” button, input X1 would energize, thus “closing” the X1 contact in the program, sending “power” to the Y1 “coil,” energizing the Y1 output and applying 120 volt AC power to the real motor contactor coil. The parallel Y1 contact will also “close,” thus latching the “circuit” in an energized state: Now, if we release the “Start” pushbutton, the normally-open X1 “contact” will return to its “open” state, but the motor will continue to run because the Y1 seal-in “contact” continues to provide “continuity” to “power” coil Y1, thus keeping the Y1 output energized: To stop the motor, we must momentarily press the “Stop” pushbutton, which will energize the X2 input and “open” the normally-closed “contact,” breaking continuity to the Y1 “coil:” When the “Stop” pushbutton is released, input X2 will de-energize, returning “contact” X2 to its normal, “closed” state. The motor, however, will not start again until the “Start” pushbutton is actuated, because the “seal-in” of Y1 has been lost:

Fail-safe Design in PLC-Controlled Systems

An important point to make here is that fail-safe design is just as important in PLC-controlled systems as it is in electromechanical relay-controlled systems. One should always consider the effects of failed (open) wiring on the device or devices being controlled. In this motor control circuit example, we have a problem: if the input wiring for X2 (the “Stop” switch) were to fail open, there would be no way to stop the motor!
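A quick way to convince yourself of this hazard is to model the rung as a Boolean assignment evaluated once per scan. The Python sketch below is a hypothetical model, not real PLC code; it shows the seal-in action and then the broken-wire failure:

```python
# Hypothetical model of the motor start-stop rung as programmed above:
#     Y1 = (X1 OR Y1) AND (NOT X2)
# X1 = Start button (N.O.), X2 = Stop button (N.O.), Y1 = motor contactor.

def scan(x1, x2, y1):
    return (x1 or y1) and not x2       # seal-in contact Y1 keeps the rung made

y1 = False
y1 = scan(x1=True,  x2=False, y1=y1)   # press Start   -> motor runs
y1 = scan(x1=False, x2=False, y1=y1)   # release Start -> still runs (sealed in)
print("running:", y1)                  # True

# Now suppose the Stop switch wiring fails open: input X2 can never energize,
# so "pressing" Stop still reads as X2 = False and the motor cannot be stopped.
y1 = scan(x1=False, x2=False, y1=y1)
print("running after broken-wire Stop press:", y1)   # still True
```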
The solution to this problem is a reversal of logic between the X2 “contact” inside the PLC program and the actual “Stop” pushbutton switch: When the normally-closed “Stop” pushbutton switch is unactuated (not pressed), the PLC’s X2 input will be energized, thus “closing” the X2 “contact” inside the program. This allows the motor to be started when input X1 is energized, and allows it to continue to run when the “Start” pushbutton is no longer pressed. When the “Stop” pushbutton is actuated, input X2 will de-energize, thus “opening” the X2 “contact” inside the PLC program and shutting off the motor. So, we see there is no operational difference between this new design and the previous design. However, if the input wiring on input X2 were to fail open, X2 input would de-energize in the same manner as when the “Stop” pushbutton is pressed. The result, then, for a wiring failure on the X2 input is that the motor will immediately shut off. This is a safer design than the one previously shown, where a “Stop” switch wiring failure would have resulted in an inability to turn off the motor. In addition to input (X) and output (Y) program elements, PLCs provide “internal” coils and contacts with no intrinsic connection to the outside world. These are used much the same as “control relays” (CR1, CR2, etc.) are used in standard relay circuits: to provide logic signal inversion when necessary. To demonstrate how one of these “internal” relays might be used, consider the following example circuit and program, designed to emulate the function of a three-input NAND gate. Since PLC program elements are typically designated by single letters, I will call the internal control relay “C1” rather than “CR1” as would be customary in a relay control circuit: In this circuit, the lamp will remain lit so long as any of the pushbuttons remain unactuated (unpressed). To make the lamp turn off, we will have to actuate (press) all three switches, like this:

Advanced PLC Functionality

This section on programmable logic controllers illustrates just a small sample of their capabilities. As computers, PLCs can perform timing functions (for the equivalent of time-delay relays), drum sequencing, and other advanced functions with far greater accuracy and reliability than what is possible using electromechanical logic devices. Most PLCs have the capacity for far more than six inputs and six outputs. The following photograph shows several input and output modules of a single Allen-Bradley PLC. With each module having sixteen “points” of either input or output, this PLC has the ability to monitor and control dozens of devices. Fit into a control cabinet, a PLC takes up little room, especially considering the equivalent space that would be needed by electromechanical relays to perform the same functions:

Remote Monitoring and Control of PLCs Via Digital Computer Networks

One advantage of PLCs that simply cannot be duplicated by electromechanical relays is remote monitoring and control via digital computer networks. Because a PLC is nothing more than a special-purpose digital computer, it has the ability to communicate with other computers rather easily. The following photograph shows a personal computer displaying a graphic image of a real liquid-level process (a pumping, or “lift,” station for a municipal wastewater treatment system) controlled by a PLC. The actual pumping station is located miles away from the personal computer display:
All arithmetic operations performed with Boolean quantities have but one of two possible outcomes: either 1 or 0. There is no such thing as “2” or “-1” or “1/2” in the Boolean world. It is a world in which all other possibilities are invalid by fiat. As one might guess, this is not the kind of math you want to use when balancing a checkbook or calculating current through a resistor. However, Claude Shannon of MIT fame recognized how Boolean algebra could be applied to on-and-off circuits, where all signals are characterized as either “high” (1) or “low” (0).

• 7.1: Introduction to Boolean Algebra
• 7.2: Boolean Arithmetic. In Boolean mathematics, addition is equivalent to the OR logic function, multiplication is equivalent to the AND logic function, and complementation is equivalent to the NOT logic function.
• 7.3: Boolean Algebraic Identities. In mathematics, an identity is a statement true for all possible values of its variable or variables. The algebraic identity of x + 0 = x tells us that anything (x) added to zero equals the original “anything,” no matter what value that “anything” (x) may be. Like ordinary algebra, Boolean algebra has its own unique identities based on the bivalent states of Boolean variables.
• 7.4: Boolean Algebraic Properties. The commutative, associative, and distributive properties apply to Boolean algebra.
• 7.5: Boolean Rules for Simplification. Boolean algebra finds its most practical use in the simplification of logic circuits. If we translate a logic circuit’s function into symbolic (Boolean) form, and apply certain algebraic rules to the resulting equation to reduce the number of terms and/or arithmetic operations, the simplified equation may be translated back into circuit form for a logic circuit performing the same function with fewer components.
• 7.6: Circuit Simplification Examples
• 7.7: The Exclusive-OR Function - The XOR Gate. One element conspicuously missing from the set of Boolean operations is that of Exclusive-OR, often represented as XOR. Whereas the OR function is equivalent to Boolean addition, the AND function to Boolean multiplication, and the NOT function (inverter) to Boolean complementation, there is no direct Boolean equivalent for Exclusive-OR. This hasn’t stopped people from developing a symbol to represent this logic gate, though.
• 7.8: DeMorgan’s Theorems. A mathematician named DeMorgan developed a pair of important rules regarding group complementation in Boolean algebra. By group complementation, I’m referring to the complement of a group of terms, represented by a long bar over more than one variable.
• 7.9: Converting Truth Tables into Boolean Expressions. In designing digital circuits, the designer often begins with a truth table describing what the circuit should do. The design task is largely to determine what type of circuit will perform the function described in the truth table. There are procedural techniques available and Boolean algebra proves its utility in a most dramatic way.

07: Boolean Algebra

Mathematical rules are based on the defining limits we place on the particular numerical quantities dealt with. When we say that 1 + 1 = 2 or 3 + 4 = 7, we are implying the use of integer quantities: the same types of numbers we all learned to count in elementary education. What most people assume to be self-evident rules of arithmetic—valid at all times and for all purposes—actually depend on what we define a number to be.
For instance, when calculating quantities in AC circuits, we find that the “real” number quantities which served us so well in DC circuit analysis are inadequate for the task of representing AC quantities. We know that voltages add when connected in series, but we also know that it is possible to connect a 3-volt AC source in series with a 4-volt AC source and end up with 5 volts total voltage (3 + 4 = 5)! Does this mean the inviolable and self-evident rules of arithmetic have been violated? No, it just means that the rules of “real” numbers do not apply to the kinds of quantities encountered in AC circuits, where every variable has both a magnitude and a phase. Consequently, we must use a different kind of numerical quantity, or object, for AC circuits (complex numbers, rather than real numbers), and along with this different system of numbers comes a different set of rules telling us how they relate to one another. An expression such as “3 + 4 = 5” is nonsense within the scope and definition of real numbers, but it fits nicely within the scope and definition of complex numbers (think of a right triangle with opposite and adjacent sides of 3 and 4, with a hypotenuse of 5). Because complex numbers are two-dimensional, they are able to “add” with one another trigonometrically as single-dimension “real” numbers cannot.

Mathematical Laws and “Fuzzy Logic”

Logic is much like mathematics in this respect: the so-called “Laws” of logic depend on how we define what a proposition is. The Greek philosopher Aristotle founded a system of logic based on only two types of propositions: true and false. His bivalent (two-mode) definition of truth led to the four foundational laws of logic: the Law of Identity (A is A); the Law of Non-contradiction (A is not non-A); the Law of the Excluded Middle (either A or non-A); and the Law of Rational Inference. These so-called Laws function within the scope of logic where a proposition is limited to one of two possible values, but may not apply in cases where propositions can hold values other than “true” or “false.” In fact, much work has been done and continues to be done on “multivalued,” or fuzzy logic, where propositions may be true or false to a limited degree. In such a system of logic, “Laws” such as the Law of the Excluded Middle simply do not apply, because they are founded on the assumption of bivalence. Likewise, many premises which would violate the Law of Non-contradiction in Aristotelian logic have validity in “fuzzy” logic. Again, the defining limits of propositional values determine the Laws describing their functions and relations.

The Birth of Boolean Algebra

The English mathematician George Boole (1815-1864) sought to give symbolic form to Aristotle’s system of logic. Boole wrote a treatise on the subject in 1854, titled An Investigation of the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, which codified several rules of relationship between mathematical quantities limited to one of two possible values: true or false, 1 or 0. His mathematical system became known as Boolean algebra. All arithmetic operations performed with Boolean quantities have but one of two possible outcomes: either 1 or 0. There is no such thing as “2” or “-1” or “1/2” in the Boolean world. It is a world in which all other possibilities are invalid by fiat. As one might guess, this is not the kind of math you want to use when balancing a checkbook or calculating current through a resistor.
However, Claude Shannon of MIT fame recognized how Boolean algebra could be applied to on-and-off circuits, where all signals are characterized as either “high” (1) or “low” (0). His 1938 thesis, titled A Symbolic Analysis of Relay and Switching Circuits, put Boole’s theoretical work to use in a way Boole could never have imagined, giving us a powerful mathematical tool for designing and analyzing digital circuits.

Boolean Algebra vs. “Normal Algebra”

In this chapter, you will find a lot of similarities between Boolean algebra and “normal” algebra, the kind of algebra involving so-called real numbers. Just bear in mind that the system of numbers defining Boolean algebra is severely limited in terms of scope, and that there can only be one of two possible values for any Boolean variable: 1 or 0. Consequently, the “Laws” of Boolean algebra often differ from the “Laws” of real-number algebra, making possible such statements as 1 + 1 = 1, which would normally be considered absurd. Once you comprehend the premise of all quantities in Boolean algebra being limited to the two possibilities of 1 and 0, and the general philosophical principle of Laws depending on quantitative definitions, the “nonsense” of Boolean algebra disappears.

Boolean Numbers vs. Binary Numbers

It should be clearly understood that Boolean numbers are not the same as binary numbers. Whereas Boolean numbers represent an entirely different system of mathematics from real numbers, binary is nothing more than an alternative notation for real numbers. The two are often confused because both Boolean math and binary notation use the same two ciphers: 1 and 0. The difference is that Boolean quantities are restricted to a single bit (either 1 or 0), whereas binary numbers may be composed of many bits adding up in place-weighted form to a value of any finite size. The binary number 10011₂ (“nineteen”) has no more place in the Boolean world than the decimal number 2₁₀ (“two”) or the octal number 32₈ (“twenty-six”).
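A few lines of Python make the distinction concrete, if we let the bitwise OR operator stand in for Boolean addition and treat binary numerals as ordinary integers:

```python
# Boolean sum versus binary (place-weighted) sum of the same ciphers:
print(1 | 1)        # Boolean addition is OR:  1 + 1 = 1
print(0b1 + 0b1)    # binary addition of real numbers: 1 + 1 = 10 (prints 2)
print(0b10011)      # the binary number 10011 is simply nineteen (prints 19)
```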
Let us begin our exploration of Boolean algebra by adding numbers together: The first three sums make perfect sense to anyone familiar with elementary addition. The last sum, though, is quite possibly responsible for more confusion than any other single statement in digital electronics, because it seems to run contrary to the basic principles of mathematics. Well, it does contradict principles of addition for real numbers, but not for Boolean numbers. Remember that in the world of Boolean algebra, there are only two possible values for any quantity and for any arithmetic operation: 1 or 0. There is no such thing as “2” within the scope of Boolean values. Since the sum “1 + 1” certainly isn’t 0, it must be 1 by process of elimination. It does not matter how many or few terms we add together, either. Consider the following sums: Take a close look at the two-term sums in the first set of equations. Does that pattern look familiar to you? It should! It is the same pattern of 1’s and 0’s as seen in the truth table for an OR gate. In other words, Boolean addition corresponds to the logical function of an “OR” gate, as well as to parallel switch contacts: There is no such thing as subtraction in the realm of Boolean mathematics. Subtraction implies the existence of negative numbers: 5 - 3 is the same thing as 5 + (-3), and in Boolean algebra negative quantities are forbidden. There is no such thing as division in Boolean mathematics, either, since division is really nothing more than compounded subtraction, in the same way that multiplication is compounded addition. Multiplication is valid in Boolean algebra, and thankfully it is the same as in real-number algebra: anything multiplied by 0 is 0, and anything multiplied by 1 remains unchanged: This set of equations should also look familiar to you: it is the same pattern found in the truth table for an AND gate. In other words, Boolean multiplication corresponds to the logical function of an “AND” gate, as well as to series switch contacts: Like “normal” algebra, Boolean algebra uses alphabetical letters to denote variables. Unlike “normal” algebra, though, Boolean variables are always CAPITAL letters, never lower-case. Because they are allowed to possess only one of two possible values, either 1 or 0, each and every variable has a complement: the opposite of its value. For example, if variable “A” has a value of 0, then the complement of A has a value of 1. Boolean notation uses a bar above the variable character to denote complementation, like this: In written form, the complement of “A” is denoted as “A-not” or “A-bar”. Sometimes a “prime” symbol is used to represent complementation. For example, A’ would be the complement of A, much the same as using a prime symbol to denote differentiation in calculus rather than the fractional notation d/dt. Usually, though, the “bar” symbol finds more widespread use than the “prime” symbol, for reasons that will become more apparent later in this chapter. Boolean complementation finds equivalency in the form of the NOT gate, or a normally-closed switch or relay contact: The basic definition of Boolean quantities has led to the simple rules of addition and multiplication, and has excluded both subtraction and division as valid arithmetic operations. We have a symbology for denoting Boolean variables, and their complements. In the next section we will proceed to develop Boolean identities.

Review
• Boolean addition is equivalent to the OR logic function, as well as parallel switch contacts.
• Boolean multiplication is equivalent to the AND logic function, as well as series switch contacts.
• Boolean complementation is equivalent to the NOT logic function, as well as normally-closed relay contacts.
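These correspondences are easy to tabulate by machine. The following Python sketch uses the bitwise operators on the values 0 and 1 to stand in for Boolean addition, multiplication, and complementation:

```python
# Tabulate Boolean addition (OR), multiplication (AND), and complement (NOT)
# over the only two Boolean values, 0 and 1.
NOT = lambda a: 1 - a     # complement: A'

print(" A  B | A+B  AB")
for A in (0, 1):
    for B in (0, 1):
        print(f" {A}  {B} |  {A | B}    {A & B}")   # note that 1 + 1 = 1

print("complement of 0:", NOT(0))    # 1
print("complement of 1:", NOT(1))    # 0
```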
In mathematics, an identity is a statement true for all possible values of its variable or variables. The algebraic identity of x + 0 = x tells us that anything (x) added to zero equals the original “anything,” no matter what value that “anything” (x) may be. Like ordinary algebra, Boolean algebra has its own unique identities based on the bivalent states of Boolean variables. The first Boolean identity is that the sum of anything and zero is the same as the original “anything.” This identity is no different from its real-number algebraic equivalent: No matter what the value of A, the output will always be the same: when A=1, the output will also be 1; when A=0, the output will also be 0. The next identity is most definitely different from any seen in normal algebra. Here we discover that the sum of anything and one is one: No matter what the value of A, the sum of A and 1 will always be 1. In a sense, the “1” signal overrides the effect of A on the logic circuit, leaving the output fixed at a logic level of 1. Next, we examine the effect of adding A and A together, which is the same as connecting both inputs of an OR gate to each other and activating them with the same signal: In real-number algebra, the sum of two identical variables is twice the original variable’s value (x + x = 2x), but remember that there is no concept of “2” in the world of Boolean math, only 1 and 0, so we cannot say that A + A = 2A. Thus, when we add a Boolean quantity to itself, the sum is equal to the original quantity: 0 + 0 = 0, and 1 + 1 = 1. Introducing the uniquely Boolean concept of complementation into an additive identity, we find an interesting effect. Since there must be one “1” value between any variable and its complement, and since the sum of any Boolean quantity and 1 is 1, the sum of a variable and its complement must be 1: Just as there are four Boolean additive identities (A+0, A+1, A+A, and A+A’), so there are also four multiplicative identities: A×0, A×1, A×A, and A×A’. Of these, the first two are no different from their equivalent expressions in regular algebra: The third multiplicative identity expresses the result of a Boolean quantity multiplied by itself. In normal algebra, the product of a variable and itself is the square of that variable (3 × 3 = 3² = 9). However, the concept of “square” implies a quantity of 2, which has no meaning in Boolean algebra, so we cannot say that A × A = A². Instead, we find that the product of a Boolean quantity and itself is the original quantity, since 0 × 0 = 0 and 1 × 1 = 1: The fourth multiplicative identity has no equivalent in regular algebra because it uses the complement of a variable, a concept unique to Boolean mathematics. Since there must be one “0” value between any variable and its complement, and since the product of any Boolean quantity and 0 is 0, the product of a variable and its complement must be 0: To summarize, then, we have four basic Boolean identities for addition and four for multiplication: Another identity having to do with complementation is that of the double complement: a variable inverted twice. Complementing a variable twice (or any even number of times) results in the original Boolean value. This is analogous to negating (multiplying by -1) in real-number algebra: an even number of negations cancel to leave the original value:
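Because a Boolean variable can hold only the values 0 and 1, every identity in this section can be proven exhaustively by checking both cases. A short Python sketch, again using bitwise operators to stand in for Boolean addition and multiplication:

```python
# Exhaustively check the Boolean identities from this section.
# With only two possible values per variable, "all cases" is just A in {0, 1}.
NOT = lambda a: 1 - a

for A in (0, 1):
    assert (A | 0) == A            # A + 0 = A
    assert (A | 1) == 1            # A + 1 = 1
    assert (A | A) == A            # A + A = A
    assert (A | NOT(A)) == 1       # A + A' = 1
    assert (A & 0) == 0            # 0A = 0
    assert (A & 1) == A            # 1A = A
    assert (A & A) == A            # AA = A
    assert (A & NOT(A)) == 0       # AA' = 0
    assert NOT(NOT(A)) == A        # double complement
print("all identities hold for A = 0 and A = 1")
```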
The Commutative Property

Another type of mathematical identity, called a “property” or a “law,” describes how differing variables relate to each other in a system of numbers. One of these properties is known as the commutative property, and it applies equally to addition and multiplication. In essence, the commutative property tells us we can reverse the order of variables that are either added together or multiplied together without changing the truth of the expression:

The Associative Property

Along with the commutative properties of addition and multiplication, we have the associative property, again applying equally well to addition and multiplication. This property tells us we can associate groups of added or multiplied variables together with parentheses without altering the truth of the equations.

The Distributive Property

Lastly, we have the distributive property, illustrating how to expand a Boolean expression formed by the product of a sum, and in reverse shows us how terms may be factored out of Boolean sums-of-products: To summarize, here are the three basic properties: commutative, associative, and distributive.

7.05: Boolean Rules for Simplification

Boolean algebra finds its most practical use in the simplification of logic circuits. If we translate a logic circuit’s function into symbolic (Boolean) form, and apply certain algebraic rules to the resulting equation to reduce the number of terms and/or arithmetic operations, the simplified equation may be translated back into circuit form for a logic circuit performing the same function with fewer components. If an equivalent function can be achieved with fewer components, the result will be increased reliability and decreased cost of manufacture. To this end, there are several rules of Boolean algebra presented in this section for use in reducing expressions to their simplest forms. The identities and properties already reviewed in this chapter are very useful in Boolean simplification, and for the most part bear similarity to many identities and properties of “normal” algebra. However, the rules shown in this section are all unique to Boolean mathematics. This rule may be proven symbolically by factoring an “A” out of the two terms, then applying the rules of A + 1 = 1 and 1A = A to achieve the final result: Please note how the rule A + 1 = 1 was used to reduce the (B + 1) term to 1. When a rule like “A + 1 = 1” is expressed using the letter “A”, it doesn’t mean it only applies to expressions containing “A”. What the “A” stands for in a rule like A + 1 = 1 is any Boolean variable or collection of variables. This is perhaps the most difficult concept for new students to master in Boolean simplification: applying standardized identities, properties, and rules to expressions not in standard form. For instance, the Boolean expression ABC + 1 also reduces to 1 by means of the “A + 1 = 1” identity. In this case, we recognize that the “A” term in the identity’s standard form can represent the entire “ABC” term in the original expression. The next rule looks similar to the first one shown in this section, but is actually quite different and requires a more clever proof: Note how the last rule (A + AB = A) is used to “un-simplify” the first “A” term in the expression, changing the “A” into an “A + AB”. While this may seem like a backward step, it certainly helped to reduce the expression to something simpler! Sometimes in mathematics we must take “backward” steps to achieve the most elegant solution.
Knowing when to take such a step and when not to is part of the art-form of algebra, just as a victory in a game of chess almost always requires calculated sacrifices. Another rule involves the simplification of a product-of-sums expression: And, the corresponding proof: To summarize, here are the three new rules of Boolean simplification expounded in this section:
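The three rules referred to here are, in symbolic form, A + AB = A, A + A’B = A + B, and (A + B)(A + C) = A + BC (reconstructed from the proofs described above). Since each involves at most three variables, all three can be verified by brute force over every input combination, as in this Python sketch:

```python
from itertools import product

NOT = lambda x: 1 - x

for A, B, C in product((0, 1), repeat=3):
    assert (A | (A & B)) == A                        # A + AB = A
    assert (A | (NOT(A) & B)) == (A | B)             # A + A'B = A + B
    assert ((A | B) & (A | C)) == (A | (B & C))      # (A + B)(A + C) = A + BC
print("all three simplification rules verified for every input combination")
```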
Let’s begin with a semiconductor gate circuit in need of simplification. The “A,” “B,” and “C” input signals are assumed to be provided from switches, sensors, or perhaps other gate circuits. Where these signals originate is of no concern in the task of gate reduction.

How to Write a Boolean Expression to Simplify Circuits

Our first step in simplification must be to write a Boolean expression for this circuit. This task is easily performed step by step if we start by writing sub-expressions at the output of each gate, corresponding to the respective input signals for each gate. Remember that OR gates are equivalent to Boolean addition, while AND gates are equivalent to Boolean multiplication. For example, I’ll write sub-expressions at the outputs of the first three gates: ...then another sub-expression for the next gate: Finally, the output (“Q”) is seen to be equal to the expression AB + BC(B + C): Now that we have a Boolean expression to work with, we need to apply the rules of Boolean algebra to reduce the expression to its simplest form (simplest defined as requiring the fewest gates to implement): The final expression, B(A + C), is much simpler than the original, yet performs the same function. If you would like to verify this, you may generate a truth table for both expressions and determine Q’s status (the circuits’ output) for all eight logic-state combinations of A, B, and C, for both circuits. The two truth tables should be identical.

Generating Schematic Diagrams from Boolean Expressions

Now, we must generate a schematic diagram from this Boolean expression. To do this, evaluate the expression, following proper mathematical order of operations (multiplication before addition, operations inside parentheses before anything else), and draw gates for each step. Remember again that OR gates are equivalent to Boolean addition, while AND gates are equivalent to Boolean multiplication. In this case, we would begin with the sub-expression “A + C”, which is an OR gate: The next step in evaluating the expression “B(A + C)” is to multiply (AND gate) the signal B by the output of the previous gate (A + C): Obviously, this circuit is much simpler than the original, having only two logic gates instead of five. Such component reduction results in higher operating speed (less delay time from input signal transition to output signal transition), less power consumption, less cost, and greater reliability.

How to Use Boolean Simplification for Electromechanical Relay Circuits

Electromechanical relay circuits, typically being slower, consuming more electrical power to operate, costing more, and having a shorter average life than their semiconductor counterparts, benefit dramatically from Boolean simplification. Let’s consider an example circuit: As before, our first step in reducing this circuit to its simplest form must be to develop a Boolean expression from the schematic. The easiest way I’ve found to do this is to follow the same steps I’d normally follow to reduce a series-parallel resistor network to a single, total resistance. For example, examine the following resistor network with its resistors arranged in the same connection pattern as the relay contacts in the former circuit, and corresponding total resistance formula: Remember that parallel contacts are equivalent to Boolean addition, while series contacts are equivalent to Boolean multiplication.
Write a Boolean expression for this relay contact circuit, following the same order of precedence that you would follow in reducing a series-parallel resistor network to a total resistance. It may be helpful to write a Boolean sub-expression to the left of each ladder “rung,” to help organize your expression-writing: Now that we have a Boolean expression to work with, we need to apply the rules of Boolean algebra to reduce the expression to its simplest form (simplest defined as requiring the fewest relay contacts to implement): The more mathematically inclined should be able to see that the two steps employing the rule “A + AB = A” may be combined into a single step, the rule being expandable to: “A + AB + AC + AD + . . . = A” As you can see, the reduced circuit is much simpler than the original, yet performs the same logical function:

Review
• To convert a gate circuit to a Boolean expression, label each gate output with a Boolean sub-expression corresponding to the gates’ input signals, until a final expression is reached at the last gate.
• To convert a Boolean expression to a gate circuit, evaluate the expression using standard order of operations: multiplication before addition, and operations within parentheses before anything else.
• To convert a ladder logic circuit to a Boolean expression, label each rung with a Boolean sub-expression corresponding to the contacts’ input signals, until a final expression is reached at the last coil or light. To determine proper order of evaluation, treat the contacts as though they were resistors, and as if you were determining total resistance of the series-parallel network formed by them. In other words, look for contacts that are either directly in series or directly in parallel with each other first, then “collapse” them into equivalent Boolean sub-expressions before proceeding to other contacts.
• To convert a Boolean expression to a ladder logic circuit, evaluate the expression using standard order of operations: multiplication before addition, and operations within parentheses before anything else.

7.07: The Exclusive-OR Function - The XOR Gate

What Is an XOR Gate?

One element conspicuously missing from the set of Boolean operations is that of Exclusive-OR, often represented as XOR. Whereas the OR function is equivalent to Boolean addition, the AND function to Boolean multiplication, and the NOT function (inverter) to Boolean complementation, there is no direct Boolean equivalent for Exclusive-OR. This hasn’t stopped people from developing a symbol to represent this logic gate, though: This logic gate symbol is seldom used in Boolean expressions because the identities, laws, and rules of simplification involving addition, multiplication, and complementation do not apply to it. However, there is a way to represent the Exclusive-OR function in terms of OR and AND, as has been shown in previous chapters: AB’ + A’B As a Boolean equivalency, this rule may be helpful in simplifying some Boolean expressions. Any expression following the AB’ + A’B form (two AND gates and an OR gate) may be replaced by a single Exclusive-OR gate.
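Both of the equivalences claimed in these sections, the gate-circuit reduction AB + BC(B + C) = B(A + C) and the Exclusive-OR identity A XOR B = AB’ + A’B, can be checked by generating the truth tables programmatically, as suggested above. A Python sketch:

```python
from itertools import product

NOT = lambda x: 1 - x

for A, B, C in product((0, 1), repeat=3):
    # Gate-circuit example from this section: AB + BC(B + C) reduces to B(A + C)
    original   = (A & B) | (B & C & (B | C))
    simplified = B & (A | C)
    assert original == simplified

for A, B in product((0, 1), repeat=2):
    # Exclusive-OR expressed in AND/OR/NOT form: A XOR B = AB' + A'B
    assert (A ^ B) == ((A & NOT(B)) | (NOT(A) & B))

print("both equivalences verified over all input combinations")
```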
A mathematician named DeMorgan developed a pair of important rules regarding group complementation in Boolean algebra. By group complementation, I’m referring to the complement of a group of terms, represented by a long bar over more than one variable. You should recall from the chapter on logic gates that inverting all inputs to a gate reverses that gate’s essential function from AND to OR, or vice versa, and also inverts the output. So, an OR gate with all inputs inverted (a Negative-OR gate) behaves the same as a NAND gate, and an AND gate with all inputs inverted (a Negative-AND gate) behaves the same as a NOR gate. DeMorgan’s theorems state the same equivalence in “backward” form: that inverting the output of any gate results in the same function as the opposite type of gate (AND vs. OR) with inverted inputs: A long bar extending over the term AB acts as a grouping symbol, and as such is entirely different from the product of A and B independently inverted. In other words, (AB)’ is not equal to A’B’. Because the “prime” symbol (’) cannot be stretched over two variables like a bar can, we are forced to use parentheses to make it apply to the whole term AB in the previous sentence. A bar, however, acts as its own grouping symbol when stretched over more than one variable. This has profound impact on how Boolean expressions are evaluated and reduced, as we shall see. DeMorgan’s theorem may be thought of in terms of breaking a long bar symbol. When a long bar is broken, the operation directly underneath the break changes from addition to multiplication, or vice versa, and the broken bar pieces remain over the individual variables. To illustrate: When multiple “layers” of bars exist in an expression, you may only break one bar at a time, and it is generally easier to begin simplification by breaking the longest (uppermost) bar first. To illustrate, let’s take the expression (A + (BC)’)’ and reduce it using DeMorgan’s Theorems: Following the advice of breaking the longest (uppermost) bar first, I’ll begin by breaking the bar covering the entire expression as a first step: As a result, the original circuit is reduced to a three-input AND gate with the A input inverted: You should never break more than one bar in a single step, as illustrated here: As tempting as it may be to conserve steps and break more than one bar at a time, it often leads to an incorrect result, so don’t do it! It is possible to properly reduce this expression by breaking the short bar first, rather than the long bar first: The end result is the same, but more steps are required compared to using the first method, where the longest bar was broken first. Note how in the third step we broke the long bar in two places. This is a legitimate mathematical operation, and not the same as breaking two bars in one step! The prohibition against breaking more than one bar in one step is not a prohibition against breaking a bar in more than one place. Breaking in more than one place in a single step is okay; breaking more than one bar in a single step is not. You might be wondering why parentheses were placed around the sub-expression B’ + C’, considering the fact that I just removed them in the next step. I did this to emphasize an important but easily neglected aspect of DeMorgan’s theorem. Since a long bar functions as a grouping symbol, the variables formerly grouped by a broken bar must remain grouped lest proper precedence (order of operation) be lost. 
In this example, it really wouldn’t matter if I forgot to put parentheses in after breaking the short bar, but in other cases it might. Consider this example, starting with a different expression: As you can see, maintaining the grouping implied by the complementation bars for this expression is crucial to obtaining the correct answer. Let’s apply the principles of DeMorgan’s theorems to the simplification of a gate circuit: As always, our first step in simplifying this circuit must be to generate an equivalent Boolean expression. We can do this by placing a sub-expression label at the output of each gate, as the inputs become known. Here’s the first step in this process: Next, we can label the outputs of the first NOR gate and the NAND gate. When dealing with inverted-output gates, I find it easier to write an expression for the gate’s output without the final inversion, with an arrow pointing to just before the inversion bubble. Then, at the wire leading out of the gate (after the bubble), I write the full, complemented expression. This helps ensure I don’t forget a complementing bar in the sub-expression, by forcing myself to split the expression-writing task into two steps: Finally, we write an expression (or pair of expressions) for the last NOR gate: Now, we reduce this expression using the identities, properties, rules, and theorems (DeMorgan’s) of Boolean algebra: The equivalent gate circuit for this much-simplified expression is as follows:

Review
• DeMorgan’s Theorems describe the equivalence between gates with inverted inputs and gates with inverted outputs. Simply put, a NAND gate is equivalent to a Negative-OR gate, and a NOR gate is equivalent to a Negative-AND gate.
• When “breaking” a complementation bar in a Boolean expression, the operation directly underneath the break (addition or multiplication) reverses, and the broken bar pieces remain over the respective terms.
• It is often easier to approach a problem by breaking the longest (uppermost) bar before breaking any bars under it. You must never attempt to break two bars in one step!
• Complementation bars function as grouping symbols. Therefore, when a bar is broken, the terms underneath it must remain grouped. Parentheses may be placed around these grouped terms as a help to avoid changing precedence.
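DeMorgan’s theorems, and the worked reduction of (A + (BC)’)’ to A’BC, are easy to confirm exhaustively; a short Python sketch:

```python
from itertools import product

NOT = lambda x: 1 - x

for A, B in product((0, 1), repeat=2):
    assert NOT(A & B) == (NOT(A) | NOT(B))    # (AB)' = A' + B'
    assert NOT(A | B) == (NOT(A) & NOT(B))    # (A + B)' = A'B'

for A, B, C in product((0, 1), repeat=3):
    # Worked example from this section: (A + (BC)')' reduces to A'BC,
    # a three-input AND gate with the A input inverted.
    assert NOT(A | NOT(B & C)) == (NOT(A) & B & C)

print("DeMorgan's theorems and the worked example check out")
```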
In designing digital circuits, the designer often begins with a truth table describing what the circuit should do. The design task is largely to determine what type of circuit will perform the function described in the truth table. While some people seem to have a natural ability to look at a truth table and immediately envision the necessary logic gate or relay logic circuitry for the task, there are procedural techniques available for the rest of us. Here, Boolean algebra proves its utility in a most dramatic way. To illustrate this procedural method, we should begin with a realistic design problem. Suppose we were given the task of designing a flame detection circuit for a toxic waste incinerator. The intense heat of the fire is intended to neutralize the toxicity of the waste introduced into the incinerator. Such combustion-based techniques are commonly used to neutralize medical waste, which may be infected with deadly viruses or bacteria: So long as a flame is maintained in the incinerator, it is safe to inject waste into it to be neutralized. If the flame were to be extinguished, however, it would be unsafe to continue to inject waste into the combustion chamber, as it would exit the exhaust un-neutralized, and pose a health threat to anyone in close proximity to the exhaust. What we need in this system is a sure way of detecting the presence of a flame, and permitting waste to be injected only if a flame is “proven” by the flame detection system. Several different flame-detection technologies exist: optical (detection of light), thermal (detection of high temperature), and electrical conduction (detection of ionized particles in the flame path), each one with its unique advantages and disadvantages. Suppose that due to the high degree of hazard involved with potentially passing un-neutralized waste out the exhaust of this incinerator, it is decided that the flame detection system be made redundant (multiple sensors), so that failure of a single sensor does not lead to an emission of toxins out the exhaust. Each sensor comes equipped with a normally-open contact (open if no flame, closed if flame detected) which we will use to activate the inputs of a logic system: Our task, now, is to design the circuitry of the logic system to open the waste valve if and only if there is good flame proven by the sensors. First, though, we must decide what the logical behavior of this control system should be. Do we want the valve to be opened if only one out of the three sensors detects flame? Probably not, because this would defeat the purpose of having multiple sensors. If any one of the sensors were to fail in such a way as to falsely indicate the presence of flame when there was none, a logic system based on the principle of “any one out of three sensors showing flame” would give the same output that a single-sensor system would with the same failure. A far better solution would be to design the system so that the valve is commanded to open if and only if all three sensors detect a good flame. This way, any single, failed sensor falsely showing flame could not keep the valve in the open position; rather, it would require all three sensors to be failed in the same manner—a highly improbable scenario—for this dangerous condition to occur. 
Thus, our truth table would look like this: It does not require much insight to realize that this functionality could be generated with a three-input AND gate: the output of the circuit will be “high” if and only if input A AND input B AND input C are all “high:” If using relay circuitry, we could create this AND function by wiring three relay contacts in series, or simply by wiring the three sensor contacts in series, so that the only way electrical power could be sent to open the waste valve is if all three sensors indicate flame: While this design strategy maximizes safety, it makes the system very susceptible to sensor failures of the opposite kind. Suppose that one of the three sensors were to fail in such a way that it indicated no flame when there really was a good flame in the incinerator’s combustion chamber. That single failure would shut off the waste valve unnecessarily, resulting in lost production time and wasted fuel (feeding a fire that wasn’t being used to incinerate waste). It would be nice to have a logic system that allowed for this kind of failure without shutting the system down unnecessarily, yet still provide sensor redundancy so as to maintain safety in the event that any single sensor failed “high” (showing flame at all times, whether or not there was one to detect). A strategy that would meet both needs would be a “two out of three” sensor logic, whereby the waste valve is opened if at least two out of the three sensors show good flame. The truth table for such a system would look like this: Here, it is not necessarily obvious what kind of logic circuit would satisfy the truth table. However, a simple method for designing such a circuit is found in a standard form of Boolean expression called the Sum-Of-Products, or SOP, form. As you might suspect, a Sum-Of-Products Boolean expression is literally a set of Boolean terms added (summed) together, each term being a multiplicative (product) combination of Boolean variables. An example of an SOP expression would be something like this: ABC + BC + DF, the sum of products “ABC,” “BC,” and “DF.” Sum-Of-Products expressions are easy to generate from truth tables. All we have to do is examine the truth table for any rows where the output is “high” (1), and write a Boolean product term that would equal a value of 1 given those input conditions. For instance, in the fourth row down in the truth table for our two-out-of-three logic system, where A=0, B=1, and C=1, the product term would be A’BC, since that term would have a value of 1 if and only if A=0, B=1, and C=1: Three other rows of the truth table have an output value of 1, so those rows also need Boolean product expressions to represent them: Finally, we join these four Boolean product expressions together by addition, to create a single Boolean expression describing the truth table as a whole: Now that we have a Boolean Sum-Of-Products expression for the truth table’s function, we can easily design a logic gate or relay logic circuit based on that expression: Unfortunately, both of these circuits are quite complex, and could benefit from simplification. Using Boolean algebra techniques, the expression may be significantly simplified: As a result of the simplification, we can now build much simpler logic circuits performing the same function, in either gate or relay form: Either one of these circuits will adequately perform the task of operating the incinerator waste valve based on a flame verification from two out of the three flame sensors. 
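If you want to check this result without drawing truth tables by hand, the two-out-of-three specification, the raw Sum-Of-Products expression, and the simplified form (which works out to AB + BC + AC) can all be compared in a few lines of Python:

```python
from itertools import product

NOT = lambda x: 1 - x

for A, B, C in product((0, 1), repeat=3):
    two_of_three = (A + B + C) >= 2                   # the plain-English spec
    sop = ((NOT(A) & B & C) | (A & NOT(B) & C)
           | (A & B & NOT(C)) | (A & B & C))          # A'BC + AB'C + ABC' + ABC
    simplified = (A & B) | (B & C) | (A & C)          # AB + BC + AC
    assert two_of_three == bool(sop) == bool(simplified)
print("SOP expression and its simplified form both match the truth table")
```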
At minimum, this is what we need to have a safe incinerator system. We can, however, extend the functionality of the system by adding to it logic circuitry designed to detect if any one of the sensors does not agree with the other two. If all three sensors are operating properly, they should detect flame with equal accuracy. Thus, they should either all register “low” (000: no flame) or all register “high” (111: good flame). Any other output combination (001, 010, 011, 100, 101, or 110) constitutes a disagreement between sensors, and may therefore serve as an indicator of a potential sensor failure. If we added circuitry to detect any one of the six “sensor disagreement” conditions, we could use the output of that circuitry to activate an alarm. Whoever is monitoring the incinerator would then exercise judgment in either continuing to operate with a possible failed sensor (inputs: 011, 101, or 110), or shutting the incinerator down to be absolutely safe. Also, if the incinerator is shut down (no flame), and one or more of the sensors still indicates flame (001, 010, 011, 100, 101, or 110) while the other(s) indicate(s) no flame, it will be known that a definite sensor problem exists. The first step in designing this “sensor disagreement” detection circuit is to write a truth table describing its behavior. Since we already have a truth table describing the output of the “good flame” logic circuit, we can simply add another output column to the table to represent the second circuit, and make a table representing the entire logic system: While it is possible to generate a Sum-Of-Products expression for this new truth table column, it would require six terms, of three variables each! Such a Boolean expression would require many steps to simplify, with a large potential for making algebraic errors: An alternative to generating a Sum-Of-Products expression to account for all the “high” (1) output conditions in the truth table is to generate a Product-Of-Sums, or POS, expression, to account for all the “low” (0) output conditions instead. Since there are far fewer instances of a “low” output in the last truth table column, the resulting Product-Of-Sums expression should contain fewer terms. As its name suggests, a Product-Of-Sums expression is a set of added terms (sums), which are multiplied (product) together. An example of a POS expression would be (A + B)(C + D), the product of the sums “A + B” and “C + D”. To begin, we identify which rows in the last truth table column have “low” (0) outputs, and write a Boolean sum term that would equal 0 for that row’s input conditions. For instance, in the first row of the truth table, where A=0, B=0, and C=0, the sum term would be (A + B + C), since that term would have a value of 0 if and only if A=0, B=0, and C=0: Only one other row in the last truth table column has a “low” (0) output, so all we need is one more sum term to complete our Product-Of-Sums expression. This last sum term represents a 0 output for an input condition of A=1, B=1, and C=1.
Therefore, the term must be written as (A’ + B’ + C’), because only the sum of the complemented input variables would equal 0 for that condition only: The completed Product-Of-Sums expression, of course, is the multiplicative combination of these two sum terms: Whereas a Sum-Of-Products expression could be implemented in the form of a set of AND gates with their outputs connecting to a single OR gate, a Product-Of-Sums expression can be implemented as a set of OR gates feeding into a single AND gate: Correspondingly, whereas a Sum-Of-Products expression could be implemented as a parallel collection of series-connected relay contacts, a Product-Of-Sums expression can be implemented as a series collection of parallel-connected relay contacts: The previous two circuits represent different versions of the “sensor disagreement” logic circuit only, not the “good flame” detection circuit(s). The entire logic system would be the combination of both “good flame” and “sensor disagreement” circuits, shown on the same diagram. Implemented in a Programmable Logic Controller (PLC), the entire logic system might resemble something like this: As you can see, both the Sum-Of-Products and Product-Of-Sums standard Boolean forms are powerful tools when applied to truth tables. They allow us to derive a Boolean expression—and ultimately, an actual logic circuit—from nothing but a truth table, which is a written specification for what we want a logic circuit to do. To be able to go from a written specification to an actual circuit using simple, deterministic procedures means that it is possible to automate the design process for a digital circuit. In other words, a computer could be programmed to design a custom logic circuit from a truth table specification! The steps to take from a truth table to the final circuit are so unambiguous and direct that it requires little, if any, creativity or other original thought to execute them.

Review
• Sum-Of-Products, or SOP, Boolean expressions may be generated from truth tables quite easily, by determining which rows of the table have an output of 1, writing one product term for each row, and finally summing all the product terms. This creates a Boolean expression representing the truth table as a whole.
• Sum-Of-Products expressions lend themselves well to implementation as a set of AND gates (products) feeding into a single OR gate (sum).
• Product-Of-Sums, or POS, Boolean expressions may also be generated from truth tables quite easily, by determining which rows of the table have an output of 0, writing one sum term for each row, and finally multiplying all the sum terms. This creates a Boolean expression representing the truth table as a whole.
• Product-Of-Sums expressions lend themselves well to implementation as a set of OR gates (sums) feeding into a single AND gate (product).
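Again, a brute-force check is straightforward. The Python sketch below confirms that the two-term Product-Of-Sums expression goes low for exactly the 000 and 111 input states and high for the six disagreement states:

```python
from itertools import product

NOT = lambda x: 1 - x

for A, B, C in product((0, 1), repeat=3):
    disagreement = not (A == B == C)                  # any mismatch between sensors
    pos = (A | B | C) & (NOT(A) | NOT(B) | NOT(C))    # (A + B + C)(A' + B' + C')
    assert disagreement == bool(pos)
print("the two-term POS expression flags exactly the six disagreement states")
```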
Why learn about Karnaugh maps?

The Karnaugh map, like Boolean algebra, is a simplification tool applicable to digital logic. See the “Toxic waste incinerator” in the Boolean algebra chapter for an example of Boolean simplification of digital logic. The Karnaugh Map will simplify logic faster and more easily in most cases. Boolean simplification is actually faster than the Karnaugh map for a task involving two or fewer Boolean variables. It is still quite usable at three variables, but a bit slower. At four input variables, Boolean algebra becomes tedious. Karnaugh maps are both faster and easier. Karnaugh maps work well for up to six input variables and are usable for up to eight variables. For more than six to eight variables, simplification should be by CAD (computer automated design). In theory any of the three methods will work. However, as a practical matter, the above guidelines work well. We would not normally resort to computer automation to simplify a three input logic block. We could sooner solve the problem with pencil and paper. However, if we had seven of these problems to solve, say for a BCD (Binary Coded Decimal) to seven segment decoder, we might want to automate the process. A BCD to seven segment decoder generates the logic signals to drive a seven segment LED (light emitting diode) display. Examples of computer automated design languages for simplification of logic are PALASM, ABEL, CUPL, Verilog, and VHDL. These programs accept a hardware description language input file which is based on Boolean equations and produce an output file describing a reduced (or simplified) Boolean solution. We will not require such tools in this chapter. Let’s move on to Venn diagrams as an introduction to Karnaugh maps.

8.02: Venn Diagrams and Sets

Mathematicians use Venn diagrams to show the logical relationships of sets (collections of objects) to one another. Perhaps you have already seen Venn diagrams in your algebra or other mathematics studies. If you have, you may remember overlapping circles and the union and intersection of sets. We will review the overlapping circles of the Venn diagram. We will adopt the terms OR and AND instead of union and intersection since that is the terminology used in digital electronics. The Venn diagram bridges the Boolean algebra from a previous chapter to the Karnaugh Map. We will relate what you already know about Boolean algebra to Venn diagrams, then transition to Karnaugh maps. A set is a collection of objects out of a universe as shown below. The members of the set are the objects contained within the set. The members of the set usually have something in common; though, this is not a requirement. Out of the universe of real numbers, for example, the set of all positive integers {1,2,3…} is a set. The set {3,4,5} is an example of a smaller set, or subset of the set of all positive integers. Another example is the set of all males out of the universe of college students. Can you think of some more examples of sets? Above left, we have a Venn diagram showing the set A in the circle within the universe U, the rectangular area. If everything inside the circle is A, then anything outside of the circle is not A. Thus, above center, we label the rectangular area outside of the circle A as A-not instead of U. We show B and B-not in a similar manner. What happens if both A and B are contained within the same universe? We show four possibilities. Let’s take a closer look at each of the four possibilities as shown above.
The first example shows that set A and set B have nothing in common according to the Venn diagram. There is no overlap between the A and B circular hatched regions. For example, suppose that sets A and B contain the following members: None of the members of set A are contained within set B, nor are any of the members of B contained within A. Thus, there is no overlap of the circles.

In the second example in the above Venn diagram, set A is totally contained within set B. How can we explain this situation? Suppose that sets A and B contain the following members: All members of set A are also members of set B. Therefore, set A is a subset of set B. Since all members of set A are members of set B, set A is drawn fully within the boundary of set B. There is a fifth case, not shown, with the four examples. Hint: it is similar to the last (fourth) example. Draw a Venn diagram for this fifth case.

The third example above shows perfect overlap between set A and set B. It looks like both sets contain the same members. Suppose that sets A and B contain the following: Therefore, sets A and B are identically equal because they both have the same members. The A and B regions within the corresponding Venn diagram above overlap completely. If there is any doubt about what the above patterns represent, refer to any figure above or below to be sure of what the circular regions looked like before they were overlapped.

The fourth example above shows that there is something in common between set A and set B in the overlapping region. For example, we arbitrarily select the following sets to illustrate our point: Set A and set B both have the elements 3 and 4 in common. These elements are the reason for the overlap in the center common to A and B. We need to take a closer look at this situation.
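The four cases above map directly onto the operations of the set type built into most programming languages. Here is a small sketch in Python (our own illustration; the member values are arbitrary stand-ins, not taken from the figures):

# Mirror the four Venn diagram cases with Python's built-in set type.
A1, B1 = {1, 2}, {5, 6}          # case 1: no overlap (disjoint circles)
A2, B2 = {3, 4}, {3, 4, 5, 6}    # case 2: A totally contained within B
A3, B3 = {7, 8}, {7, 8}          # case 3: perfect overlap (identical sets)
A4, B4 = {3, 4, 5}, {3, 4, 6}    # case 4: partial overlap (3 and 4 shared)

print(A1 & B1)    # set()  -- empty intersection, the circles do not touch
print(A2 <= B2)   # True   -- A2 is a subset of B2
print(A3 == B3)   # True   -- same members, so the regions coincide
print(A4 & B4)    # {3, 4} -- the overlapping region common to A and B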
The fourth example has A partially overlapping B. Though, we will first look at the whole of all hatched area below, then later only the overlapping region. Let's assign some Boolean expressions to the regions above, as shown below. Below left there is a red horizontal hatched area for A. There is a blue vertical hatched area for B. If we look at the whole area of both, regardless of the hatch style, the sum total of all hatched areas, we get the illustration above right, which corresponds to the inclusive OR function of A, B. The Boolean expression is A+B. This is shown by the 45° hatched area. Anything outside of the hatched area corresponds to (A+B)-not, as shown above. Let's move on to the next part of the fourth example.

The other way of looking at a Venn diagram with overlapping circles is to look at just the part common to both A and B, the double-hatched area below left. The Boolean expression for this common area corresponding to the AND function is AB, as shown below right. Note that everything outside of double-hatched AB is AB-not. Note that some of the members of A, above, are members of (AB)'. Some of the members of B are members of (AB)'. But none of the members of (AB)' are within the doubly hatched area AB.

We have repeated the second example above left. Your fifth example, which you previously sketched, is provided above right for comparison. Later we will find the occasional element, or group of elements, totally contained within another group in a Karnaugh map. Next, we show the development of a Boolean expression involving a complemented variable below.

Example: (above) Show a Venn diagram for A'B (A-not AND B).

Solution: Starting above top left we have red horizontal shaded A' (A-not), then, top right, B. Next, lower left, we form the AND function A'B by overlapping the two previous regions. Most people would use this as the answer to the example posed. However, only the double-hatched A'B is shown far right for clarity. The expression A'B is the region where both A' and B overlap. The clear region outside of A'B is (A'B)', which was not part of the posed example. Let's try something similar with the Boolean OR function.

Example: Find B'+A

Solution: Above right we start out with B, which is complemented to B'. Finally we overlay A on top of B'. Since we are interested in forming the OR function, we will be looking for all hatched area regardless of hatch style. Thus, A+B' is all hatched area above right. It is shown as a single hatch region below left for clarity.

Example: Find (A+B')'

Solution: The green 45° A+B' hatched area was the result of the previous example. Moving on to the present example, (A+B')', above left, let us find the complement of A+B', which is the white clear area above left corresponding to (A+B')'. Note that we have repeated, at right, the A'B double-hatched result from a previous example for comparison to our result. The regions corresponding to (A+B')' and A'B, above left and right respectively, are identical. This can be proven with DeMorgan's theorem and double negation.

This brings up a point. Venn diagrams don't actually prove anything. Boolean algebra is needed for formal proofs. However, Venn diagrams can be used for verification and visualization. We have verified and visualized DeMorgan's theorem with a Venn diagram.

Example: What does the Boolean expression A'+B' look like on a Venn diagram?

Solution: above figure. Start out with red horizontal hatched A' and blue vertical hatched B' above. Superimpose the diagrams as shown.
We can still see the A' red horizontal hatch superimposed on the other hatch. It also fills in what used to be part of the B (B-true) circle, but only that part of the B open circle not common to the A open circle. If we only look at the B' blue vertical hatch, it fills that part of the open A circle not common to B. Any region with any hatch at all, regardless of type, corresponds to A'+B'. That is, everything but the open white space in the center.

Example: What does the Boolean expression (A'+B')' look like on a Venn diagram?

Solution: above figure, lower left. Looking at the white open space in the center, it is everything NOT in the previous solution of A'+B', which is (A'+B')'.

Example: Show that (A'+B')' = AB

Solution: below figure, lower left. We previously showed on the above right diagram that the white open region is (A'+B')'. On an earlier example we showed a doubly hatched region at the intersection (overlay) of AB. This is the left and middle figures repeated here. Comparing the two Venn diagrams, we see that this open region, (A'+B')', is the same as the doubly hatched region AB (A AND B). We can also prove that (A'+B')' = AB by DeMorgan's theorem and double negation, as shown above.

We show a three-variable Venn diagram above with regions A (red horizontal), B (blue vertical), and C (green 45°). In the very center note that all three regions overlap, representing the Boolean expression ABC. There is also a larger petal-shaped region where A and B overlap, corresponding to the Boolean expression AB. In a similar manner A and C overlap, producing the Boolean expression AC. And B and C overlap, producing the Boolean expression BC. Looking at the size of regions described by AND expressions above, we see that region size varies with the number of variables in the associated AND expression.

• A, 1-variable, is a large circular region.
• AB, 2-variable, is a smaller petal-shaped region.
• ABC, 3-variable, is the smallest region.
• The more variables in the AND term, the smaller the region.
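Beyond Venn diagrams and algebra, a third way to verify an identity such as (A'+B')' = AB is to simply evaluate both sides for every input combination. The sketch below is our own illustration, not part of the original figures:

# Brute-force verification of DeMorgan's theorem for two variables.
def NOT(x):
    return 1 - x

for A in (0, 1):
    for B in (0, 1):
        lhs = NOT(NOT(A) | NOT(B))   # (A' + B')'
        rhs = A & B                  # AB
        assert lhs == rhs            # every row of the truth table matches
        print(A, B, lhs, rhs)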
Starting with circle A in a rectangular A' universe in figure (a) below, we morph a Venn diagram into almost a Karnaugh map. We expand circle A at (b) and (c), conform to the rectangular A' universe at (d), and change A to a rectangle at (e). Anything left outside of A is A'. We assign a rectangle to A' at (f). Also, we do not use shading in Karnaugh maps. What we have so far resembles a 1-variable Karnaugh map, but is of little utility. We need multiple variables.

Figure (a) above is the same as the previous Venn diagram showing A and A', except that the labels A and A' are above the diagram instead of inside the respective regions. Imagine that we have gone through a process similar to figures (a-f) to get a “square Venn diagram” for B and B', as we show in middle figure (b). We will now superimpose the diagrams in figures (a) and (b) to get the result at (c), just like we have been doing for Venn diagrams. The reason we do this is so that we may observe that which may be common to two overlapping regions, say, where A overlaps B. The lower right cell in figure (c) corresponds to AB, where A overlaps B.

We don't waste time drawing a Karnaugh map like (c) above, sketching a simplified version as above left instead. The column of two cells under A' is understood to be associated with A', and the heading A is associated with the column of cells under it. The row headed by B' is associated with the cells to the right of it. In a similar manner B is associated with the cells to the right of it. For the sake of simplicity, we do not delineate the various regions as clearly as with Venn diagrams.

The Karnaugh map above right is an alternate form used in most texts. The names of the variables are listed next to the diagonal line. The A above the diagonal indicates that the variable A (and A') is assigned to the columns. The 0 is a substitute for A', and the 1 substitutes for A. The B below the diagonal is associated with the rows: 0 for B', and 1 for B.

Example: Mark the cell corresponding to the Boolean expression AB in the Karnaugh map above with a 1.

Solution: Shade or circle the region corresponding to A. Then, shade or enclose the region corresponding to B. The overlap of the two regions is AB. Place a 1 in this cell. We do not necessarily enclose the A and B regions as at above left.

We develop a 3-variable Karnaugh map above, starting with Venn diagram-like regions. The universe (inside the black rectangle) is split into two narrow rectangular regions for A' and A. The variables B' and B divide the universe into two square regions. C occupies a square region in the middle of the rectangle, with C' split into two vertical rectangles on each side of the C square. In the final figure, we superimpose all three variables, attempting to clearly label the various regions. The regions are less obvious without color printing, more obvious when compared to the other three figures.

This 3-variable K-map (Karnaugh map) has 2³ = 8 cells, the small squares within the map. Each individual cell is uniquely identified by the three Boolean variables (A, B, C). For example, ABC' uniquely selects the lower right-most cell (*), and A'B'C' selects the upper left-most cell (x). We don't normally label the Karnaugh map as shown above left, though this figure clearly shows map coverage by single Boolean variables of a 4-cell region. Karnaugh maps are labeled like the illustration at right. Each cell is still uniquely identified by a 3-variable product term, a Boolean AND expression.
Take, for example, ABC', following the A row across to the right and the BC' column down, both intersecting at the lower right cell ABC'. See (*) in the above figure.

The above two different forms of a 3-variable Karnaugh map are equivalent, and this is the final form that it takes. The version at right is a bit easier to use, since we do not have to write down so many Boolean alphabetic headers and complement bars, just 1s and 0s. Use the form of the map on the right; you will find the one at left in some texts. The column headers on the left (B'C', B'C, BC, BC') are equivalent to 00, 01, 11, 10 on the right. The row headers A', A are equivalent to 0, 1 on the right map.
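The correspondence between a cell address and its product term can be made mechanical. The short sketch below (our own; the header order is the 00, 01, 11, 10 sequence from the figure) prints every cell of the 3-variable map with its product term:

# Enumerate the 3-variable K-map cells and their product terms.
cols = ["00", "01", "11", "10"]   # BC column headings, in the map's order
rows = ["0", "1"]                 # A row headings: 0 for A', 1 for A

def term(bit, name):
    # a 1 means the true variable, a 0 means the complemented variable
    return name if bit == "1" else name + "'"

for a in rows:
    for bc in cols:
        b, c = bc[0], bc[1]
        print(a + bc, term(a, "A") + term(b, "B") + term(c, "C"))
# The last line printed is cell 110 with product term ABC',
# the lower right cell (*) described above.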
Who Developed the Karnaugh Map? Maurice Karnaugh, a telecommunications engineer, developed the Karnaugh map at Bell Labs in 1953 while designing digital logic-based telephone switching circuits.

The Use of the Karnaugh Map

Now that we have developed the Karnaugh map with the aid of Venn diagrams, let's put it to use. Karnaugh maps reduce logic functions more quickly and easily than Boolean algebra. By reduce we mean simplify, reducing the number of gates and inputs. We like to simplify logic to its lowest-cost form to save money by eliminating components. We define lowest cost as being the lowest number of gates with the lowest number of inputs per gate. Given a choice, most students do logic simplification with Karnaugh maps rather than Boolean algebra once they learn this tool.

We show five individual items above, which are just different ways of representing the same thing: an arbitrary 2-input digital logic function. First is relay ladder logic, then logic gates, a truth table, a Karnaugh map, and a Boolean equation. The point is that any of these are equivalent. Two inputs A and B can take on values of either 0 or 1: high or low, open or closed, True or False, as the case may be. There are 2² = 4 combinations of inputs producing an output. This is applicable to all five examples. These four outputs may be observed on a lamp in the relay ladder logic, or on a logic probe on the gate diagram. These outputs may be recorded in the truth table, or in the Karnaugh map. Look at the Karnaugh map as being a rearranged truth table. The output of the Boolean equation may be computed by the laws of Boolean algebra and transferred to the truth table or Karnaugh map. Which of the five equivalent logic descriptions should we use? The one which is most useful for the task to be accomplished.

The outputs of a truth table correspond on a one-to-one basis to Karnaugh map entries. Starting at the top of the truth table, the A=0, B=0 inputs produce an output α. Note that this same output α is found in the Karnaugh map at the A=0, B=0 cell address, the upper left corner of the K-map where the A=0 row and B=0 column intersect. The other truth table outputs β, χ, δ from inputs AB=01, 10, 11 are found at corresponding K-map locations.

Below, we show the adjacent 2-cell regions in the 2-variable K-map with the aid of the previous rectangular Venn diagram-like Boolean regions. Cells α and χ are adjacent in the K-map as ellipses in the left-most K-map below. Referring to the previous truth table, this is not the case: there is another truth table entry (β) between them. Which brings us to the whole point of organizing the K-map into a square array: cells with any Boolean variables in common need to be close to one another so as to present a pattern that jumps out at us. For cells α and χ they have the Boolean variable B' in common. We know this because B=0 (same as B') for the column above cells α and χ. Compare this to the square Venn diagram above the K-map. A similar line of reasoning shows that β and δ have Boolean B (B=1) in common. Then, α and β have Boolean A' (A=0) in common. Finally, χ and δ have Boolean A (A=1) in common. Compare the last two maps to the middle square Venn diagram. To summarize, we are looking for commonality of Boolean variables among cells. The Karnaugh map is organized so that we may see that commonality. Let's try some examples.

Example: Transfer the contents of the truth table to the Karnaugh map above.

Solution: The truth table contains two 1s; the K-map must have both of them.
• Locate the first 1 in the second row of the truth table above.
• Note the truth table AB address.
• Locate the cell in the K-map having the same address.
• Place a 1 in that cell.

Repeat the process for the 1 in the last line of the truth table.

Example: For the Karnaugh map in the above problem, write the Boolean expression. Solution is below.

Solution: Look for adjacent cells, that is, above or to the side of a cell. Diagonal cells are not adjacent. Adjacent cells will have one or more Boolean variables in common.

• Group (circle) the two 1s in the column.
• Find the variable(s), top and/or side, which are the same for the group. Write this as the Boolean result. It is B in our case.
• Ignore variable(s) which are not the same for a cell group. In our case A varies, is both 1 and 0; ignore Boolean A.
• Ignore any variable not associated with cells containing 1s. B' has no 1s under it; ignore B'.
• Result: Out = B

This might be easier to see by comparing to the Venn diagrams to the right, specifically the B column.

Example: Write the Boolean expression for the Karnaugh map below.

Solution: (above)

• Group (circle) the two 1s in the row.
• Find the variable(s) which are the same for the group: Out = A'

Example: For the truth table below, transfer the outputs to the Karnaugh map, then write the Boolean expression for the result.

Solution: Transfer the 1s from the locations in the truth table to the corresponding locations in the K-map.

• Group (circle) the two 1s in the column under B=1.
• Group (circle) the two 1s in the row right of A=1.
• Write the product term for the first group = B.
• Write the product term for the second group = A.
• Write the Sum-Of-Products of the above two terms: Output = A+B

The solution of the K-map in the middle is the simplest or lowest-cost solution. A less desirable solution is at far right. After grouping the two 1s, we make the mistake of forming a group of a single cell. The reason that this is not desirable is that:

• The single cell has a product term of AB'.
• The corresponding solution is Output = AB' + B.
• This is not the simplest solution.

The way to pick up this single 1 is to form a group of two with the 1 to the right of it, as shown in the lower line of the middle K-map, even though this 1 has already been included in the column group (B). We are allowed to re-use cells in order to form larger groups. In fact, it is desirable because it leads to a simpler result. We need to point out that either of the above solutions, Output or Wrong Output, is logically correct. Both circuits yield the same output. It is a matter of the former circuit being the lowest-cost solution.

Example: Fill in the Karnaugh map for the Boolean expression below, then write the Boolean expression for the result.

Solution: (above) The Boolean expression has three product terms. There will be a 1 entered for each product term. In general, though, the number of 1s per product term varies with the number of variables in the product term compared to the size of the K-map. The product term is the address of the cell where the 1 is entered. The first product term, A'B, corresponds to the 01 cell in the map. A 1 is entered in this cell. The other two p-terms are entered for a total of three 1s.

Next, proceed with grouping and extracting the simplified result as in the previous truth table problem.

Example: Simplify the logic diagram below.
Solution: (Figure below)

• Write the Boolean expression for the original logic diagram as shown below.
• Transfer the product terms to the Karnaugh map.
• Form groups of cells as in previous examples.
• Write the Boolean expression for the groups as in previous examples.
• Draw the simplified logic diagram.

Example: Simplify the logic diagram below.

Solution:

• Write the Boolean expression for the original logic diagram shown above.
• Transfer the product terms to the Karnaugh map.
• It is not possible to form groups.
• No simplification is possible; leave it as it is.

No logic simplification is possible for the above diagram. This sometimes happens. Neither the methods of Karnaugh maps nor Boolean algebra can simplify this logic further. We show an Exclusive-OR schematic symbol above; however, this is not a logical simplification. It just makes a schematic diagram look nicer. Since it is not possible to simplify the Exclusive-OR logic and it is widely used, it is provided by manufacturers as a basic integrated circuit (7486).
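As a closing check on the grouping examples above, we can confirm by brute force that the lowest-cost solution Output = A + B and the logically correct but costlier Wrong Output = AB' + B really are the same function. The sketch is our own, not part of the original figures:

# Verify that both groupings of the same K-map compute the same function.
from itertools import product

for A, B in product((0, 1), repeat=2):
    lowest_cost = A | B                 # Output = A + B (two groups of two)
    wrong_output = (A & (1 - B)) | B    # Output = AB' + B (a 1-cell group)
    assert lowest_cost == wrong_output
print("Both groupings agree on all inputs; A + B is simply cheaper")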
The logic simplification examples that we have done so far could have been performed with Boolean algebra about as quickly. Real-world logic simplification problems call for larger Karnaugh maps so that we may do serious work. We will work some contrived examples in this section, leaving most of the real-world applications for the Combinatorial Logic chapter. By contrived, we mean examples which illustrate techniques. This approach will develop the tools we need to transition to the more complex applications in the Combinatorial Logic chapter.

Karnaugh Maps and Gray Code Sequence

We show our previously developed Karnaugh map. We will use the form on the right. Note the sequence of numbers across the top of the map. It is not in binary sequence, which would be 00, 01, 10, 11. It is 00, 01, 11, 10, which is Gray code sequence. Gray code sequence only changes one binary bit as we go from one number to the next in the sequence, unlike binary. That means that adjacent cells will only vary by one bit, or Boolean variable. This is what we need to organize the outputs of a logic function so that we may view commonality. Moreover, the column and row headings must be in Gray code order, or the map will not work as a Karnaugh map. Cells sharing common Boolean variables would no longer be adjacent, nor show visual patterns. Adjacent cells vary by only one bit because a Gray code sequence varies by only one bit.

Generating Gray Code

If we sketch our own Karnaugh maps, we need to generate Gray code for any size map that we may use. This is how we generate Gray code of any size (a short code sketch of this reflect-and-prefix construction appears below, after the simplification examples). Note that the Gray code sequence, above right, only varies by one bit as we go down the list, or bottom to top up the list. This property of Gray code is often useful for digital electronics in general. In particular, it is applicable to Karnaugh maps.

Examples of Simplification with Karnaugh Maps

Let us move on to some examples of simplification with 3-variable Karnaugh maps. We show how to map the product terms of the unsimplified logic to the K-map. We illustrate how to identify groups of adjacent cells which lead to a Sum-of-Products simplification of the digital logic.

Above, we place the 1s in the K-map for each of the product terms, identify a group of two, then write a p-term (product term) for the sole group as our simplified result.

Mapping the four product terms above yields a group of four covered by Boolean A'.

Mapping the four p-terms yields a group of four, which is covered by one variable, C.

After mapping the six p-terms above, identify the upper group of four, then pick up the lower two cells as a group of four by sharing the two with two more from the other group. Covering these two with a group of four gives a simpler result. Since there are two groups, there will be two p-terms in the Sum-of-Products result: A'+B.

The two product terms above form one group of two and simplify to BC.

Mapping the four p-terms yields a single group of four, which is B.

Mapping the four p-terms above yields a group of four. Visualize the group of four by rolling up the ends of the map to form a cylinder; then the cells are adjacent. We normally mark the group of four as above left. Out of the variables A, B, C, there is a common variable, C': C is 0 over all four cells. The final result is C'.

The six cells above from the unsimplified equation can be organized into two groups of four. These two groups should give us two p-terms in our simplified result of A' + C'.
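To make the reflect-and-prefix recipe concrete, here is the sketch promised above (our own illustration in Python): the list is mirrored at each step, with 0s prefixed to the top half and 1s to the reflected half, so successive entries always differ by exactly one bit.

# Generate an n-bit Gray code sequence by the reflect-and-prefix method.
def gray_code(n):
    codes = [""]
    for _ in range(n):
        # reflect the list; prefix 0 to the originals, 1 to the mirror image
        codes = ["0" + c for c in codes] + ["1" + c for c in reversed(codes)]
    return codes

print(gray_code(2))   # ['00', '01', '11', '10'] -- the K-map headings above
print(gray_code(3))   # eight codes for the rows or columns of larger maps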
Simplifying Boolean Equations with Karnaugh Maps

Below, we revisit the toxic waste incinerator from the Boolean algebra chapter. See the Boolean algebra chapter for details on this example. We will simplify the logic using a Karnaugh map. The Boolean equation for the output has four product terms. Map the four 1s corresponding to the p-terms. Forming groups of cells, we have three groups of two. There will be three p-terms in the simplified result, one for each group. See Converting Truth Tables into Boolean Expressions from chapter 7 for a gate diagram of the result, which is reproduced below.

Below we repeat the Boolean algebra simplification of the toxic waste incinerator for comparison. Below we repeat the toxic waste incinerator Karnaugh map solution for comparison to the above Boolean algebra simplification. This case illustrates why the Karnaugh map is widely used for logic simplification. The Karnaugh map method certainly looks easier than the previous pages of Boolean algebra.
Knowing how to generate Gray code should allow us to build larger maps. Actually, all we need to do is look at the left-to-right sequence across the top of the 3-variable map, and copy it down the left side of the 4-variable map. See below.

Reductions of 4-Variable K-Maps

The following four-variable Karnaugh maps illustrate the reduction of Boolean expressions too tedious for Boolean algebra. Reductions could be done with Boolean algebra. However, the Karnaugh map is faster and easier, especially if there are many logic reductions to do.

The above Boolean expression has seven product terms. They are mapped top to bottom and left to right on the K-map above. For example, the first p-term A'B'CD is the first row, third cell, corresponding to map location A=0, B=0, C=1, D=1. The other product terms are placed in a similar manner. Encircling the largest groups possible, two groups of four are shown above. The dashed horizontal group corresponds to the simplified product term AB. The vertical group corresponds to Boolean CD. Since there are two groups, there will be two product terms in the Sum-Of-Products result of Out=AB+CD.

Fold up the corners of the map below like it is a napkin to make the four cells physically adjacent. The four cells above are a group of four because they all have the Boolean variables B' and D' in common. In other words, B=0 for the four cells, and D=0 for the four cells. The other variables (A, C) are 0 in some cases, 1 in other cases with respect to the four corner cells. Thus, these variables (A, C) are not involved with this group of four. This single group comes out of the map as one product term for the simplified result: Out=B'D'.

For the K-map below, roll the top and bottom edges into a cylinder forming eight adjacent cells. The above group of eight has one Boolean variable in common: B=0. Therefore, the one group of eight is covered by one p-term: B'. The original eight-term Boolean expression simplifies to Out=B'.

P-Terms in 4-Variable K-Maps

The Boolean expression below has nine p-terms, three of which have three Boolean variables instead of four. The difference is that while four-variable product terms cover one cell, the three-variable p-terms cover a pair of cells each. The six product terms of four Boolean variables map in the usual manner above as single cells. The three-variable terms (three of them) map as cell pairs, as shown above. Note that we are mapping p-terms into the K-map, not pulling them out at this point.

For the simplification, we form two groups of eight. Cells in the corners are shared with both groups. This is fine. In fact, this leads to a better solution than forming a group of eight and a group of four without sharing any cells. The final solution is Out=B'+D'.

Below we map the unsimplified Boolean expression to the Karnaugh map. Above, three of the cells form into groups of two cells. A fourth cell cannot be combined with anything, which often happens in “real world” problems. In this case, the Boolean p-term ABCD is unchanged in the simplification process. Result: Out = B'C'D' + A'B'D' + ABCD

Often there is more than one minimum cost solution to a simplification problem. Such is the case illustrated below. Both results above have four product terms of three Boolean variables each. Both are equally valid minimal cost solutions. The difference in the final solution is due to how the cells are grouped, as shown above. A minimal cost solution is a valid logic design with the minimum number of gates with the minimum number of inputs.
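Reductions like the corner group above are easy to double-check by machine. The sketch below is our own (the helper names are ours); it brute-forces all sixteen input combinations to confirm that the four corner cells of the map, which are exactly the cells with B=0 and D=0, reduce to B'D':

# Brute-force equivalence check for a 4-variable reduction.
from itertools import product

def equivalent(f, g):
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=4))

# The four corner cells of the 4-variable map (rows AB = 00, 10 and
# columns CD = 00, 10) as an explicit minterm list:
corners = lambda A, B, C, D: (A, B, C, D) in {(0, 0, 0, 0), (0, 0, 1, 0),
                                              (1, 0, 0, 0), (1, 0, 1, 0)}
bd = lambda A, B, C, D: B == 0 and D == 0   # the claimed reduction B'D'

print(equivalent(corners, bd))   # True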
Below we map the unsimplified Boolean equation as usual and form a group of four as a first simplification step. It may not be obvious how to pick up the remaining cells. Pick up three more cells in a group of four, center above. There are still two cells remaining. The minimal cost method to pick those up is to group them with neighboring cells as groups of four, as at above right.

On a cautionary note, do not attempt to form groups of three. Groupings must be powers of 2, that is, 1, 2, 4, 8, ...

Below we have another example of two possible minimal cost solutions. Start by forming a couple of groups of four after mapping the cells. The two solutions depend on whether the single remaining cell is grouped with the first or the second group of four as a group of two cells. That cell comes out as either ABC' or ABD, your choice. Either way, this cell is covered by one of the two Boolean product terms. Final results are shown above.

Below we have an example of a simplification using the Karnaugh map at left or Boolean algebra at right. Plot C' on the map as the area of all cells covered by address C=0, the eight cells on the left of the map. Then, plot the single ABCD cell. That single cell forms a group of two cells, as shown, which simplifies to the p-term ABD, for an end result of Out = C' + ABD. This (above) is a rare example of a four-variable problem that can be reduced with Boolean algebra without a lot of work, assuming that you remember the theorems.
So far we have been finding Sum-Of-Products (SOP) solutions to logic reduction problems. For each of these SOP solutions, there is also a Product-Of-Sums (POS) solution, which could be more useful, depending on the application. Before working a Product-Of-Sums solution, we need to introduce some new terminology. The procedure below for mapping product terms is not new to this chapter. We just want to establish a formal procedure for minterms for comparison to the new procedure for maxterms.

A minterm is a Boolean expression resulting in 1 for the output of a single cell, and 0s for all other cells in a Karnaugh map, or truth table. If a minterm has a single 1 and the remaining cells as 0s, it would appear to cover a minimum area of 1s. The illustration above left shows the minterm ABC, a single product term, as a single 1 in a map that is otherwise 0s. We have not shown the 0s in our Karnaugh maps up to this point, as it is customary to omit them unless specifically needed. Another minterm, A'BC', is shown above right. The point to review is that the address of the cell corresponds directly to the minterm being mapped. That is, the cell 111 corresponds to the minterm ABC above left. Above right we see that the minterm A'BC' corresponds directly to the cell 010. A Boolean expression or map may have multiple minterms.

Referring to the above figure, let's summarize the procedure for placing a minterm in a K-map:

• Identify the minterm (product term) to be mapped.
• Write the corresponding binary numeric value.
• Use the binary value as an address to place a 1 in the K-map.
• Repeat the steps for other minterms (p-terms within a Sum-Of-Products).

A Boolean expression will more often than not consist of multiple minterms corresponding to multiple cells in a Karnaugh map, as shown above. The multiple minterms in this map are the individual minterms which we examined in the previous figure. The point we review for reference is that the 1s come out of the K-map as a binary cell address which converts directly to one or more product terms. By directly we mean that a 0 corresponds to a complemented variable, and a 1 corresponds to a true variable. Example: 010 converts directly to A'BC'. There was no reduction in this example. Though, we do have a Sum-Of-Products result from the minterms.

Referring to the above figure, let's summarize the procedure for writing the Sum-Of-Products reduced Boolean equation from a K-map:

• Form the largest groups of 1s possible covering all minterms. Groups must be a power of 2.
• Write the binary numeric value for each group.
• Convert the binary value to a product term.
• Repeat the steps for other groups. Each group yields a p-term within a Sum-Of-Products.

Nothing new so far; a formal procedure has been written down for dealing with minterms. This serves as a pattern for dealing with maxterms.

Next we attack the Boolean function which is 0 for a single cell and 1s for all others. A maxterm is a Boolean expression resulting in a 0 for the output of a single cell, and 1s for all other cells in the Karnaugh map, or truth table. The illustration above left shows the maxterm (A+B+C), a single sum term, as a single 0 in a map that is otherwise 1s. If a maxterm has a single 0 and the remaining cells as 1s, it would appear to cover a maximum area of 1s. There are some differences now that we are dealing with something new, maxterms. The maxterm is a 0, not a 1, in the Karnaugh map. A maxterm is a sum term, (A+B+C) in our example, not a product term.
It also looks strange that (A+B+C) is mapped into the cell 000. For the equation Out=(A+B+C)=0, all three variables (A, B, C) must individually be equal to 0. Only (0+0+0)=0 will equal 0. Thus we place our sole 0 for maxterm (A+B+C) in cell A,B,C=000 in the K-map, where the inputs are all 0. This is the only case which will give us a 0 for our maxterm. All other cells contain 1s, because any input values other than (0,0,0) for (A+B+C) yield 1s upon evaluation.

Referring to the above figure, the procedure for placing a maxterm in the K-map is:

• Identify the sum term to be mapped.
• Write the corresponding binary numeric value.
• Form the complement.
• Use the complement as an address to place a 0 in the K-map.
• Repeat for other maxterms (sum terms within a Product-Of-Sums expression).

Another maxterm, A'+B'+C', is shown above. Numeric 000 corresponds to A'+B'+C'. The complement is 111. Place a 0 for maxterm (A'+B'+C') in this cell (1,1,1) of the K-map as shown above. Why should (A'+B'+C') cause a 0 to be in cell 111? When A'+B'+C' is (1'+1'+1'), all 1s in, which is (0+0+0) after taking complements, we have the only condition that will give us a 0. All the 1s are complemented to all 0s, which is 0 when ORed.

A Boolean Product-Of-Sums expression or map may have multiple maxterms as shown above. Maxterm (A+B+C) yields numeric 111, which complements to 000, placing a 0 in cell (0,0,0). Maxterm (A+B+C') yields numeric 110, which complements to 001, placing a 0 in cell (0,0,1).

Now that we have the K-map set up, what we are really interested in is showing how to write a Product-Of-Sums reduction. Form the 0s into groups. That would be a group of two below. Write the binary value corresponding to the sum-term, which is (0,0,X). Both A and B are 0 for the group. But C is both 0 and 1, so we write an X as a place holder for C. Form the complement (1,1,X). Write the sum-term (A+B), discarding the C and the X which held its place. In general, expect to have more sum-terms multiplied together in the Product-Of-Sums result. Though, we have a simple example here.

Let's summarize the procedure for writing the Product-Of-Sums Boolean reduction for a K-map:

• Form the largest groups of 0s possible, covering all maxterms. Groups must be a power of 2.
• Write the binary numeric value for the group.
• Complement the binary numeric value for the group.
• Convert the complement value to a sum-term.
• Repeat the steps for other groups. Each group yields a sum-term within a Product-Of-Sums result.

Example: Simplify the Product-Of-Sums Boolean expression below, providing a result in POS form.

Solution: Transfer the seven maxterms to the map below as 0s. Be sure to complement the input variables in finding the proper cell location. We map the 0s as they appear left to right, top to bottom, on the map above. We locate the last three maxterms with leader lines.

Once the cells are in place above, form groups of cells as shown below. Larger groups will give a sum-term with fewer inputs. Fewer groups will yield fewer sum-terms in the result. We have three groups, so we expect to have three sum-terms in our POS result above. The group of 4-cells yields a 2-variable sum-term. The two groups of 2-cells give us two 3-variable sum-terms. Details are shown for how we arrived at the sum-terms above. For a group, write the binary group input address, then complement it, converting that to the Boolean sum-term. The final result is the product of the three sums.
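Before the next example, here is a small sketch (our own helper, not from the text) of the complement-the-address rule used throughout this section: write the sum term's binary value with 1 for a true variable and 0 for a complemented one, then complement it to find the cell that receives the 0.

# Map a maxterm's numeric value to the K-map cell that gets the 0.
def maxterm_cell(numeric):
    # numeric is a string of '1' (true variable) and '0' (complemented),
    # e.g. "111" for (A+B+C), "000" for (A'+B'+C')
    return "".join("1" if bit == "0" else "0" for bit in numeric)

print(maxterm_cell("111"))   # 000 -- (A+B+C) puts a 0 in cell 000
print(maxterm_cell("000"))   # 111 -- (A'+B'+C') puts a 0 in cell 111
print(maxterm_cell("110"))   # 001 -- (A+B+C') puts a 0 in cell 001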
Example: Simplify the Product-Of-Sums Boolean expression below, providing a result in SOP form.

Solution: This looks like a repeat of the last problem. It is, except that we ask for a Sum-Of-Products solution instead of the Product-Of-Sums which we just finished. Map the maxterm 0s from the Product-Of-Sums given, as in the previous problem, below left. Then fill in the implied 1s in the remaining cells of the map above right. Form groups of 1s to cover all 1s. Then write the Sum-Of-Products simplified result as in the previous section of this chapter. This is identical to a previous problem.

Above we show both the Product-Of-Sums solution, from the previous example, and the Sum-Of-Products solution from the current problem for comparison. Which is the simpler solution? The POS uses 3 OR gates and 1 AND gate, while the SOP uses 3 AND gates and 1 OR gate. Both use four gates each. Taking a closer look, we count the number of gate inputs. The POS uses 8 inputs; the SOP uses 7 inputs. By the definition of minimal cost solution, the SOP solution is simpler. This is an example of a technically correct answer that is of little use in the real world. The better solution depends on complexity and the logic family being used. The SOP solution is usually better if using the TTL logic family, as NAND gates are the basic building block, which works well with SOP implementations. On the other hand, a POS solution would be acceptable when using the CMOS logic family, since all sizes of NOR gates are available. The gate diagrams for both cases are shown above, Product-Of-Sums left, and Sum-Of-Products right.

Below, we take a closer look at the Sum-Of-Products version of our example logic, which is repeated at left. Above, all AND gates at left have been replaced by NAND gates at right. The OR gate at the output is replaced by a NAND gate. To prove that AND-OR logic is equivalent to NAND-NAND logic, move the invert bubbles at the outputs of the three NAND gates to the inputs of the final NAND, as shown in going from above right to below left. Above right we see that the output NAND gate with inverted inputs is logically equivalent to an OR gate by DeMorgan's theorem and double negation. This information is useful in building digital logic in a laboratory setting, where TTL logic family NAND gates are more readily available in a wide variety of configurations than other types.

The procedure for constructing NAND-NAND logic, in place of AND-OR logic, is as follows:

• Produce a reduced Sum-Of-Products logic design.
• When drawing the wiring diagram of the SOP, replace all gates (both AND and OR) with NAND gates.
• Unused inputs should be tied to logic High.
• In case of troubleshooting, internal nodes at the first level of NAND gate outputs do NOT match AND-OR diagram logic levels, but are inverted. Use the NAND-NAND logic diagram. Inputs and final output are identical, though.
• Label any multiple packages U1, U2, etc.
• Use the data sheet to assign pin numbers to the inputs and outputs of all gates.

Example: Let us revisit a previous problem involving an SOP minimization. Produce a Product-Of-Sums solution. Compare the POS solution to the previous SOP.

Solution: Above left we have the original problem, starting with a 9-minterm Boolean unsimplified expression. Reviewing, we formed four groups of 4-cells to yield a 4-product-term SOP result, lower left. In the middle figure, above, we fill in the empty spaces with the implied 0s. The 0s form two groups of 4-cells. The solid blue group is (A'+B), the dashed red group is (C'+D).
This yields two sum-terms in the Product-Of-Sums result, above right: Out = (A'+B)(C'+D). Comparing the previous SOP simplification, left, to the POS simplification, right, shows that the POS is the least cost solution. The SOP uses 5 gates total; the POS uses only 3 gates. This POS solution even looks attractive when using TTL logic due to the simplicity of the result. We can find AND gates and an OR gate with 2 inputs.

The SOP and POS gate diagrams are shown above for our comparison problem. Given the pin-outs for the TTL logic family integrated circuit gates below, label the maxterm diagram above right with circuit designators (U1-a, U1-b, U2-a, etc.) and pin numbers. Each integrated circuit package that we use will receive a circuit designator: U1, U2, U3. To distinguish between the individual gates within the package, they are identified as a, b, c, d, etc. The 7404 hex-inverter package is U1. The individual inverters in it are U1-a, U1-b, U1-c, etc. U2 is assigned to the 7432 quad OR gate. U3 is assigned to the 7408 quad AND gate. With reference to the pin numbers on the package diagram above, we assign pin numbers to all gate inputs and outputs on the schematic diagram below. We can now build this circuit in a laboratory setting. Or, we could design a printed circuit board for it. A printed circuit board contains copper foil “wiring” backed by a nonconductive substrate of phenolic, or epoxy-fiberglass. Printed circuit boards are used to mass-produce electronic circuits. Ground the inputs of unused gates.

Label the previous POS solution diagram above left (third figure back) with circuit designators and pin numbers. This will be similar to what we just did. We can find 2-input AND gates, the 7408, in the previous example. However, we have trouble finding a 4-input OR gate in our TTL catalog. The only kind of gate with four inputs is the 7420 NAND gate shown above right. We can make the 4-input NAND gate into a 4-input OR gate by inverting the inputs to the NAND gate as shown below. So we will use the 7420 4-input NAND gate as an OR gate by inverting the inputs.

We will not use discrete inverters to invert the inputs to the 7420 4-input NAND gate, but will drive it with 2-input NAND gates in place of the AND gates called for in the SOP, minterm, solution. The inversion at the outputs of the 2-input NAND gates supplies the inversion for the 4-input OR gate. The result is shown above. It is the only practical way to actually build it with TTL gates, by using NAND-NAND logic replacing AND-OR logic.
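The NAND-NAND substitution used above can be spot-checked by brute force. The sketch below is our own; the two-group SOP AB + CD is an arbitrary stand-in for the example circuits, chosen only to show the pattern:

# Confirm that AND-OR logic equals NAND-NAND logic on all inputs.
from itertools import product

def nand(x, y):
    return 0 if (x and y) else 1

for a, b, c, d in product((0, 1), repeat=4):
    and_or = (a and b) or (c and d)              # original AND-OR form
    nand_nand = nand(nand(a, b), nand(c, d))     # all gates replaced by NANDs
    assert int(and_or) == nand_nand
print("AND-OR and NAND-NAND forms agree on all 16 inputs")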
For reference, this section introduces the terminology used in some texts to describe the minterms and maxterms assigned to a Karnaugh map. Otherwise, there is no new material here.

Σ (sigma) indicates sum, and lower case “m” indicates minterms. Σm indicates the sum of minterms. The following example is revisited to illustrate our point. Instead of a Boolean equation description of unsimplified logic, we list the minterms. The numbers indicate cell location, or address, within a Karnaugh map as shown below right. This is certainly a compact means of describing a list of minterms or cells in a K-map. The Sum-Of-Products solution is not affected by the new terminology. The minterms, 1s, in the map have been grouped as usual and a Sum-Of-Products solution written.

Below, we show the terminology for describing a list of maxterms. Product is indicated by the Greek Π (pi), and upper case “M” indicates maxterms. ΠM indicates the product of maxterms. The same example illustrates our point. The Boolean equation description of unsimplified logic is replaced by a list of maxterms. Once again, the numbers indicate K-map cell address locations. For maxterms this is the location of 0s, as shown below. A Product-Of-Sums solution is completed in the usual manner.

8.10: Don't Care Cells in the Karnaugh Map

Up to this point we have considered logic reduction problems where the input conditions were completely specified. That is, a 3-variable truth table or Karnaugh map had 2ⁿ = 2³ = 8 entries, a full table or map. It is not always necessary to fill in the complete truth table for some real-world problems. We may have a choice to not fill in the complete table.

For example, when dealing with BCD (Binary Coded Decimal) numbers encoded as four bits, we may not care about any codes above the BCD range of (0, 1, 2…9). The 4-bit binary codes for the hexadecimal numbers (Ah, Bh, Ch, Dh, Eh, Fh) are not valid BCD codes. Thus, we do not have to fill in those codes at the end of a truth table, or K-map, if we do not care to. We would not normally care to fill in those codes because those codes (1010, 1011, 1100, 1101, 1110, 1111) will never exist as long as we are dealing only with BCD encoded numbers. These six invalid codes are don't cares as far as we are concerned. That is, we do not care what output our logic circuit produces for these don't cares.

Don't cares in a Karnaugh map, or truth table, may be either 1s or 0s, as long as we don't care what the output is for an input condition we never expect to see. We plot these cells with an asterisk, *, among the normal 1s and 0s. When forming groups of cells, treat the don't care cell as either a 1 or a 0, or ignore the don't cares. This is helpful if it allows us to form a larger group than would otherwise be possible without the don't cares. There is no requirement to group all or any of the don't cares. Only use them in a group if it simplifies the logic.

Above is an example of a logic function where the desired output is 1 for input ABC = 101 over the range from 000 to 101. We do not care what the output is for the other possible inputs (110, 111). Map those two as don't cares. We show two solutions. The solution on the right, Out = AB'C, is the more complex solution, since we did not use the don't care cells. The solution in the middle, Out = AC, is less complex because we grouped a don't care cell with the single 1 to form a group of two. The third solution, a Product-Of-Sums at far right, results from grouping a don't care with three zeros forming a group of four 0s.
This is the same, less complex, Out=AC. We have illustrated that the don't care cells may be used as either 1s or 0s, whichever is useful.

The electronics class of Lightning State College has been asked to build the lamp logic for a stationary bicycle exhibit at the local science museum. As a rider increases his pedaling speed, lamps will light on a bar graph display. No lamps will light for no motion. As speed increases, the lower lamp, L1, lights, then L1 and L2, then L1, L2, and L3, until all lamps light at the highest speed. Once all the lamps illuminate, no further increase in speed will have any effect on the display.

A small DC generator coupled to the bicycle tire outputs a voltage proportional to speed. It drives a tachometer board which limits the voltage at the high end of speed where all lamps light. No further increase in speed can increase the voltage beyond this level. This is crucial because the downstream A to D (Analog to Digital) converter puts out a 3-bit code, ABC, 2³ = 8 codes, but we only have five lamps. A is the most significant bit, C the least significant bit.

The lamp logic needs to respond to the six codes out of the A to D. For ABC=000, no motion, no lamps light. For the five codes (001 to 101) lamps L1, L1&L2, L1&L2&L3, up to all lamps will light, as speed, voltage, and the A to D code (ABC) increase. We do not care about the response to input codes (110, 111) because these codes will never come out of the A to D due to the limiting in the tachometer block. We need to design five logic circuits to drive the five lamps.

Since none of the lamps light for ABC=000 out of the A to D, enter a 0 in all K-maps for cell ABC=000. Since we don't care about the never-to-be-encountered codes (110, 111), enter asterisks into those two cells in all five K-maps.

Lamp L5 will only light for code ABC=101. Enter a 1 in that cell and five 0s into the remaining empty cells of the L5 K-map.

L4 will light initially for code ABC=100, and will remain illuminated for any code greater, ABC=101, because all lamps below L5 will light when L5 lights. Enter 1s into cells 100 and 101 of the L4 map so that it will light for those codes. Four 0s fill the remaining L4 cells.

L3 will initially light for code ABC=011. It will also light whenever L5 and L4 illuminate. Enter three 1s into cells 011, 100, 101 for the L3 map. Fill three 0s into the remaining L3 cells.

L2 lights for ABC=010 and codes greater. Fill 1s into cells 010, 011, 100, 101, and two 0s in the remaining cells.

The only time L1 is not lighted is for no motion. There is already a 0 in cell ABC=000. All the other five cells receive 1s.

Group the 1s as shown above, using don't cares whenever a larger group results. The L1 map shows three product terms, corresponding to three groups of 4-cells. We used both don't cares in two of the groups and one don't care on the third group. The don't cares allowed us to form groups of four. In a similar manner, the L2 and L4 maps both produce groups of 4-cells with the aid of the don't care cells. The L4 reduction is striking in that the L4 lamp is controlled by the most significant bit from the A to D converter: L4 = A. No logic gates are required for lamp L4. In the L3 and L5 maps, single cells form groups of two with don't care cells. In all five maps, the reduced Boolean equation is less complex than without the don't cares.

The gate diagram for the circuit is above. The outputs of the five K-map equations drive inverters.
Note that the L1 OR gate is not a 3-input gate but a 2-input gate having inputs (A+B) and C, outputting A+B+C. The open-collector inverters, 7406, are desirable for driving LEDs, though they are not part of the K-map logic design. The output of an open-collector gate or inverter is open circuited at the collector internal to the integrated circuit package so that all collector current may flow through an external load. An active high into any of the inverters pulls the output low, drawing current through the LED and the current limiting resistor. The LEDs would likely be part of a solid state relay driving 120 VAC lamps for a museum exhibit, not shown here.
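As a closing check on this section's first don't care example (a sketch of our own, not part of the figures), we can confirm that Out = AB'C and Out = AC agree on every input code that can actually occur, differing only on the don't care codes 110 and 111:

# Compare the two don't care solutions over the inputs that can occur.
dont_cares = {(1, 1, 0), (1, 1, 1)}   # codes the tachometer limiting blocks
valid_inputs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)
                if (a, b, c) not in dont_cares]

for a, b, c in valid_inputs:
    complex_out = a & (1 - b) & c   # Out = AB'C, ignoring the don't cares
    simple_out = a & c              # Out = AC, using a don't care cell
    assert complex_out == simple_out
print("AB'C and AC agree on every input that can actually occur")
# For the impossible input 111 the two differ (0 vs 1), which is exactly
# the freedom the don't care cell gave us.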
Larger Karnaugh maps reduce larger logic designs. How large is large enough? That depends on the number of inputs, fan-ins, to the logic circuit under consideration. One of the large programmable logic companies has an answer. Altera's own data, extracted from its library of customer designs, supports the value of heterogeneity. By examining logic cones, mapping them onto LUT-based nodes and sorting them by the number of inputs that would be best at each node, Altera found that the distribution of fan-ins was nearly flat between two and six inputs, with a nice peak at five. The answer is no more than six inputs for almost all designs, and five inputs for the average logic design.

The five-variable Karnaugh map follows. The older version of the five-variable K-map, a Gray code map or reflection map, is shown above. The top (and side for a 6-variable map) of the map is numbered in full Gray code. The Gray code reflects about the middle of the code. This style of map is found in older texts. The newer, preferred style is below. The overlay version of the Karnaugh map, shown above, is simply two (four for a 6-variable map) identical maps except for the most significant bit of the 3-bit address across the top. If we look at the top of the map, we will see that the numbering is different from the previous Gray code map. If we ignore the most significant digit of the 3-digit numbers, the sequence 00, 01, 11, 10 is at the heading of both sub maps of the overlay map. The sequence of eight 3-digit numbers is not Gray code, though the sequence of the least significant two bits is.

Let's put our 5-variable Karnaugh map to use. Design a circuit which has a 5-bit binary input (A, B, C, D, E), with A being the MSB (Most Significant Bit). It must produce an output logic High for any prime number detected in the input data. We show the solution above on the older Gray code (reflection) map for reference. The numbers to be detected are (1,2,3,5,7,11,13,17,19,23,29,31); note that 1 is included here, although by modern convention it is not counted as prime. Plot a 1 in each corresponding cell. Then, proceed with grouping of the cells. Finish by writing the simplified result. Note that the 4-cell group A'B'E consists of two pairs of cells on both sides of the mirror line. The same is true of the 2-cell group AB'DE: it is a group of two cells by being reflected about the mirror line. When using this version of the K-map, look for mirror images in the other half of the map. Out = A'B'E + B'C'E + A'C'DE + A'CD'E + ABCE + AB'DE + A'B'C'D

Below we show the more common version of the 5-variable map, the overlay map. If we compare the patterns in the two maps, some of the cells in the right half of the map are moved around, since the addressing across the top of the map is different. We also need to take a different approach at spotting commonality between the two halves of the map. Overlay one half of the map atop the other half. Any overlap from the top map to the lower map is a potential group. The figure below shows that group AB'DE is composed of two stacked cells. Group A'B'E consists of two stacked pairs of cells. For the A'B'E group of 4-cells, ABCDE = 00xx1. That is, A, B, and E are 0, 0, and 1 respectively for the group, while CD = xx, that is, it varies; there is no commonality in CD for the group of four cells. Since ABCDE = 00xx1, the group of 4-cells is covered by A'B'XXE = A'B'E. The above 5-variable overlay map is shown stacked.
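The seven-term result above is easy to sanity-check by machine. The sketch below is our own, using Python only as the checker: it evaluates the Sum-Of-Products for all 32 inputs and compares the result against the list of marked cells.

# Verify the 5-variable prime detector SOP against the marked cells.
marked = {1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31}

for n in range(32):
    bits = [(n >> i) & 1 for i in (4, 3, 2, 1, 0)]   # A is the MSB
    A, B, C, D, E = bits
    nA, nB, nC, nD, nE = [1 - x for x in bits]       # complemented inputs
    out = ((nA & nB & E) | (nB & nC & E) | (nA & nC & D & E) |
           (nA & C & nD & E) | (A & B & C & E) | (A & nB & D & E) |
           (nA & nB & nC & D))
    assert bool(out) == (n in marked)
print("Out matches the marked cells for all 32 inputs")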
An example of a six-variable Karnaugh map follows. We have mentally stacked the four sub maps to see the group of 4-cells corresponding to Out = C'F'.

A magnitude comparator (used to illustrate a 6-variable K-map) compares two binary numbers, indicating if they are equal, greater than, or less than each other on three respective outputs. A three-bit magnitude comparator has two inputs, A2A1A0 and B2B1B0. An integrated circuit magnitude comparator (7485) would actually have four inputs, but the Karnaugh map below needs to be kept to a reasonable size. We will only solve for the A>B output.

Below, a 6-variable Karnaugh map aids simplification of the logic for a 3-bit magnitude comparator. This is an overlay type of map. The binary address code across the top and down the left side of the map is not a full 3-bit Gray code, though the 2-bit address codes of the four sub maps are Gray code. Find redundant expressions by stacking the four sub maps atop one another (shown above). There could be cells common to all four maps, though not in the example below. It does have cells common to pairs of sub maps.

The A>B output above is ABC>XYZ on the map below. Wherever ABC is greater than XYZ, a 1 is plotted. In the first line, ABC=000 cannot be greater than any of the values of XYZ. No 1s in this line. In the second line, ABC=001, only the first cell ABCXYZ=001000 has ABC greater than XYZ. A single 1 is entered in the first cell of the second line. The third line, ABC=011, has three 1s. The fourth line, ABC=010, has a pair of 1s. Thus, the map is filled with 1s in any cells where ABC is greater than XYZ.

In grouping cells, form groups with adjacent sub maps if possible. All groups except the one group of 16 cells involve cells from pairs of the sub maps. Look for the following groups:

• 1 group of 16-cells
• 2 groups of 8-cells
• 4 groups of 4-cells

The group of 16-cells, AX', occupies all of the lower right sub map, though we don't circle it on the figure above. One group of 8-cells is composed of a group of 4-cells in the upper sub map overlaying a similar group in the lower left map. The second group of 8-cells is composed of a similar group of 4-cells in the right sub map overlaying the same group of 4-cells in the lower left map.

The four groups of 4-cells are shown on the Karnaugh map above with the associated product terms. Along with the product terms for the two groups of 8-cells and the group of 16-cells, the final Sum-Of-Products reduction is shown, all seven terms. Counting the 1s in the map, there is a total of 16+6+6=28 ones. Before the K-map logic reduction there would have been 28 product terms in our SOP output, each with 6 inputs. The Karnaugh map yielded seven product terms of four or fewer inputs. This is really what Karnaugh maps are all about!

The wiring diagram is not shown. However, here is the parts list for the 3-bit magnitude comparator for ABC>XYZ using four TTL logic family parts:

• 1 ea 7410 triple 3-input NAND gate: AX', ABY', BX'Y'
• 2 ea 7420 dual 4-input NAND gate: ABCZ', ACY'Z', BCX'Z', CX'Y'Z'
• 1 ea 7430 8-input NAND gate for the output of the seven p-terms

Review

• Boolean algebra, Karnaugh maps, and CAD (computer-aided design) are methods of logic simplification. The goal of logic simplification is a minimal cost solution.
• A minimal cost solution is a valid logic reduction with the minimum number of gates with the minimum number of inputs.
• Venn diagrams allow us to visualize Boolean expressions, easing the transition to Karnaugh maps.
Review

• Boolean algebra, Karnaugh maps, and CAD (Computer Aided Design) are methods of logic simplification. The goal of logic simplification is a minimal cost solution.
• A minimal cost solution is a valid logic reduction with the minimum number of gates with the minimum number of inputs.
• Venn diagrams allow us to visualize Boolean expressions, easing the transition to Karnaugh maps.
• Karnaugh map cells are organized in Gray code order so that we may visualize redundancy in Boolean expressions, which results in simplification.
• The more common Sum-Of-Products (Sum of Minterms) expressions are implemented as AND gates (products) feeding a single OR gate (sum).
• Sum-Of-Products expressions (AND-OR logic) are equivalent to a NAND-NAND implementation. All AND gates and OR gates are replaced by NAND gates.
• Less often used, Product-Of-Sums expressions are implemented as OR gates (sums) feeding into a single AND gate (product). Product-Of-Sums expressions are based on the 0s, maxterms, in a Karnaugh map.
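The NAND-NAND equivalence in the review is easy to demonstrate. A small sketch (Python assumed, for illustration only) checks it for a sample two-term Sum-Of-Products expression, Out = AB + CD:

    # Minimal sketch: AND-OR logic equals NAND-NAND logic for Out = AB + CD.
    from itertools import product

    def nand(*inputs):
        return int(not all(inputs))

    for a, b, c, d in product((0, 1), repeat=4):
        and_or = int((a and b) or (c and d))
        nand_nand = nand(nand(a, b), nand(c, d))  # De Morgan's theorem at work
        assert and_or == nand_nand
    print("AB + CD matches its NAND-NAND implementation for all 16 inputs.")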
The term “combinational” comes to us from mathematics. In mathematics, a combination is an unordered set, which is a formal way to say that nobody cares which order the items came in. Most games work this way: if you roll dice one at a time and get a 2 followed by a 3, it is the same as if you had rolled a 3 followed by a 2. With combinational logic, the circuit produces the same output regardless of the order in which the inputs are changed.

There are circuits which depend on when the inputs change; these circuits are called sequential logic. Even though you will not find the term “sequential logic” in the chapter titles, the next several chapters will discuss sequential logic. Practical circuits will have a mix of combinational and sequential logic, with sequential logic making sure everything happens in order and combinational logic performing functions like arithmetic, logic, or conversion.

You have already used combinational circuits. Each logic gate discussed previously is a combinational logic function. Let’s follow how two NAND gates work if we provide them inputs in different orders. We begin with both inputs being 0. We then set one input high. We then set the other input high. So NAND gates do not care about the order of the inputs, and you will find the same true of all the other gates covered up to this point (AND, XOR, OR, NOR, XNOR, and NOT).

9.02: Half-Adder

As a first example of useful combinational logic, let’s build a device that can add two binary digits together. We can quickly calculate what the answers should be: So we will need two inputs (a and b) and two outputs. The low order output will be called Σ because it represents the sum, and the high order output will be called Cout because it represents the carry out. The truth table is

Simplifying the Boolean equations or making a Karnaugh map will produce the same circuit shown below, but start by looking at the results. The Σ column is our familiar XOR gate, while the Cout column is the AND gate. This device is called a half-adder for reasons that will make sense in the next section.

or in ladder logic

9.03: Full-Adder

The half-adder is extremely useful until you want to add quantities of more than one binary digit. The slow way to develop a two-binary-digit adder would be to make a truth table and reduce it. Then when you decide to make a three-binary-digit adder, do it again. Then when you decide to make a four-digit adder, do it again. Then when ... The circuits would be fast, but development time would be slow.

Looking at a two binary digit sum shows what we need to extend addition to multiple binary digits. Look at how many inputs the middle column uses. Our adder needs three inputs: a, b, and the carry from the previous sum, and we can use our two-input adder to build a three-input adder.

Σ is the easy part. Normal arithmetic tells us that if Σ = a + b + Cin and Σ1 = a + b, then Σ = Σ1 + Cin. What do we do with C1 and C2? Let’s look at three-input sums and quickly calculate: If you have any concern about the low order bit, please confirm that the circuit and ladder calculate it correctly. In order to calculate the high order bit, notice that it is 1 in both cases when a + b produces a C1. Also, the high order bit is 1 when a + b produces a Σ1 and Cin is a 1. So we will have a carry when C1 OR (Σ1 AND Cin). Our complete three-input adder is:

For some designs, being able to eliminate one or more types of gates can be important, and you can replace the final OR gate with an XOR gate without changing the results.
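The half-adder and full-adder equations are easy to model in software. Here is a minimal sketch (Python assumed; the function names are illustrative, not from the text) that also confirms the OR-to-XOR substitution mentioned above:

    # Half-adder: sum = a XOR b, carry = a AND b.
    def half_adder(a, b):
        return a ^ b, a & b

    # Full-adder built from two half-adders: carry out = C1 OR (S1 AND Cin).
    def full_adder(a, b, c_in):
        s1, c1 = half_adder(a, b)     # first half-adder: a + b
        s, c2 = half_adder(s1, c_in)  # second half-adder: S1 + Cin
        return s, c1 | c2

    # Exhaustive check, including the note that the final OR may be an XOR:
    # C1 and C2 are never both 1, so OR and XOR give the same carry.
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                s, c_out = full_adder(a, b, c)
                assert 2 * c_out + s == a + b + c
                s1, c1 = half_adder(a, b)
                assert (c1 | (s1 & c)) == (c1 ^ (s1 & c))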
We can now connect two adders to add 2-bit quantities. A0 is the low order bit of A, A1 is the high order bit of A, B0 is the low order bit of B, B1 is the high order bit of B, Σ0 is the low order bit of the sum, Σ1 is the high order bit of the sum, and Cout is the carry.

A two binary digit adder would never be made this way. Instead, the lowest order bits would also go through a full adder. There are several reasons for this, one being that we can then allow a circuit to determine whether the lowest order carry should be included in the sum. This allows for the chaining of even larger sums. Consider two different ways to look at a four bit sum. If we allow the program to add a two bit number and remember the carry for later, then use that carry in the next sum, the program can add any number of bits the user wants, even though we have only provided a two-bit adder. Small PLCs can also be chained together for larger numbers.

These full adders can also be expanded to any number of bits space allows. As an example, here’s how to do an 8-bit adder. This is the same result as using two 2-bit adders to make a 4-bit adder and then using two 4-bit adders to make an 8-bit adder, or re-duplicating ladder logic and updating the numbers. Each “2+” is a 2-bit adder and made of two full adders. Each “4+” is a 4-bit adder and made of two 2-bit adders. And the result of two 4-bit adders is the same 8-bit adder we used full adders to build.

For any large combinational circuit there are generally two approaches to design: you can take simpler circuits and replicate them, or you can design the complex circuit as a complete device. Using simpler circuits to build complex circuits allows you to spend less time designing, but then requires more time for signals to propagate through the transistors. The 8-bit adder design above has to wait for all the Cxout signals to move from A0 + B0 up to the inputs of Σ7. If a designer builds an 8-bit adder as a complete device simplified to a sum of products, then each signal just travels through one NOT gate, one AND gate and one OR gate. A seventeen-input device has a truth table with 131,072 entries, and reducing 131,072 entries to a sum of products will take some time.

When designing for systems that have a maximum allowed response time to provide the final result, you can begin by using simpler circuits and then attempt to replace portions of the circuit that are too slow. That way you spend most of your time on the portions of a circuit that matter.
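Chaining is equally simple in software. Reusing the full_adder sketch above, a hypothetical ripple-carry adder of any width looks like this (again Python, for illustration only):

    # Ripple-carry adder: chain full adders, low order bit first.
    def ripple_adder(a_bits, b_bits, c_in=0):
        # a_bits and b_bits are lists of 0/1 values, index 0 = low order bit.
        carry, sum_bits = c_in, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            sum_bits.append(s)
        return sum_bits, carry

    def to_bits(n, width):
        return [(n >> i) & 1 for i in range(width)]

    # Exhaustive check of an 8-bit adder: all 256 x 256 input pairs.
    for x in range(256):
        for y in range(256):
            s, c = ripple_adder(to_bits(x, 8), to_bits(y, 8))
            assert sum(bit << i for i, bit in enumerate(s)) + (c << 8) == x + y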
A decoder is a circuit that changes a code into a set of signals. It is called a decoder because it does the reverse of encoding, but we will begin our study of encoders and decoders with decoders because they are simpler to design.

Types of Decoders

Line Decoder

A common type of decoder is the line decoder, which takes an n-digit binary number and decodes it into 2^n data lines. The simplest is the 1-to-2 line decoder. The truth table is

A is the address and D is the data line. D0 is NOT A and D1 is A. The circuit looks like

2-to-4 Line Decoder

Only slightly more complex is the 2-to-4 line decoder. The truth table is

Developed into a circuit it looks like

Larger Line Decoders

Larger line decoders can be designed in a similar fashion, but just like with the binary adder there is a way to make larger decoders by combining smaller decoders. An alternate circuit for the 2-to-4 line decoder is

Replacing the 1-to-2 decoders with their circuits will show that both circuits are equivalent. In a similar fashion, a 3-to-8 line decoder can be made from a 1-to-2 line decoder and a 2-to-4 line decoder, and a 4-to-16 line decoder can be made from two 2-to-4 line decoders. You might also consider making a 2-to-4 decoder ladder from 1-to-2 decoder ladders. If you do, it might look something like this:

For some logic it may be required to build up logic like this. For an eight-bit adder we only know how to sum eight bits by summing one bit at a time. Usually it is easier to design ladder logic from Boolean equations or truth tables rather than design logic gates and then “translate” that into ladder logic.

A typical application of a line decoder circuit is to select among multiple devices. A circuit needing to select among sixteen devices could have sixteen control lines to select which device should “listen”. With a decoder, only four control lines are needed.

9.05: Encoder

What is an Encoder?

An encoder is a circuit that changes a set of signals into a code. Let’s begin making a 2-to-1 line encoder truth table by reversing the 1-to-2 decoder truth table. This truth table is a little short. A complete truth table would be

One question we need to answer is what to do with those other inputs. Do we ignore them? Do we have them generate an additional error output? In many circuits, this problem is solved by adding sequential logic in order to know not just what input is active, but also which order the inputs became active.

Encoder Design Applications

A more useful application of combinational encoder design is a binary to 7-segment encoder. The seven segments are given according to:

Our truth table is:

Deciding what to do with the remaining six entries of the truth table is easier with this circuit. This circuit should not be expected to encode an undefined combination of inputs, so we can leave them as “don’t care” when we design the circuit. The equations were simplified with Karnaugh maps.

Equation Collection Summary

The collection of equations is summarized here: The circuit is:

The Resulting Ladder Diagram

And the corresponding ladder diagram:
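Since the truth table and simplified equations live in figures not reproduced here, a behavioral sketch may help. It uses the conventional a–g segment labels and standard digit patterns (assumptions on my part, not necessarily the text’s exact solution), encoding a BCD digit 0–9 to seven segment outputs (Python, illustrative only):

    # Binary (BCD) to 7-segment encoder as a behavioral lookup table.
    # Segment labels a-g are the conventional ones; inputs 10-15 are the
    # "don't care" cases and are simply rejected here.
    SEGMENTS = {
        0: "abcdef", 1: "bc",     2: "abdeg",   3: "abcdg",   4: "bcfg",
        5: "acdfg",  6: "acdefg", 7: "abc",     8: "abcdefg", 9: "abcdfg",
    }

    def encode_7seg(value):
        lit = SEGMENTS[value]  # raises KeyError for the six undefined inputs
        return {seg: int(seg in lit) for seg in "abcdefg"}

    print(encode_7seg(2))  # {'a': 1, 'b': 1, 'c': 0, 'd': 1, 'e': 1, 'f': 0, 'g': 1}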
9.06: Demultiplexers

A demultiplexer, sometimes abbreviated dmux, is a circuit that has one input and more than one output. It is used when a circuit wishes to send a signal to one of many devices. This description sounds similar to the description given for a decoder, but a decoder is used to select among many devices while a demultiplexer is used to send a signal among many devices.

A demultiplexer is used often enough that it has its own schematic symbol. The truth table for a 1-to-2 demultiplexer is

Using our 1-to-2 decoder as part of the circuit, we can express this circuit easily.

This circuit can be expanded two different ways. You can increase the number of signals that get transmitted, or you can increase the number of inputs that get passed through. Increasing the number of inputs that get passed through just requires a larger line decoder. Increasing the number of signals that get transmitted is even easier. As an example, a device that passes one set of two signals among four signals is a “two-bit 1-to-2 demultiplexer”. Its circuit is shown below; redrawing the circuit shows that it could be two one-bit 1-to-2 demultiplexers without changing its expected behavior. A 1-to-4 demultiplexer can easily be built from 1-to-2 demultiplexers as follows.

9.07: Multiplexers

A multiplexer, abbreviated mux, is a device that has multiple inputs and one output. The schematic symbol for multiplexers is

The truth table for a 2-to-1 multiplexer is

Using a 1-to-2 decoder as part of the circuit, we can express this circuit easily. Multiplexers can also be expanded with the same naming conventions as demultiplexers. A 4-to-1 multiplexer circuit is

That is the formal definition of a multiplexer. Informally, there is a lot of confusion. Both demultiplexers and multiplexers have similar names, abbreviations, schematic symbols, and circuits, so confusion is easy. The term multiplexer, and the abbreviation mux, are often used to also mean a demultiplexer, or a multiplexer and a demultiplexer working together. So when you hear about a multiplexer, it may mean something quite different.
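To make the mux/demux distinction concrete, here is a small behavioral sketch (Python, with illustrative names, not from the text): a 2-to-1 multiplexer selects one of two inputs onto a single line, and a 1-to-2 demultiplexer routes that single line back out to one of two outputs.

    # 2-to-1 multiplexer: address a selects d0 or d1 onto the single output.
    def mux_2to1(a, d0, d1):
        return int((d0 and not a) or (d1 and a))

    # 1-to-2 demultiplexer: address a routes input d to output 0 or output 1.
    def demux_1to2(a, d):
        return int(d and not a), int(d and a)

    # Back to back, a mux and a demux pass one of two signals over a
    # single shared line; the unselected output stays at 0.
    for a in (0, 1):
        for d in (0, 1):
            line = mux_2to1(a, d if a == 0 else 0, d if a == 1 else 0)
            outs = demux_1to2(a, line)
            assert outs[a] == d and outs[1 - a] == 0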
9.08: Using Multiple Combinational Circuits

As an example of using several circuits together, we are going to make a device with 16 inputs, representing a four-digit number, driving a four-digit 7-segment display, but using just one binary-to-7-segment encoder. First, the overall architecture of our circuit looks like the description provided. Follow this circuit through and you can confirm that it matches the description given above. There are 16 primary inputs. There are two more inputs used to select which digit will be displayed. There are 28 outputs to control the four-digit 7-segment display. Only four of the primary inputs are encoded at a time.

You may have noticed a potential question though. When one of the digits is selected, what do the other three digits display? Review the circuit for the demultiplexers and notice that any line not selected by the A input is zero. So the other three digits are blank. We don’t have a problem; only one digit displays at a time.

Let’s get a perspective on just how complex this circuit is by looking at the equivalent ladder logic. Notice how quickly this large circuit was developed from smaller parts. This is true of most complex circuits: they are composed of smaller parts, allowing a designer to abstract away some complexity and understand the circuit as a whole. Sometimes a designer can even take components that others have designed and remove the detail design work.

In addition to the added quantity of gates, this design suffers from one additional weakness. You can only see one digit at a time. If there were some way to rotate through the four digits quickly, you could have the appearance of all four digits being displayed at the same time. That is a job for a sequential circuit, which is the subject of the next several chapters.